The different shapes of spin textures as a journey through Brillouin zone chiral and polar symmetries

Crystallographic point group symmetry (CPGS), such as the polar and non-polar crystal classes, has long been used to classify compounds that show spin-orbit-induced spin splitting. While taking a journey through the Brillouin zone (BZ) from one $k$-point to another for a fixed CPGS, the wavevector point group symmetry (WPGS) can change, and consequently the texture of the spin polarization (the expectation value $\vec{S}^{nk_0}$ of the spin operator in the Bloch state $u(n,k)$ of band $n$ at wavevector $k_0$) can change qualitatively. However, the nature of the spin texture (ST) change is generally unsuspected. In this work, we determine a full classification of the linear-in-$k$ spin texture patterns based on the polarity and chirality reflected in the WPGS at $k_0$. The spin-polarization vector $\vec{S}^{nk_0}$ controlling the ST is bound to be parallel to the rotation axes and perpendicular to the mirror planes; hence, the types of symmetry operations in the WPGS impose restrictions on the ST. For instance, the ST is always parallel to the wavevector $k$ in non-polar chiral WPGSs, since they contain only rotational symmetries. One consequence of this ST classification based on the symmetry operations in the WPGS is the observation of ST patterns that are unexpected according to the symmetry of the crystal. For example, it is usually held that the spin-momentum locking effect requires breaking of the crystal inversion symmetry by an asymmetric electric potential. However, we find that polar WPGSs can produce this effect even in compounds without electric dipoles or external electric fields. We use the determined relation between WPGS and ST as a design principle to select compounds with multiple STs near band edges at different $k$-valleys.
Based on high-throughput calculations for 1481 compounds, we find 37 previously fabricated materials with different STs near the band edges.

I. Introduction

Whereas symmetry generally allows or forbids numerous effects in solid-state and molecular science, a recurring question is which aspect of group symmetry is responsible for a given type of phenomenon. For example, the symmetries contained in the crystallographic point group symmetry (CPGS) establish the enabling conditions for macroscopic properties such as electric polarization [1], magnetization [2], circular dichroism [3], and pyroelectricity [4]. The CPGS is, however, insufficient to universally describe all material properties in crystals. In fact, wavevector-dependent effects are enabled by other elements of symmetry, such as the wavevector point group symmetry (WPGS), the little point group of a specific wavevector $k_0$ in the Brillouin zone (BZ). For instance, the symmetry protection of exotic fermions [5,6] and energy band anti-crossing [7] depend on the WPGS. The Zeeman-type spin splitting (SS) [8,9] is an example of an enabling physical mechanism that seems to contradict the macroscopic crystal symmetry. Specifically, in contrast to the Zeeman effect in magnetic compounds, the Zeeman-type effect is observed in non-magnetic compounds (i.e., CPGS preserving time-reversal symmetry) but at $k$-points whose WPGS breaks the time-reversal (TR) symmetry. Naturally, effects enabled by the CPGS are allowed at those special wavevectors at which the two symmetries coincide (WPGS=CPGS); other wavevectors, however, can lead to very different effects. Overlooking the distinction between the physics enabled by the WPGS and that enabled by the CPGS has often created an incompleteness in the symmetry classification of spin-related phenomena and their wavevector dependence. A curious historical development in this regard has been the association of the CPGS with the 'texture' of the spin polarization $\vec{S}^{nk_0}$
- the expectation values of the spin operator in a given Bloch wavefunction $u(n,k)$ centered at a specific $k_0$, with $n$ referring to a Bloch band. Specifically, Figure 1 illustrates the different shapes of spin texture (ST) that have been observed, including the radial ST ($\vec{S}^{nk} \parallel k$) [10-12], the tangential ST ($\vec{S}^{nk} \perp k$) [13-15], and the tangential-radial ST [16,17]. These observations were generally established for highly specific wavevectors $k_0$ that satisfy WPGS=CPGS, e.g., the Γ point in GaAs (F43m) [16] and the Z point in GeTe (R3m) [13,14]. For this reason, ST shapes were often associated with the presence or absence of crystallographic inversion symmetry in the CPGS, rather than with the WPGS of the specific wavevector $k_0$. Furthermore, when different ST shapes were noted in different functional materials, it was tempting to associate the specific ST with the particular underlying functionality. For instance, the observation of tangential ST in some ferroelectrics has been associated with the physics of electric polarization [14,18,19]; however, not all ferroelectrics have such an ST type [20]. Similarly, the observation of tangential ST in some topological insulators has been associated with their topological character; however, normal insulators can also have this very same ST [21]. In $k \cdot p$ effective Hamiltonians $\mathcal{H}(k \to k_0)$ for wavevectors around the origin $k_0$ of the expansion, the ST is properly determined by the little point group of the wavevector $k_0$ [22,23]. However, despite extensive applications of $k \cdot p$ effective Hamiltonians to study STs at particular wavevectors $k_0$, associations of the resulting ST behavior with the crystallographic symmetry CPGS rather than the wavevector symmetry WPGS abound.
For example, the tangential ST (Fig. 1a) seen in bulk compounds is often associated with the Rashba physics of breaking the crystallographic inversion symmetry via asymmetric electric potentials (i.e., an electric field or a bulk electric polarization) [24], rather than with the WPGS of the particular wavevector studied. Although it is properly expected that, while taking a journey through the BZ from one point $k_0$ to another, the wavevector symmetry and thus the ST might qualitatively change, the nature of the change is generally unsuspected. This position was clearly expressed in a recent paper studying the radial spin texture of Weyl fermions in chiral tellurium [11], concluding that "A full classification of the spin vector field geometry is beyond the scope of this study, and it will be the subject of a future investigation."

Figure 1: Schematic representation of the (a) tangential texture $\vec{S}^{nk} \perp k$, (b) radial texture $\vec{S}^{nk} \parallel k$, and (c) tangential-radial texture in a two-dimensional plane normal to a rotation axis $C_n$.

The present paper discusses a direct resolution of the classification problem of STs. We show that the enabling symmetries underlying the ST types are not a reflection of material functionalities, nor are they caused by the presence or absence of polar fields in the CPGS [24]. Instead, ST shapes are caused by a symmetry principle cutting across different material functionalities: the existence of specific proper (rotation) and improper (reflection and inversion) symmetries in the wavevector point group symmetry (a subgroup of the CPGS). Specifically, we show that the spin-polarization vector $\vec{S}^{nk_0}$ controlling the ST is bound to be parallel to the rotation axes and perpendicular to the mirror planes. This imposes a prediction on the ensuing ST patterns according to the polarity and chirality of the WPGS at $k_0$. For example, for $k_0$ with a non-polar chiral WPGS, having more than one rotation symmetry and no mirror symmetry, we expect $\vec{S}^{nk_0}$
to be parallel to the rotation axes, i.e., that $\vec{S}^{nk} \parallel k$. Here, chiral (non-chiral) point groups have only proper (both proper and improper) symmetries, while polar (non-polar) point groups have one (more than one) rotation axis. Thus, a journey throughout the Brillouin zone of a fixed compound reveals ST types corresponding to the different rotation and reflection symmetries of the little group of $k$. For instance, we expect (and confirm) that compounds with non-polar crystallographic symmetry (i.e., without electric dipoles or electric fields) can show the tangential ST (i.e., Rashba-like ST) at wavevectors $k_0$ whose WPGS is polar. This illustrates that the breaking of inversion symmetry mediated by an electric dipole or external electric field, often taken as the defining feature of the Rashba effect, is not a necessary condition for the Rashba ST (Fig. 1), contrary to what is generally assumed from the macroscopic CPGS and used to investigate the formation of spin spirals [24-27] in different classes of materials. Understanding the association of the ST with the WPGS (rather than with the wavevector-independent CPGS) could be a productive basis for the design of compounds with a target ST and its control by accessing different valleys in the BZ. Thus, we explore the potential application to spin-valleytronics of the proposed journey throughout the BZ to access multiple ST types in the same compound. The application of our study is based on the inverse design approach [28] for the selection of compounds with single and co-functionalities [7,20]. Contrary to the "direct approach" based on calculations for all possible material candidates, the inverse design first aims at establishing the physical mechanisms (or design principles) behind the target property, i.e., compounds with multiple ST shapes in the BZ.
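The polarity and chirality definitions above can be made concrete computationally. The following is a minimal numpy sketch (our illustration, not part of the paper's workflow) that represents a point group by its 3x3 operation matrices and classifies it: chiral if every operation is proper (determinant +1), and polar if the proper rotations share at most one axis, which is the working definition used in the text. The generator choices are the standard ones; the helper names are our own.

```python
import numpy as np

def rot_z(theta):
    """Proper rotation about the z axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def close_group(generators):
    """Multiply generators until the set of matrices stops growing."""
    ops = [np.eye(3)]
    grew = True
    while grew:
        grew = False
        for a in list(ops):
            for g in generators:
                new = a @ g
                if not any(np.allclose(new, b) for b in ops):
                    ops.append(new)
                    grew = True
    return ops

def is_chiral(ops):
    # Chiral (Sohncke) point groups contain only proper operations, det = +1.
    return all(np.isclose(np.linalg.det(o), 1.0) for o in ops)

def rotation_axes(ops):
    """Distinct axes (up to sign) of the non-identity proper rotations."""
    axes = []
    for o in ops:
        if not np.isclose(np.linalg.det(o), 1.0) or np.allclose(o, np.eye(3)):
            continue
        w, v = np.linalg.eig(o)
        axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        axis /= np.linalg.norm(axis)
        if not any(np.allclose(axis, a) or np.allclose(axis, -a) for a in axes):
            axes.append(axis)
    return axes

def is_polar(ops):
    # Working definition used in the text: a single rotation axis at most.
    return len(rotation_axes(ops)) <= 1

C3z = rot_z(2.0 * np.pi / 3.0)
sigma_v = np.diag([1.0, -1.0, 1.0])   # vertical mirror (the xz plane)
C2x = np.diag([1.0, -1.0, -1.0])      # two-fold rotation about x

C3 = close_group([C3z])               # polar chiral
C3v = close_group([C3z, sigma_v])     # polar non-chiral
D3 = close_group([C3z, C2x])          # non-polar chiral
```

For these three groups the classifier reproduces the categories used in the text: $C_3$ is polar and chiral, $C_{3v}$ is polar and non-chiral, and $D_3$ is non-polar and chiral.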
In the second step of this approach, the design principles are used as filters for the screening of compounds from known materials databases, e.g., the AFLOW-ICSD database [29]. In the first step, for the BZs of 3D non-centrosymmetric Bravais lattices, we apply the relations between the ST shapes and the WPGS polarity and chirality, determining all symmetrically allowed WPGSs in all 21 non-centrosymmetric CPGSs (i.e., 139 crystallographic space groups). Only polar non-chiral, non-polar chiral, and non-polar non-chiral CPGSs allow high-symmetry $k$-points with different STs. Based on the CPGS and WPGS, we select 1481 fabricated compounds and perform high-throughput DFT band structure calculations for them. Focusing on band structures in which the relative energy at different valleys is smaller than 100 meV and the SS is larger than 1 meV, we identify 37 compounds with multiple ST shapes. Examples include the non-polar chiral SiO$_2$ (P6$_5$22), CPGS=$D_6$, which has a radial ST at the high-symmetry point A (WPGS=CPGS) and a tangential-radial ST at the high-symmetry point H (WPGS=$D_3$). The SS at these $k$-points is 26 and 6 meV, respectively. The proposed classification of the ST based on the WPGS and the compounds selected in the inverse design process are a potential platform for spin-valleytronics applications.

II. Classic $k \cdot p$ Hamiltonians used to provide the ST type for the special case of WPGS=CPGS

We next illustrate three linear-in-$k$ relativistic Hamiltonians (Rashba, Weyl, and Dresselhaus) set at a specific wavevector $k_0$ with WPGS equal to the crystallographic CPGS, which is not representative of other parts of the BZ, as summarized in Table I. Figure 2 shows the DFT-calculated STs at different wavevectors $k_0$ with both WPGS=CPGS and WPGS≠CPGS for the representative compounds described in Table I. In 1984, Bychkov and E.
Rashba established [30-32] that "if a crystal has a single high-symmetry axis (at least threefold)", i.e., for crystals with a polar CPGS, spin bands are described by the linear-in-$k$ spin-orbit coupling (SOC) Hamiltonian $\mathcal{H}_R = \alpha_R (k_x \sigma_y - k_y \sigma_x)$, where the $z$ component of the momentum is set along the high-symmetry axis and $\sigma_i$ are the Pauli matrices. The Hamiltonian $\mathcal{H}_R$ was historically used to study both two-dimensional compounds with perpendicular electric fields and heterojunctions with interfacial electric dipoles. In these systems, only wavevectors $k_0$ with WPGS=CPGS (e.g., $k_0 = \Gamma$) satisfy the Hamiltonian $\mathcal{H}_R$. The diagonalization of $\mathcal{H}_R$ leads to the tangential ST (i.e., $\vec{S}^{nk}$ always perpendicular to the momentum, $\vec{S}^{nk} \perp k$), which is usually referred to as the Rashba ST or the spin-momentum locking effect. In the three-dimensional analogue, i.e., the bulk Rashba effect, compounds with a polar CPGS (e.g., BiTeI and GeTe (R3m)) [13,15,33] have an intrinsic non-zero electric dipole that effectively plays the role of the interfacial dipole in heterojunctions. As shown in the first line of Table I, in GeTe (R3m) the BZ wavevector $k_0$ = Z has the Rashba-like ST, as illustrated via relativistic DFT calculations in Fig. 2b. This spin-momentum locking effect is also observed in the surface states near the Γ point of the BZ of three-dimensional topological insulators [34]. For compounds with a non-polar CPGS, in 1955 G. Dresselhaus determined [38] that for $k_0 = \Gamma$ (WPGS=CPGS), spin bands are described by a Hamiltonian in which $J_i$ are the components of the total angular momentum operator and the wavevector is fixed along a rotation axis $C_n$ in the BZ. In the plane normal to $C_n$, the linear-in-$k$ Hamiltonian is given by the Dresselhaus term $\mathcal{H}_D = \alpha_D (k_x \sigma_x - k_y \sigma_y)$. The diagonalization of $\mathcal{H}_D$ leads to the tangential-radial ST (i.e., $\vec{S}^{nk} \propto (k_x, -k_y, 0)$), which is usually known as the Dresselhaus ST. In the third line of Table I, we present GaAs (F43m) with non-polar CPGS $T_d$, at $k_0 = \Gamma$ (WPGS=$T_d$
), as a historical example of the Dresselhaus ST, as illustrated via quantitative relativistic DFT calculations in Fig. 2h.

III. $k$-dependent effects are reflected in the WPGS whereas macroscopic symmetry effects are reflected in the CPGS

A journey through the wavevectors in a BZ can visit symmetries different from the macroscopic CPGS. Specifically, for each CPGS there exists another layer of symmetries at particular wavevectors $k_0$ (shown in Table I) in the corresponding BZ [39-41]. This layer of symmetries consists of subgroups of the CPGS and enables specific momentum- and band-dependent properties of the crystal, such as band crossing, anti-crossing [42], topological band inversion [43-45], and topological protection [46]. Inspection of the STs at other $k$-points reveals patterns that are not predicted by the CPGS. For instance, for the traditional ferroelectric compound GeTe (R3m) with a Rashba ST at $k_0$ = Z (polar WPGS and CPGS) [47], Fig. 2c shows the ST obtained from the same relativistic calculation as in Fig. 2b, but this time at another wavevector (first line in Table I). The ST reveals a pattern that is not a Rashba-like ST (Fig. 2c). Likewise, in bulk Te, Fig. 2e presents the traditional Weyl ST at $k_0$ = A, which is deformed at another wavevector, showing an apparently undefined pattern (Fig. 2f). We discuss this undefined pattern in the next sections. Finally, for the non-polar compound GaAs with a Dresselhaus ST at $k_0 = \Gamma$ (WPGS=CPGS), Fig. 2i shows the typical Rashba-like ST obtained quantitatively from the same relativistic band structures, but this time at another wavevector, $k_0$ = L. The latter example of GaAs illustrates a Rashba ST in a compound without electric dipoles (i.e., a compound with a non-polar CPGS).
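The three texture prototypes above can be reproduced directly from the model Hamiltonians. The following numpy sketch (a toy verification, not the DFT calculation itself) diagonalizes the standard two-band forms, the Rashba term $\mathcal{H}_R = \alpha(k_x\sigma_y - k_y\sigma_x)$, the Dresselhaus term $\mathcal{H}_D = \alpha(k_x\sigma_x - k_y\sigma_y)$, and the in-plane part of the Weyl term $\mathcal{H}_W = \alpha(k_x\sigma_x + k_y\sigma_y)$, and checks the direction of the lower-band spin expectation value on a circle of $k$-points; $\alpha = 1$ and the helper names are illustrative.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band_spin(H):
    """Spin expectation <sigma> in the lower-energy eigenstate of a 2x2 Hamiltonian."""
    _, v = np.linalg.eigh(H)
    u = v[:, 0]
    return np.real(np.array([u.conj() @ s @ u for s in (sx, sy, sz)]))

alpha = 1.0
H_R = lambda kx, ky: alpha * (kx * sy - ky * sx)   # Rashba
H_D = lambda kx, ky: alpha * (kx * sx - ky * sy)   # Dresselhaus
H_W = lambda kx, ky: alpha * (kx * sx + ky * sy)   # in-plane Weyl term

for t in np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False):
    kx, ky = np.cos(t), np.sin(t)
    S_R = lower_band_spin(H_R(kx, ky))
    S_W = lower_band_spin(H_W(kx, ky))
    S_D = lower_band_spin(H_D(kx, ky))
    # Tangential (Rashba) texture: S is perpendicular to k everywhere.
    assert abs(S_R[0] * kx + S_R[1] * ky) < 1e-9
    # Radial (Weyl) texture: S is (anti)parallel to k everywhere.
    assert abs(abs(S_W[0] * kx + S_W[1] * ky) - np.hypot(S_W[0], S_W[1])) < 1e-9
    # Dresselhaus texture: S tracks (kx, -ky), i.e., a tangential-radial pattern.
    assert abs(S_D[0] * (-ky) - S_D[1] * kx) < 1e-9
```

The same check applied to the full three-dimensional Weyl term $\alpha\, k \cdot \sigma$ gives a strictly radial spin expectation at every $k$.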
Additionally, in contrast to the previous suggestion that the Rashba ST is attributable to an intrinsic electric dipole and strong atomistic SOC [48], we see that even with the relatively weak atomic SOC in GaAs, the spin can be perpendicular to the momentum (Fig. 2i). Interestingly, despite the potential applications in spintronics and valleytronics, the general description of how the little group of $k$-vectors determines the spin-polarization pattern has remained unappreciated. Indeed, the full classification of the spin vector field geometry is an open problem [11]. The common characterization of ST patterns in terms of the CPGS (e.g., the existence or absence of electric polarization [49]) applies only for WPGS($k_0$) equal to the CPGS (e.g., the Γ point in all lattices, or the Z point in the BZ of GeTe), as shown in Table I. From the examples of ST prototypes in GeTe, Te, and GaAs (Table I and Fig. 2), we can directly anticipate some physical consequences: (i) the existence of different symmetries at particular wavevectors $k_0$ in the BZ leads to the possibility of having more than one spin-polarization prototype pattern in the same compound; this suggests that an ST assignment based only on the CPGS can lead to a misclassification of ST shapes; and (ii) compounds without electric dipoles can have a Rashba-like ST (Fig. 2i), meaning that, contrary to what has been traditionally established, an asymmetric electric polarization is not a necessary condition for the Rashba ST.

IV. Derivation of the relation between the point group of the $k$-point and spin-polarization patterns

Examples of the connection between the WPGS in the BZ and macroscopic properties include the notion of "elementary band representations", proposed by Zak [50-52], and its extension [43] including SOC, which distinguishes compounds having symmetry-protected topological phases. Inspired by this idea, we investigate how the specific symmetry operations in the WPGS($k_0$) impose rules on $\vec{S}^{nk}$.
These symmetry operations contained in the WPGS($k_0$) can be orientation-preserving symmetries of the first kind, such as rotations $C_n$ by an angle $\theta = 2\pi/n$ (with $n$ an integer), or orientation-reversing symmetries of the second kind (e.g., reflections $\sigma$ and roto-inversions) [4]. We use two equivalent approaches to establish the relation between the little point group at a given wavevector $k_0$ and the ST shapes, namely: A. we derive the linear-in-$k$ SOC Hamiltonian $\mathcal{H}(k \to k_0)$ for all possible WPGSs in the BZ of non-centrosymmetric compounds; after diagonalizing all Hamiltonian prototypes, the eigenvectors are used to calculate the expectation value $\vec{S}^{nk_0}(k)$; and B. based on the symmetry transformation properties of pseudo-vectors, we determine how the specific proper and improper symmetry operations determine the direction of the spin polarization.

A. Linear-in-$k$ SOC Hamiltonian for WPGSs in the BZ of non-centrosymmetric compounds

The symmetry operations contained in a specific WPGS induce a set of irreducible representations $\Gamma_i(d_i)$, which are described in the character table of the point group [53]. Here, $d_i$ is the dimension of the representation $\Gamma_i$, which in turn corresponds to the trace of the identity symmetry operation $E$. Table II presents the character table of the point group $C_{3v}$. Using the symmetry operators in a given basis (the matrix form of the symmetry operators), one can then evaluate under which irreducible representation of the group a specific object or property transforms. For instance, Table III shows the irreducible representations under which the functions $f_j(k)$ and the Pauli matrices $\sigma_i$ transform. In general, bands at the wavevector $k_0$ can also be characterized by the irreducible representations $\Gamma_i$ of the point group at $k_0$. If a band transforms under the irreducible representation $\Gamma_i$, the dimension $d_i$ corresponds to the degeneracy of that band [53].
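The statement that the Pauli matrices carry definite irreducible representations can be checked explicitly in the spin-1/2 representation. The sketch below (our illustration; the operator choices are the textbook ones) conjugates the Pauli matrices with the $C_{3v}$ generators, a $2\pi/3$ spin rotation about $z$ and a mirror through the $xz$ plane (realized on spinors as $-i\sigma_y$), and recovers the character pattern +1 under $C_3$ and -1 under the mirror for $\sigma_z$, i.e., the $A_2$-like behavior quoted for $C_{3v}$.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Spin-1/2 representation of a 2*pi/3 rotation about z: exp(-i*theta*sz/2).
theta = 2.0 * np.pi / 3.0
U_C3 = np.diag([np.exp(-1j * theta / 2.0), np.exp(1j * theta / 2.0)])

# A mirror through the xz plane acts on spinors as a pi rotation about y.
U_m = np.array([[0, -1], [1, 0]], dtype=complex)   # exp(-i*pi*sy/2) = -i*sy

def conj(U, s):
    """Transform an operator s by the symmetry operation U."""
    return U @ s @ U.conj().T

# sigma_z is invariant under the rotation (character +1) ...
assert np.allclose(conj(U_C3, sz), sz)
# ... and changes sign under the mirror (character -1): the A2 pattern.
assert np.allclose(conj(U_m, sz), -sz)
# The component perpendicular to the mirror plane, sigma_y, is preserved,
# consistent with a pseudo-vector being perpendicular to mirror planes.
assert np.allclose(conj(U_m, sy), sy)
# (sigma_x, sigma_y) mix under C3 like an in-plane vector rotated by 2*pi/3.
c, s = np.cos(theta), np.sin(theta)
assert np.allclose(conj(U_C3, sx), c * sx + s * sy)
```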
The headers are the symmetry operations contained in the point group $C_{3v}$, namely: the identity $E$, the threefold rotation symmetry $C_3$, the diagonal mirror plane $\sigma_d$, and the time-reversal symmetry $T$. The Hamiltonian must be invariant, so it transforms under the identity irreducible representation $\Gamma_1$, the representation in which all characters are one (e.g., the representation $A_1$ for the point group $C_{3v}$ in Table II). From the tables of direct products of representations [53], we see that the tensor product $\Gamma_i \otimes \Gamma_j$ usually contains at least one scalar that transforms according to the identity irreducible representation $\Gamma_1$. The Hamiltonian can thus be constructed by considering the sum of products between a function $f_j(k)$ and a basis matrix (e.g., the Pauli matrices $\sigma_\mu$ ($\mu$ = 0, 1, 2, 3) are the basis for Hermitian matrices of dimension 2) that transform under the same irreducible representation $\Gamma_i$. For instance, for a two-band effective Hamiltonian, e.g., one orbital with spin ↑ and ↓, we have $\mathcal{H}(k) = \sum_{i,j} a_{ij} f_j(k)\, \sigma_i$, where $a_{ij}$ are real coefficients. As an illustrative example of these products, we use the point group $C_{3v}$. For example, $\sigma_z$ transforms under the irreducible representation $A_2$ (Table III), and there are no linear-in-$k$ functions $f_j$ that transform under the representation $A_2$, so there are no terms obtained from the product $f_j \otimes \sigma_z$ in the Hamiltonian. Considering all products containing Pauli matrices that are even under time-reversal symmetry, the resulting Hamiltonian for the WPGS $C_{3v}$ is given by: $\mathcal{H}(k) = a_1 k^2 \sigma_0 + \alpha_R (k_x \sigma_y - k_y \sigma_x)$ (Eq. 1). The physical interpretation of the Hamiltonian allows one to directly determine the coefficients. The first term, obtained from $k^2 \otimes \sigma_0$, gives the kinetic energy, meaning that $a_1 = \hbar^2/2m^*$. Similarly, the second term in Eq. 1 corresponds to the Rashba SOC term $\mathcal{H}_R$. This simple analysis not only allows one to determine that the Rashba Hamiltonian $\mathcal{H}_R$ is symmetrically allowed in the WPGS $C_{3v}$
, but it also indicates that the Weyl and Dresselhaus Hamiltonians are symmetrically forbidden. A detailed description of this method can be found in Ref. [22]. This method of invariants has been used to study the linear-in-$k$ Rashba SOC Hamiltonians allowed by the symmetry operations of polar point groups, as well as the high-order contributions to the Rashba-Bychkov effect [54,55]. Here, we extend this approach to determine the effective SOC Hamiltonian for all possible wavevector point groups that are non-centrosymmetric (NCS). Figure 3 summarizes all SOC Hamiltonians that are symmetrically allowed by the specific WPGSs, which according to their polarity and chirality are classified into four categories, namely: polar chiral ($C_1$, $C_2$, $C_3$, $C_4$, and $C_6$), polar non-chiral ($C_s$, $C_{2v}$, $C_{3v}$, $C_{4v}$, and $C_{6v}$), non-polar chiral ($D_2$, $D_3$, $D_4$, $D_6$, $T$, and $O$), and non-polar non-chiral ($S_4$, $D_{2d}$, $C_{3h}$, $D_{3h}$, and $T_d$). These four categories are based on the existence of specific proper (rotation) and improper (reflection and inversion) symmetries in the wavevector point group symmetry. Specifically, polar and non-polar point groups have a single and more than one rotation axis, respectively. On the other hand, chiral point groups have only proper symmetries, while non-chiral point groups have both proper and improper symmetries. Thus, the WPGS classification according to polarity and chirality groups the kinds of symmetry operations contained in the WPGS, which has implications for the symmetry-enforced ST shapes. Specifically, we identify two extreme behaviors for the ST, i.e., $\vec{S}^{nk} \perp k$ (tangential ST) and $\vec{S}^{nk} \parallel k$ (radial ST), resulting from the diagonalization of the Hamiltonians $\mathcal{H}_R$ and $\mathcal{H}_W$, respectively. The other possible STs are combinations of these two extreme behaviors, as in the radial-tangential ST associated with the Hamiltonian $\mathcal{H}_D$. As summarized in Figure 3, there is a trend in the symmetrically allowed Hamiltonians for the four WPGS categories: (a) the polar chiral WPGSs $C_3$, $C_4$, and $C_6$
symmetrically allow the Hamiltonians $\mathcal{H}_R$ and $\mathcal{H}_W$, while $C_1$ and $C_2$ impose no constraints; (b) the polar non-chiral WPGSs $C_{3v}$, $C_{4v}$, and $C_{6v}$ only allow the Rashba Hamiltonian, while the polar non-chiral point groups $C_s$ and $C_{2v}$ lead to the SOC terms $\mathcal{H}_R$ and $\mathcal{H}_D$; (c) the non-polar chiral WPGSs $D_3$, $D_4$, $D_6$, $T$, and $O$ lead to the SOC term $\mathcal{H}_W$, but the non-polar chiral WPGS $D_2$ leads to both $\mathcal{H}_W$ and $\mathcal{H}_D$; and (d) all non-polar non-chiral WPGSs $S_4$, $D_{2d}$, $C_{3h}$, $D_{3h}$, and $T_d$ symmetrically impose the Hamiltonian $\mathcal{H}_D$. Additional symmetry constraints can be imposed by higher-order-in-$k$ SOC terms, which usually result from functions $f_j(k)$ of higher order in $k$ (e.g., the functions in the columns of order larger than one in Table III). Since we focus here only on non-magnetic compounds, i.e., compounds preserving the time-reversal symmetry $T$, the ST in antiferromagnetic compounds [56] as well as the Zeeman-type effect at $k_0$ breaking TR symmetry [8,9] are not included. Thus, all ST prototypes described above are assumed to intrinsically satisfy the condition $\vec{S}^{n,-k} = -\vec{S}^{n,k}$ (i.e., the pseudo-vector $\vec{S}^{nk}$ at the inverted $k$-vector is also inverted). In general, the SOC Hamiltonian terms allowed at $k_0$ [55] intrinsically satisfy the symmetry constraints on the ST, as shown in the following phenomenological discussion.

B. Analysis of how the specific proper and improper symmetry operations determine the direction of the spin polarization

Using the symmetry constraints imposed by the symmetry operations contained in the WPGS, we identify the allowed spin-polarization patterns in the BZ of NCS compounds. This phenomenological analysis is based on the pseudo-vector properties of $\vec{S}^{nk_0}(k)$. If a vector property, such as the ferroelectric polarization, is perpendicular to a rotation axis, only $C_n$ with $n = 1$ leaves the crystal unchanged (i.e., a rotation by $\theta = 2\pi/n$, which for $n = 1$ is by definition the identity operation $E$). Equivalently, if $C_n$ with $n \neq 1$ is a symmetry operation of the crystal, the physical property must be parallel to the rotation axis.
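The time-reversal condition $\vec{S}^{n,-k} = -\vec{S}^{n,k}$ quoted above can be verified numerically for the Rashba model. The snippet below (a self-contained toy check with $\alpha = 1$; the helper names are ours) compares the lower-band spin expectation at $k$ and at $-k$ for random in-plane wavevectors.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band_spin(H):
    """Spin expectation <sigma> in the lower-energy eigenstate of a 2x2 Hamiltonian."""
    _, v = np.linalg.eigh(H)
    u = v[:, 0]
    return np.real(np.array([u.conj() @ s @ u for s in (sx, sy, sz)]))

def H_R(kx, ky, alpha=1.0):
    """Rashba SOC term; its band energies are symmetric under k -> -k."""
    return alpha * (kx * sy - ky * sx)

rng = np.random.default_rng(0)
for _ in range(20):
    kx, ky = rng.normal(size=2)
    S_plus = lower_band_spin(H_R(kx, ky))
    S_minus = lower_band_spin(H_R(-kx, -ky))
    # Time-reversal constraint on the spin texture: S(-k) = -S(k).
    assert np.allclose(S_minus, -S_plus)
```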
The spin polarization $\vec{S}^{nk}$ is a pseudo-vector that locally must be parallel to the rotation axes contained in the WPGS. On the other hand, if a reflection plane $\sigma$ is a symmetry operation of the WPGS, a pseudo-vector (vector) property must be perpendicular (parallel) to the mirror plane. Thus, $\vec{S}^{nk}$ must be perpendicular to the mirror planes contained in the WPGS. The SOC Hamiltonian terms symmetrically allowed at $k_0$ [55] intrinsically satisfy the above-noted constraints imposed by the rotations $C_n$ and mirror symmetries $\sigma$ on the ST (i.e., $\vec{S}^{nk} \parallel C_n$ and $\vec{S}^{nk} \perp \sigma$). Interestingly, the four WPGS categories defined by polarity and chirality lead to groups of rules (i.e., symmetry constraints) for the spin-polarization vector $\vec{S}^{nk_0}$, as described in the four respective quadrants (a)-(d) of Figure 4. To illustrate the relations between the categories of little point group symmetry at a particular wavevector $k_0$ and the spin-polarization pattern around it, we explain below how simultaneous symmetry restrictions such as $\vec{S}^{nk} \parallel C_n$ and $\vec{S}^{nk} \perp \sigma$ determine the STs described by the specific SOC terms $\mathcal{H}_R$, $\mathcal{H}_W$, and $\mathcal{H}_D$:

(a) $k$-points having polar chiral WPGSs are characterized by the absence of limiting spin textures. In the polar chiral WPGSs (Fig. 3a), there is only one rotation axis $C_n$ (a polar axis). As a result, there are no constraints on the in-plane direction of the polarization $\vec{S}^{nk}$ (Fig. 4a). We refer to this condition as the absence of pure limiting ST behaviors [either $\vec{S}^{nk} \perp k$ or $\vec{S}^{nk} \parallel k$]. Group symmetry analysis indicates that for $C_3$, $C_4$, and $C_6$, the linear-in-$k$ SOC Hamiltonian at $k_0$ can be written as a superposition of the Rashba $\mathcal{H}_R$ and Weyl $\mathcal{H}_W$ SOC terms [55]. The spin-polarization pattern observed for WPGS $C_n$ (with $n$ = 3, 4, and 6) depends on the strengths $\alpha_R$ and $\alpha_W$ of the Rashba and Weyl SOC terms, respectively (all combinations of $\alpha_R$ and $\alpha_W$ are symmetry allowed). The Rashba and Weyl SOC terms intrinsically impose that at the rotated momentum vector (i.e., $C_n k$
), the spin polarization $\vec{S}^{nk}$ near $k_0$ is also rotated, i.e., $C_n \vec{S}^{nk}$ (with $C_n$ being the rotation symmetry operator), as required by the existence of the polar axis $C_n$. Figure 4a illustrates the ST expected at $k$-points with point groups having only the rotation symmetry $C_n$. Due to the absence of limiting behaviors (i.e., pure Rashba or Weyl STs) and the possible arbitrary values of $\alpha_R$ and $\alpha_W$, in polar chiral WPGSs there is no defined pattern for the in-plane spin polarization.

(b) $k$-points having polar non-chiral WPGSs are characterized by the pure Rashba spin texture. At $k_0$ with a polar non-chiral WPGS (Fig. 3b), there is one polar rotation axis $C_n$ and vertical mirror planes ($\sigma_v$), i.e., planes containing the polar axis. For instance, the WPGS $C_{3v}$ is formed by the threefold rotation symmetry $C_3$ and three vertical mirror planes $\sigma_v$ (i.e., the planes containing the rotation axis $C_3$). Thus, $\vec{S}^{nk}$ is required to be perpendicular to the planes $\sigma_v$ (as represented by the green dashed lines in Fig. 4b). As in the polar chiral WPGSs, the spin-polarization pattern is also required to satisfy the rotation symmetry at all $k$-points around the polar axis. Thus, the coexistence of polar rotation symmetries and vertical mirror planes implies that $\vec{S}^{nk}$ is perpendicular to the momentum ($\vec{S}^{nk} \perp k$), which is referred to as the spin-momentum locking effect. This effect, characterizing the "Rashba ST", is enforced by symmetry rather than being a consequence of the magnitude of the SOC or of electric dipoles. In summary, non-Sohncke polar point groups are required to have $\vec{S}^{nk} \perp k$, the pure Rashba ST. The Hamiltonian describing this extreme behavior is then given by the "pure" Rashba SOC term $\mathcal{H}_R$. In actual compounds, high-symmetry $k$-points having WPGS $C_{3v}$, $C_{4v}$, or $C_{6v}$ are expected to have this limiting behavior. Examples include $k_0$ = L in GaAs (F43m) and $k_0$ = Z in GeTe (R3m), as shown in Fig. 2i and 2b, respectively.
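The rotation-covariance requirement invoked above, $\vec{S}(C_n k) = C_n \vec{S}(k)$, can be checked directly for the pure Rashba term (which is in fact invariant under any rotation about $z$). A minimal self-contained check, with illustrative helper names and $\alpha = 1$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def lower_band_spin_xy(kx, ky, alpha=1.0):
    """In-plane spin expectation of the lower Rashba band."""
    H = alpha * (kx * sy - ky * sx)
    _, v = np.linalg.eigh(H)
    u = v[:, 0]
    return np.real(np.array([u.conj() @ sx @ u, u.conj() @ sy @ u]))

theta = 2.0 * np.pi / 3.0                      # a C3 rotation about the polar axis
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

k = np.array([0.8, 0.1])
S_k = lower_band_spin_xy(*k)
S_Rk = lower_band_spin_xy(*(R @ k))
# The spin texture is covariant: rotating k rotates S by the same angle.
assert np.allclose(S_Rk, R @ S_k)
```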
Although first predicted for surfaces with a perpendicular external field, the Rashba ST has recently been generalized to bulk compounds, which has motivated the search for such compounds with large spin splitting [7].

(c) $k$-points having non-polar chiral WPGSs are characterized by the pure Weyl spin texture. For WPGS $D_2$, $D_3$, $D_4$, $D_6$, $T$, or $O$ (Fig. 3c), there is a principal rotation axis $C_n$ and at least one additional rotation axis ($C_2'$ and $C_2''$) lying in the plane perpendicular to $C_n$. Here $\vec{S}^{nk}$ is required to be parallel to the rotation axes $C_2'$ and $C_2''$ (as represented by the green solid lines in Fig. 4c). Additionally, at the rotated momentum (corresponding to $C_2' k$ or $C_2'' k$), the pseudo-vector $\vec{S}^{nk}$ is also rotated ($C_2' \vec{S}^{nk}$ or $C_2'' \vec{S}^{nk}$). The existence of rotation symmetries perpendicular to the principal axis $C_n$ implies that $\vec{S}^{nk}$ is parallel to the momentum ($\vec{S}^{nk} \parallel k$), as shown in Fig. 4c. This radial ST (Fig. 1b) is locally given by the Weyl SOC term $\mathcal{H}_W$. The "Weyl ST" is usually associated with topological Weyl semimetals [36,37] and Kramers-Weyl fermions in chiral compounds [10-12]. In actual compounds, high-symmetry $k$-points having a non-polar chiral WPGS include the Γ and A points in chiral bulk Te [10-12], as well as the Γ point of the insulators OsSi (P2$_1$3) and SeF$_4$ (P2$_1$2$_1$2$_1$). As predicted by our description, these $k$-points indeed have the Weyl ST (Fig. 2e). In general, all non-polar chiral compounds can have the Weyl ST prototype around the Γ point (or other $k$-points whose WPGS equals the CPGS).

(d) $k$-points having non-polar non-chiral WPGSs show a mixture of the pure spin textures. The ST around $k$-points having WPGS $S_4$, $D_{2d}$, $C_{3h}$, $D_{3h}$, or $T_d$ can be seen as a combination of the Weyl and Rashba STs, as shown in Fig. 4d. The reason is that these point groups contain a principal axis ($C_n$), additional rotation axes ($C_2'$ and $C_2''$) perpendicular to $C_n$ as required by the Weyl ST, and mirror planes as required by the Rashba ST.
These reflection planes can be perpendicular to the principal rotation axis (horizontal mirror planes $\sigma_h$) or can bisect the angle between a pair of rotation axes (diagonal mirror planes $\sigma_d$). Thus, $\vec{S}^{nk}$ is required to be parallel to the in-plane rotation axes $C_2'$ and $C_2''$ and also perpendicular to the mirror planes $\sigma_h$ and $\sigma_d$, leading to a combination of the extreme behaviors $\vec{S}^{nk} \perp k$ and $\vec{S}^{nk} \parallel k$ (e.g., the Γ point of GaAs, which is described by the SOC term $\mathcal{H}_D$).

C. Some consequences of the relation between the $k$-point point group and spin texture prototypes

The above-noted ST classification and the existence of symmetries at particular wavevectors $k_0$ in the BZ lead to the possibility of having more than one spin-polarization prototype pattern in the same compound. Some additional consequences include: (i) compounds without electric dipoles can have the tangential ST (Rashba ST), meaning that, contrary to what has been traditionally established, an asymmetric electric polarization is not a necessary condition for the Rashba effect (Fig. 2i); (ii) compounds with electric dipoles can have the Dresselhaus SOC term $\mathcal{H}_D$ at polar chiral WPGSs (Fig. 2f); (iii) spin-split bands can have a vanishing ST. The tangential-radial ST is not the only way to combine the pure Rashba ($\vec{S}^{nk} \perp k$) and Weyl ($\vec{S}^{nk} \parallel k$) STs. For instance, when a rotation axis $C_n$ is contained in a diagonal $\sigma_d$ or horizontal $\sigma_h$ mirror plane, the spin polarization is simultaneously required to be parallel and perpendicular to the rotation axis. This contradiction implies that the pseudo-vector $\vec{S}^{nk}$ must vanish (even in spin-split bands), as in two-dimensional SnTe thin films [57]; and (iv) another interesting consequence is the symmetry-enforced radial ST ($\vec{S}^{nk} \parallel k$) in bulk compounds. This ST is believed to be a characteristic linked only to symmetry-protected topological phases [36,37]. We find that chiral compounds can possess a radial ST $\vec{S}^{nk} \parallel k$ at the Γ point even without a symmetry-protected topological phase.
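The two behaviors just discussed, the exactly radial texture of the Weyl term and the radial-tangential mixture of the Dresselhaus term, can both be made quantitative with a short numpy check (a toy model with $\alpha = 1$, not the DFT data; the helper names are ours). For $\mathcal{H}_W = \alpha\, k \cdot \sigma$ the lower-band spin is $-\hat{k}$ at every $k$; for $\mathcal{H}_D = \alpha(k_x\sigma_x - k_y\sigma_y)$ the spin is radial along the $k_x$ and $k_y$ axes but tangential along the diagonals.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band_spin(H):
    """Spin expectation <sigma> of the lower band of a 2x2 Hamiltonian."""
    _, v = np.linalg.eigh(H)
    u = v[:, 0]
    return np.real(np.array([u.conj() @ s @ u for s in (sx, sy, sz)]))

# (1) Weyl term H_W = k . sigma: the texture is exactly radial, S = -k_hat.
rng = np.random.default_rng(1)
for _ in range(10):
    k = rng.normal(size=3)
    S = lower_band_spin(k[0] * sx + k[1] * sy + k[2] * sz)
    assert np.allclose(S, -k / np.linalg.norm(k))

# (2) Dresselhaus term H_D = kx*sx - ky*sy: radial along the axes,
# tangential along the diagonals, i.e., a genuine radial-tangential mixture.
S = lower_band_spin(1.0 * sx)                     # k = (1, 0)
assert abs(S[0] * 0.0 - S[1] * 1.0) < 1e-9        # S x k = 0: radial
kd = 1.0 / np.sqrt(2.0)
S = lower_band_spin(kd * sx - kd * sy)            # k = (kd, kd)
assert abs(S[0] * kd + S[1] * kd) < 1e-9          # S . k = 0: tangential
```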
For instance, Figure 5 shows the DFT ST for the valence band maximum at the Γ point of OsSi (P2₁3). Our findings explain the experimental observation, via spin- and angle-resolved photoemission spectroscopy, of the radial ST in bulk Te [10-12]. Although we can directly identify some consequences of the ST classification, the relation between WPGS and ST does not establish all STs that can be found in a single compound with a given CPGS. Indeed, one may erroneously believe that any compound can have any ST shape. In the next section, we thus complete this description by relating the CPGS to a set of symmetry-imposed STs.

V. Description of the design principles imposed by the wavevector point group symmetries on the spin texture of a single compound

The developed classification of ST shapes for WPGSs without inversion symmetry confirms that the ST for wavevectors k_0 at which WPGS = CPGS can be predicted from the CPGS. However, this classification does not disclose all ST shapes that can be observed near high-symmetry k-points in the BZ of a given NCS crystalline compound with a specific CPGS G. This is a fundamental problem for potential spintronic and valleytronic applications. In order to answer this question, we examine the subgroups of all CPGSs without inversion symmetry. As is well established, for a given point group G there is a limited set of subgroups, which necessarily have lower symmetry than G; e.g., the subgroups of the point group T_d are C_1, C_2, C_3, S_4, C_2v, C_3v, D_2d, C_s, and T. Since WPGSs are subgroups of the CPGS (including the case WPGS = CPGS), the CPGS imposes a limited set of ST shapes in the BZ. The matrix illustrated in Fig. 6 summarizes the symmetry-allowed STs for each CPGS. The columns and rows stand for the CPGS and WPGS, classified according to polarity and chirality, respectively. The matrix component corresponding to the intersection of a CPGS with a WPGS is yellow (gray) when the WPGS is (is not) a subgroup of the CPGS.
In other words, the yellow components indicate the WPGSs that can exist in the BZ of a compound having a given CPGS. For instance, since subgroups have lower symmetry than the parent point group, there is no high-symmetry k-point with a WPGS of higher symmetry than the CPGS; hence, the lower triangle of the square matrix in Fig. 6 is completely gray. When the WPGS leads to a ST with a defined shape, we include letters indicating the symmetry-allowed linear-in-k SOC Hamiltonians: Rashba (R), Dresselhaus (D), Weyl (W), Rashba and Weyl (RW), Rashba and Dresselhaus (RD), and Weyl and Dresselhaus (WD). As illustrated in Fig. 6, the design principles (DP) for multiple ST shapes in a single compound can be summarized as follows:

a. Polar chiral CPGSs can only have high-symmetry k-points with polar chiral WPGS (first column in Fig. 6). Thus, in compounds with these CPGSs, the ST has no pure limiting behavior [either S^{nk} ⊥ k or S^{nk} ∥ k], resulting instead from the superposition of the SOC terms ℋ_R and ℋ_D.

b. Polar non-chiral CPGSs can have high-symmetry k-points with polar WPGSs that are chiral or non-chiral (second column in Fig. 6). The tangential ST, imposed by the Rashba Hamiltonian ℋ_R, is thus the only limiting behavior that can be observed in polar non-chiral compounds. In compounds with CPGSs C_s and C_2v, the ST is a combination of the patterns arising from the Rashba and Dresselhaus SOC terms (ℋ_R + ℋ_D). Besides this ST, in polar non-chiral compounds with CPGSs C_3v, C_4v, and C_6v, it is also possible to have a ST arising from the simultaneous presence of the Rashba and Weyl SOC terms (ℋ_R + ℋ_W).

c. Non-polar chiral CPGSs can have high-symmetry k-points with chiral WPGSs that are polar or non-polar (third column in Fig. 6). The radial ST, imposed by the Weyl Hamiltonian ℋ_W, is the only limiting behavior in non-polar chiral compounds. Additionally, compounds with CPGSs D_3, D_4, D_6
, T, and O can also have STs arising from the simultaneous presence of the Rashba and Weyl SOC terms (ℋ_R + ℋ_W), as well as the Dresselhaus and Weyl SOC terms (ℋ_D + ℋ_W).

d. Non-polar non-chiral CPGSs can have WPGSs with all possible combinations of polarity and chirality (fourth column in Fig. 6). The radial-tangential ST can be found in all non-polar non-chiral compounds. Additionally, both limiting behaviors, the tangential and the radial STs, can be observed in non-polar non-chiral compounds.

According to this description, only polar non-chiral, non-polar chiral, and non-polar non-chiral CPGSs can host multiple STs in the BZ. In order to illustrate these design principles (a-d), we study the DFT-calculated valley-dependent ST in GaAs, as represented in Fig. 7. This compound has the T_d CPGS, and hence the PGs of the high-symmetry k-points in the BZ of GaAs correspond to subgroups of T_d (i.e., C_1, C_2, C_3, S_4, C_2v, C_3v, D_2d, C_s, and T). However, there is no unequivocal correspondence between the number of high-symmetry k-points and the number of subgroups of the lattice. For instance, the high-symmetry k-points Γ, X, L, W, K, and U have PGs T_d, D_2d, C_3v, S_4, C_s, and C_s (see Fig. 2g), respectively, meaning that there is no k-point in the BZ of GaAs with PG symmetry T, as illustrated by the hierarchical decomposition of the subgroups of the PG T_d represented in Fig. 7. The little PGs of the k-points impose specific STs, represented only for the high-symmetry k-points Γ, X, and L (Fig. 7): the L point is required to have spin polarization perpendicular to the momentum k (S^{nk} ⊥ k), and the Γ point can have a mixture of the extreme behaviors S^{nk} ⊥ k and S^{nk} ∥ k. In other words, while the Γ point in GaAs has the tangential-radial ST, as expected, the L point has a tangential-like ST (Fig. 2h-i). Similarly, the high-symmetry points X, W, and K have STs (not shown in Fig. 7) similar to the one expected at the Γ point. Figure 7.
The Bärnighausen tree provides a hierarchical decomposition of the PG T_d into its subgroups (i.e., C_1, C_2, C_3, S_4, C_2v, C_3v, D_2d, C_s, and T); e.g., the PG T_d can be decomposed into the lower-symmetry PGs C_3v, D_2d, and T. The subgroup T can in turn be decomposed into the PGs D_2 and C_3. For each PG, the high-symmetry k-points are specified, i.e., Γ, X, L, W, K, and U for the PGs T_d, D_2d, C_3v, S_4, C_s, and C_s, respectively. The spin texture expected from the subgroup is represented for the Γ, X, and L points.

VI. DFT illustrations of the journey through the Brillouin Zone and high-throughput calculations

The DPs establishing the relationship between each non-magnetic NCS CPGS and the possible ST shapes (Fig. 6) give the possibility of rationally selecting compounds that have different spin textures in the BZ. In this section, we apply the previously described theory to the design of compounds that can potentially be used in spintronic devices. Besides the symmetry conditions established in Fig. 6 [i.e., enabling design principles (EDP)], we also consider DPs for the optimization of the target functionalities, e.g., multiple ST shapes and their position in the BZ and in energy. Optimizing DPs (ODP) depend on the specific application. Here, we focus on spin-valleytronics, a rapidly growing area based on the use of spin-polarization patterns that are controllable via the k-valley degree of freedom in the BZ. Figure 8 summarizes the EDPs for obtaining different ST shapes in the same compound, as well as the ODPs for the control of these ST shapes for spin-valleytronics. Specifically, the starting point is a list of materials that are non-magnetic and gapped. The EDPs for compounds having multiple ST shapes in the BZ are (i) EDP1: NCS CPGS, and (ii) EDP2: the CPGS should also be polar non-chiral, non-polar chiral, or non-polar non-chiral.
Finally, as we discuss below, the ODPs include ODP1: linear-in-k SSs larger than 1 meV at band edges and k-valleys with different WPGSs, and ODP2: a sufficiently small energy difference between states at different valleys. After selecting compounds using the DPs as filters (Tables V and VI), we focus on the illustration of some specific prototypes.

A. Inverse design of compounds with multiple spin texture shapes that can potentially be controlled by the valley degree of freedom

In spin-valleytronics, one wants an association between valleys and spin polarization, as established in the previous sections on the basis of the WPGS. The basic idea is that if two high-symmetry k-points k_1 and k_2 in the BZ have different WPGSs, the ST shapes can also be different around these wavevectors. In experiments, k_1 and k_2 can be accessed independently in order to select different STs. Additionally, since the electronic transport and spin currents are governed by the electronic states near the band edges, the spin currents also depend on the WPGS of the k-point at which the band edges occur. A controllable valley energy requires a sufficiently small energy difference between states at different valleys. The use of this ODP as a filter requires the evaluation of the DFT SOC band structure for a relatively large set of compounds. For this reason, in order to reduce the computational cost, before applying the EDPs we delimit the list of compounds by selecting non-magnetic gapped materials based on previous DFT calculations without SOC, as described below. Below, we describe the three steps of the materials selection process, namely: materials filtering based on previous DFT calculations, materials selection based on the symmetry conditions (i.e., enabling DPs), and materials optimization (i.e., optimizing DPs).
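The three-step funnel just outlined can be sketched as a sequence of list filters. The record fields below (magnetic_moment, band_gap, cpgs_class, min_ss, dE_valley) are hypothetical stand-ins for the database features described in the text, not the actual aflow-ICSD schema.

```python
# Sketch of the three-stage screening funnel (hypothetical record schema, energies in eV).
MULTI_ST_CLASSES = {"polar non-chiral", "non-polar chiral", "non-polar non-chiral"}

def screen(entries, ss_min=1e-3, dE_max=0.1):
    """Apply the pre-filtering (step 1), EDP (step 2), and ODP (step 3) stages."""
    gapped = [e for e in entries
              if e["magnetic_moment"] == 0.0 and e["band_gap"] > 1e-3]  # non-magnetic, gapped
    edp = [e for e in gapped if e["cpgs_class"] in MULTI_ST_CLASSES]    # EDP1 + EDP2
    odp = [e for e in edp
           if e["min_ss"] > ss_min and e["dE_valley"] < dE_max]         # ODP1 + ODP2
    return odp

candidates = [
    {"id": "A", "magnetic_moment": 0.0, "band_gap": 1.2,
     "cpgs_class": "non-polar chiral", "min_ss": 0.02, "dE_valley": 0.05},
    {"id": "B", "magnetic_moment": 2.0, "band_gap": 0.8,                # magnetic: rejected
     "cpgs_class": "non-polar chiral", "min_ss": 0.02, "dE_valley": 0.05},
    {"id": "C", "magnetic_moment": 0.0, "band_gap": 1.0,
     "cpgs_class": "polar chiral", "min_ss": 0.02, "dE_valley": 0.05},  # fails EDP2
]
selected = screen(candidates)  # only entry "A" survives all three stages
```

Note that the cpgs_class labels here encode both EDP1 and EDP2: all four polarity/chirality classes are NCS, and EDP2 keeps only the three classes that admit multiple ST shapes.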
Find the subset of materials that are non-magnetic gapped compounds

In order to reduce the computational cost of high-throughput density functional calculations, we delimit the studied compounds according to their atomic features, i.e., the number of atoms in the unit cell and the orbital type. Our starting point is thus a list from the aflow-ICSD database containing 20,831 unique compounds with fewer than 20 atoms per unit cell [29], restricted to atoms having only s, p, and d orbitals. In the aflow-ICSD database, there were initially 58,276 entries (32,115 after removing duplicated entries). Since we focus on gapped compounds preserving time-reversal symmetry, we restrict the materials selection to non-magnetic compounds with a non-zero band gap. The screening of non-magnetic insulators carries the bias of the DFT calculations performed in the aflow-ICSD database, where the charge density is usually initialized with a ferromagnetic configuration. This could cause antiferromagnetic compounds to be reported as ferromagnetic. In the aflow-ICSD database, we use the spin_cell feature, which corresponds to the total magnetic moment per unit cell, to filter non-magnetic compounds. This materials screening divides the initial database into two groups: 6,993 magnetic materials and 13,838 non-magnetic compounds. On the other hand, in the aflow-ICSD, non-spin-polarized calculations classify compounds as direct-gap insulators, indirect-gap insulators, metals, and half-metals. Based on this classification, the 13,838 non-magnetic compounds are then divided into 7,483 non-gapped and 6,355 gapped compounds (i.e., band gap larger than 1 meV), as represented in line 1 of Fig. 8. These 6,355 non-magnetic insulators were previously obtained by us [7,20].

Find the subset of non-magnetic gapped compounds that can have different STs in the BZ

We use the crystal point group of the compounds to filter materials satisfying EDP1 and EDP2.
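The polarity/chirality labels used by this filter can be computed directly from a point group's operation matrices. The sketch below (hand-built example groups, not the paper's implementation) uses two standard facts: a point group is chiral when every operation is proper (det = +1), and polar when a common fixed axis survives all operations, i.e. O·v = v has a non-trivial solution.

```python
# Sketch: classify a point group (given as 3x3 operation matrices) by polarity/chirality.
import numpy as np

def classify(ops, tol=1e-8):
    chiral = all(np.linalg.det(O) > 0 for O in ops)    # proper rotations only
    rows = np.vstack([O - np.eye(3) for O in ops])     # O v = v defines a polar axis v
    s = np.linalg.svd(rows, compute_uv=False)
    polar = bool(np.any(s < tol))                      # non-trivial fixed vector exists
    return ("polar" if polar else "non-polar") + " " + ("chiral" if chiral else "non-chiral")

E = np.eye(3)
C2z = np.diag([-1.0, -1.0, 1.0])                       # two-fold rotation about z
Mz = np.diag([1.0, 1.0, -1.0])                         # horizontal mirror
S4z = np.array([[0.0, -1.0, 0.0],
                [1.0,  0.0, 0.0],
                [0.0,  0.0, -1.0]])                    # four-fold rotoreflection

c2_class = classify([E, C2z])                                          # polar chiral
cs_class = classify([E, Mz])                                           # polar non-chiral
d2_class = classify([E, C2z, np.diag([1.0, -1.0, -1.0]),
                     np.diag([-1.0, 1.0, -1.0])])                      # non-polar chiral
s4_class = classify([E, S4z, C2z, S4z @ S4z @ S4z])                    # non-polar non-chiral
```

These four examples land in the four columns of the Fig. 6 matrix, one per polarity/chirality class.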
Table IV presents the space group indices for each CPGS for the different polarities, chiralities, and Bravais lattices. We notice that there are Bravais lattices with symmetry-forbidden CPGS categories. For instance, in orthorhombic and cubic (triclinic and cubic) lattices there are no polar chiral (non-chiral) CPGSs, and in triclinic and monoclinic (triclinic, monoclinic, orthorhombic, and rhombohedral) lattices there are no non-polar chiral (non-chiral) CPGSs. All these point groups are necessarily NCS (EDP1 in Fig. 8). We thus filter from the list of non-magnetic insulators those compounds having a NCS CPGS. We find 1,709 compounds with NCS CPGSs and another list of 4,645 compounds with CS crystal point groups, as represented in line 2 of Fig. 8. The 1,709 compounds with NCS CPGSs are divided into 228 polar chiral, 723 polar non-chiral, 253 non-polar chiral, and 505 non-polar non-chiral compounds. In previous papers, we studied the spin splitting and ferroelectric properties of the 951 polar (chiral and non-chiral) compounds [7,8,20]. In Table IV, we specify the abundance of compounds for each NCS CPGS. Curiously, there are no non-magnetic insulators having the NCS non-polar chiral CPGS O. Besides the most abundant NCS CPGS with 320 compounds, the next most common NCS CPGSs have 194, 183, and 176 compounds, respectively (Table IV). Selecting the compounds that can have multiple STs in the BZ (i.e., those with polar non-chiral, non-polar chiral, or non-polar non-chiral CPGS), we obtain 1,481 compounds (EDP2 in Fig. 8).

Table IV. Space group indices for the 3D Bravais lattices, classified according to inversion symmetry as centrosymmetric, NCS non-polar, and NCS polar.

Although polar compounds usually have an intrinsic electric polarization, a polar CPGS is a necessary but not sufficient condition for electric polarization.
The cancellation of dipoles in polar compounds can be determined geometrically for each atomic site by considering vectors along the atomic bonds. Specifically, the electron transfer associated with the bonding of two different elements creates a microscopic dipole whose direction is opposite to the electron transfer direction. For a given atomic site, if all neighboring atoms are locally distributed in such a way that the dipole vectors generated by each bond cancel each other, then the local atomic site dipole is zero, i.e., the atomic site is non-polar. Local dipoles can add up to zero for first or second neighbors, or be nearly zero for more distant atomic neighbors (e.g., atomic distances larger than the sum of the van der Waals radii of two given elements), which can also be verified geometrically. Thus, polar compounds with only non-polar atomic sites cannot have a non-zero total electric dipole, i.e., non-zero local dipoles can only be found in compounds with polar atomic sites. This gives an intuitive way to verify the local cancellation of dipoles using only the atomic positions and lattice vectors. Of the 1,481 NCS non-magnetic insulators, 1,018 compounds have at least one polar site. We identify 867 compounds with a non-zero dipole and 151 compounds in which the local dipoles cancel each other. The proposed approach based on geometrical information can produce false-positive non-zero dipoles, since the electric dipole also depends on the specific chemical species, which requires more exhaustive first-principles calculations. Therefore, we retain the list of 1,481 non-magnetic NCS insulators for our next step.

Find the subset of non-magnetic NCS gapped compounds with STs that can potentially be controlled by the valley

Since a controllable valley energy requires a sufficiently small energy difference ΔE_{k1,k2} between states at different valleys, we restrict the materials selection based on this ODP (ODP2 in Fig. 8). In order to evaluate ΔE_{k1,k2}
, we perform high-throughput DFT band structure calculations for the previously selected 1,481 non-magnetic NCS insulators with CPGSs allowing multiple ST shapes in the BZ. The DFT calculations with SOC are performed using the Perdew-Burke-Ernzerhof generalized gradient approximation (PBE) [58] as the exchange-correlation functional and the on-site Coulomb self-repulsion term U for transition metals [59], as implemented in the Vienna Ab-initio Simulation Package (VASP) [60,61]. Based on the high-throughput calculations for the 1,481 non-magnetic NCS insulators that potentially have multiple ST shapes in the BZ, we evaluate the ODPs. We find that there are 64 compounds satisfying ODP1 (i.e., linear-in-k SSs larger than 1 meV at band edges and k-valleys with different WPGS categories). The final number of selected compounds depends on the threshold used for ΔE_{k1,k2}, which in turn depends on the resolution of the measurement and the specific device application. For instance, only 37 compounds have a sufficiently small energy difference between states at different valleys (ODP2) when we use ΔE_{k1,k2} = 100 meV. Tables V and VI show the experimentally synthesized compounds at the convex hull (i.e., energy above the convex hull equal to zero, E_ch = 0 meV) obtained in the inverse design process. For each compound we present the ICSD code, the energy above the convex hull (E_ch) given by the Materials Project (meV/f.u.), the spin splitting (SS) in meV, and the R-factor for the refinement of the experimental structure. Double entries stand for compounds with the same atomic composition but different ICSD numbers and different SS.

Table V. Binary selected compounds with multiple ST shapes in the BZ. The energy difference between states at different valleys is ΔE_{k1,k2} = 100 meV. The space group (SG), SG index, CPGS, and high-symmetry k-points with non-zero spin splitting (SS) are given for each compound, as well as the ratio between the SS and the momentum offset, Δ_SS/k (eV·Å).
The SOC Hamiltonian predicted by the WPGS (Fig. 3) is given for each high-symmetry k-point.

Table VI. Ternary selected compounds with multiple ST shapes in the BZ. The energy difference between states at different valleys is ΔE_{k1,k2} = 100 meV. The space group (SG), SG index, CPGS, and high-symmetry k-points with non-zero spin splitting (SS) are given for each compound, as well as the ratio between the SS and the momentum offset, Δ_SS/k (eV·Å). The SOC Hamiltonian predicted by the WPGS (Fig. 3) is given for each high-symmetry k-point.

VII. Conclusions

Despite the fact that the crystal point group symmetry (CPGS) (e.g., the absence or presence of electric dipoles) does not unequivocally determine the spin texture (ST), we find that the wavevector point group symmetry (WPGS) G*(k_0) can be a descriptor for this functionality. The important consequence of this discovery is that the selectivity among types of STs can be rationalized on the basis of symmetry (neither spin-orbit physics nor the mere existence of electric fields/dipoles), and therefore the ST can be designed. These consequences extend to other spin-related phenomena. For instance, the spin-momentum locking effect is enforced by symmetry, rather than being a consequence of strong SOC or topological effects. Additionally, our findings suggest the possibility of accessing different STs in the same compound by controlling the relative energy position of states in different valleys of the Brillouin zone, which has potential applications in spin-valleytronics. We use the symmetry conditions defining the ST to establish all possible ST prototypes in the 21 NCS crystal point group symmetries for three-dimensional crystals. Using these symmetry relations as design principles, we select 1,481 compounds from the aflow-ICSD database [29]. Performing DFT band structure calculations for the selected compounds, we predict 37 previously unnoticed materials with multiple ST shapes at different k-valleys.
The ST symmetry classification, as well as the predicted compounds with multiple STs, can serve as a platform for the control of the ST by accessing different valleys.
Stripe Correlations of Spins and Holes and Phonon Heat Transport in Doped La_2CuO_4

We present experimental evidence for a dramatic suppression of the phononic thermal conductivity of rare earth and Sr doped La_2CuO_4. Remarkably, this suppression correlates with the occurrence of superconductivity. Conventional models for the phonon heat transport fail to explain these results. In contrast, a straightforward explanation is possible in terms of static and dynamic stripe correlations of holes and spins.

The structural phase transition from the orthorhombic (LTO) to the low temperature tetragonal (LTT) phase observed in rare earth (RE) doped La_2-xSr_xCuO_4 (LSCO) has a pronounced influence on the electronic properties of these materials. In particular, in a certain composition range the LTT phase is not superconducting, but antiferromagnetic order occurs at low temperature and at finite Sr, i.e. charge carrier, concentration [1]-[7]. Various mechanisms have been suggested to explain the strong sensitivity of the electronic structure to the subtle structural changes associated with the phase transition, such as band structure effects [8], hole concentration dependent commensurability effects and charge density wave-like instabilities [9], as well as a novel coupling of the charge carrier motion to the tilt displacements of the CuO_6 octahedra via spin-orbit scattering [10]. More recently, Tranquada et al. [7] have presented evidence for an important role of stripe correlations of spins and holes, i.e. an ordering in the CuO_2 planes into antiphase antiferromagnetic stripe domains separated by domain walls to which the holes segregate. We show in this letter that, in addition to its well established influence on the electronic properties, the structural transition also has dramatic consequences for the lattice dynamics.
Our main result is that the phononic contribution κ_ph to the thermal conductivity increases strongly below the structural transition, but only if the LTT phase is not superconducting [11]. Remarkably, the behavior of the thermal conductivity of Sr doped La_2CuO_4 without additional RE doping suggests the even more general conclusion that κ_ph is strongly suppressed for all superconducting compositions of RE and Sr doped La_2CuO_4. Our results have strong implications for the interpretation of heat transport in doped La_2CuO_4. In particular, the conventional models for phononic thermal conductivity based on enhanced phonon-defect scattering on alloying or conventional phonon-electron scattering fail to account for these observations. In contrast, a straightforward interpretation is possible in terms of static and dynamic stripe correlations of spins and holes. If this interpretation is correct, our data in turn imply that stripe correlations are dynamic for superconducting compositions, whereas they are either static or absent for non-superconducting compositions [11]. The samples used in this study have been prepared by the usual solid state reaction [12]. They are well characterized with respect to various physical properties, as described in refs. [2,3,5,12]. The thermal conductivity κ was measured by a standard method for compounds with a wide range of Sr and RE concentrations, x and y, respectively. We note that the absolute values of the thermal conductivity in sintered materials differ from those in single crystals due to scattering from grain boundaries. In contrast, the temperature and doping dependencies of κ found in single crystals and polycrystals of LSCO agree well with each other [13,14]. Since we shall analyse only the temperature and doping dependence of κ, we show normalized data throughout this paper. We mention that the absolute values of κ at high temperatures do not change significantly as a function of RE doping within experimental errors.
As an example of the typical behavior of the thermal conductivity of RE doped LSCO, we show in fig. 1 our results for a Pr and a Nd doped sample, both with a Sr content of x = 0.12. The Pr doped sample does not show a low temperature structural transition [15]. Its thermal conductivity decreases monotonically with decreasing temperature, similar to what is found in LSCO at a comparable Sr content but without additional RE doping [13,14] (inset fig. 1). In the Nd doped sample the structural transition to the LTT phase at T_LT ≈ 80 K has a dramatic influence on the thermal conductivity: κ increases strongly below T_LT, reaching a maximum around 25 K. We note that this temperature dependence of κ in the LTT phase is similar to that found for the purely phononic thermal conductivity of undoped insulating La_2CuO_4 (inset fig. 1). In the high T_c superconductors the thermal conductivity has an electronic and a phononic contribution, κ_el and κ_ph, respectively [16]. Nevertheless, in the present case the increase of κ below T_LT is clearly due to an increase of κ_ph: Firstly, the electrical conductivity decreases below T_LT [2,5]. According to the Wiedemann-Franz law, κ_el should then decrease as well, and the increase of κ = κ_ph + κ_el must be due to an increase of κ_ph. Secondly, the pronounced increase of κ below T_LT also occurs for strongly underdoped samples (see fig. 2), where the charge carrier concentration is so low that κ_el ≪ κ_ph and any increase of κ must be due to an increase of κ_ph. From the data presented so far one may conclude that it is the structural phase transition which causes the strong increase of κ_ph below T_LT. However, this is not true in general: We show in fig. 2 the thermal conductivity for Eu doped samples (y = 0.15) with Sr contents between 0.05 and 0.22. X-ray diffraction experiments reveal the occurrence of the low temperature structural transition to the LTT phase at T_LT ≈ 120 K in all these samples [5].
However, the low temperature increase of κ occurs only in the limited Sr concentration range x ≤ 0.17. Remarkably, Eu doped LSCO with y = 0.15 is not superconducting for x ≤ 0.17, whereas for x > 0.17 superconductivity occurs [5]. We find similar results in the Nd doped samples. As a measure of the increase of κ at low temperatures one may use the magnitude Δκ of the jump of κ at T_LT, since it is apparent from the data of fig. 2 that Δκ correlates with the size of the low temperature maximum. In fig. 3 we show Δκ as a function of the Sr concentration for samples with Nd contents of y = 0.3 and y = 0.6. The low temperature increase of κ, i.e. Δκ > 0, occurs for x ≤ 0.17 for a Nd doping of y = 0.3 and for x ≤ 0.23 for a Nd doping of y = 0.6. A comparison of these concentration pairs to the low temperature Nd/Sr phase diagram of ref. [2] shows that they lie on the boundary separating the superconducting and the non-superconducting region of the LTT phase. Thus, the low temperature maximum of κ_ph occurs in the LTT phase only for compositions which are not superconducting. Taking these results together, we arrive at a remarkable correlation of the lattice dynamics with the electronic properties of the LTT phase: The phononic thermal conductivity of RE doped La_2-xSr_xCuO_4 increases strongly below the low temperature structural transition, but only if the LTT phase is not superconducting. At this point it is instructive to reexamine the thermal conductivity of LSCO without additional RE doping (inset fig. 1) [13]. Insulating La_2CuO_4 has a purely phononic thermal conductivity with a pronounced maximum occurring around 25 K. Notably, the maximum of κ at low temperatures also occurs in strongly overdoped, non-superconducting samples. We shall come back to this below. For intermediate Sr doping of x ≤ 0.25, when the samples are superconducting, κ is strongly suppressed and the low temperature maximum is absent.
Obviously this Sr concentration dependence of κ in LSCO fits very well into the correlation between the magnitude of κ_ph and the occurrence of superconductivity suggested by our results on the RE doped samples. We are thus led to the more general conclusion: Whenever Sr and RE doped La_2CuO_4 is superconducting, the phononic thermal conductivity is strongly suppressed; in particular, the low temperature maximum of κ_ph is then absent. We shall now discuss this result within the conventional models for phonon heat transport in high T_c materials. Since κ_ph depends strongly on the RE and the Sr concentration, the scattering of the phonons should also depend on the doping. Two possible mechanisms are then apparent: 1. doping induced phonon-impurity scattering and 2. phonon-electron scattering. We note first that the suppression of κ_ph with increasing Sr doping in LSCO without additional RE doping (inset fig. 1) is usually attributed to phonon-defect/disorder scattering, which increases upon alloying [13,14,16]. However, our data immediately rule out such an explanation due to the reappearance of the maximum of κ_ph at large Sr and RE doping in the non-superconducting LTT phase (fig. 1). Regarding conventional phonon-electron scattering we note that, firstly, the suppression of κ_ph already occurs for Sr concentrations of x ≃ 0.05. For such low charge carrier concentrations phonon-electron scattering is unimportant. Secondly, if phonon-electron scattering suppressed κ_ph in the superconducting samples, one would expect at least part of the phononic thermal conductivity to reappear below the superconducting transition (which is in the same temperature range as T_LT), since the opening of the energy gap would strongly suppress phonon-electron scattering. However, no such enhancement of κ_ph is found in LSCO or RE doped LSCO below T_c.
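The Wiedemann-Franz step of this argument can be made quantitative with a back-of-the-envelope estimate: κ_el = L0·σ·T with the Sommerfeld Lorenz number L0 = 2.44×10⁻⁸ WΩK⁻². The conductivity value in the sketch below is illustrative only (a resistivity of order 1 mΩ·cm), not measured data from this work.

```python
# Back-of-the-envelope Wiedemann-Franz estimate of the electronic thermal conductivity.
L0 = 2.44e-8  # Sommerfeld value of the Lorenz number, W Ohm K^-2

def kappa_el(sigma, T):
    """kappa_el = L0 * sigma * T (SI units: sigma in 1/(Ohm m), T in K)."""
    return L0 * sigma * T

# Illustrative numbers: sigma ~ 1e5 (Ohm m)^-1 near T_LT ~ 80 K gives
# kappa_el ~ 0.2 W/(m K), small compared to a total kappa of a few W/(m K),
# so a kappa that rises while sigma drops must be phononic in origin.
estimate = kappa_el(1.0e5, 80.0)
```

This is the quantitative content of the statement that, with σ decreasing below T_LT, the observed increase of κ must come from κ_ph.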
We should mention here, as a further mechanism, a possible suppression of κ_ph due to scattering of phonons on anharmonic soft phonon vibrations associated with the structural transitions in doped La_2CuO_4. It is well known that the LTO phase is characterized by substantial anharmonicity of the tilting vibrations of the CuO_6 octahedra, which seems to increase with Sr doping [17]. A possible reason for this anharmonicity is the instability of the LTO phase towards the LTO → LTT phase transition. However, it is well known that this structural instability is mainly determined by steric lattice properties which depend on the RE content. In particular, the low temperature transition does occur at comparable T_LT for both Sr doped compounds and insulators with x = 0 [1,5,18]. It is apparent that the associated weak Sr concentration dependence of the structural instability does not correlate with the pronounced change of κ as a function of x in LSCO (inset of fig. 1). Moreover, a scenario which explains the pronounced damping of κ_ph in LSCO by the structural instability becomes very unlikely when taking into account the findings for insulators with x = 0. We find essentially the same κ_ph in the entire temperature range for La_2CuO_4 and for RE doped compounds, regardless of whether the structure is LTO or LTT. This means that neither the low temperature structure, i.e. the presence of the tilting instability at low temperatures in La_2CuO_4 and its absence in the LTT phase of the RE doped compounds, nor the very close proximity to the structural transition, which is apparently present close to T_LT in the RE doped compounds, causes significant differences in the phonon heat transport of these antiferromagnetic insulators. Thus we conclude that anharmonicity due to the structural instability cannot explain the damping of κ_ph in LSCO (x > 0) and its reappearance in the non-superconducting LTT phase.
Note that this conclusion is also supported by our findings for compounds with high Sr contents, since the strong low temperature increase of κ is not linked to the phase transition alone but to the electronic properties of the LTT phase (see figs. 2, 3). In particular, in the superconducting composition range of the LTT phase no low temperature enhancement of κ_ph is observed, although anharmonicity due to the structural instability is absent. We are thus led to the conclusion that conventional models for the phonon heat transport fail to explain the experimental results for RE and Sr doped La_2CuO_4. On the other hand, the correlation of the behavior of κ_ph at low temperatures with the occurrence of superconductivity suggests an electronic scattering channel for the phonons. If conventional phonon-electron scattering is inadequate, as shown above, one might imagine scattering of phonons on some 'collective electronic excitation'. In the following we suggest a possible such mechanism based on the formation of so-called stripe correlations of spins and holes and their dynamics. The neutron scattering experiments of Tranquada et al. [7] give evidence for static stripe correlations, i.e. a static spatial modulation of the charge and spin density in the CuO_2 planes, in the non-superconducting low temperature tetragonal phase of Nd doped LSCO (y = 0.4, x = 0.12). Remarkably, the corresponding magnetic superstructure reflections occur exactly at the wave vector of the well known incommensurate peaks in inelastic neutron scattering on superconducting LSCO [20] (without additional RE doping), leading to the suggestion that dynamic stripe correlations are present in superconducting LSCO [7]. Within this scenario a rather natural explanation of our results is possible. We first note that, via the well known relation between bond lengths and charge density, a spatially inhomogeneous charge distribution implies corresponding variations of the lattice constants.
Note that just these variations are observed in neutron scattering as superstructure reflections in the case of pinned hole stripes in the non-superconducting LTT phase. If the stripe correlations are dynamic, one expects that dynamic lattice modulations are induced, provided that the time scale of the dynamic stripe correlations is comparable to that of the lattice vibrations. We note that, being a collective excitation linked to the magnetic correlations, the characteristic energy scale of the stripe correlations is not given by the Fermi energy but should be much smaller. In fact, the inelastic neutron results on superconducting Sr doped La 2 CuO 4 [20] suggest an energy scale for the dynamic stripe correlations around 10 meV, comparable with typical phonon energies. Then the dynamics of the stripe correlations will couple strongly to the lattice, causing a pronounced damping of the phonons [19]. Accordingly, one expects a strong reduction of the lattice heat conductivity. In contrast, static stripe correlations will at most lead to some minor modifications of the phonon dispersion compared to pure La 2 CuO 4 , but they will not suppress the phonon heat conductivity significantly. Note that, if we assume that this scenario is the correct explanation for the behavior of κ ph , we may in turn conclude from the correlation between the suppression of κ ph and the occurrence of superconductivity that the dynamics of the stripe correlations is important for superconductivity. Up to now we have discussed the behavior of κ ph at the structural transition and have attributed the increase of κ ph to the pinning of stripe correlations. However, we emphasize that the reappearance of the maximum of κ found in strongly overdoped, non-superconducting LSCO without additional RE doping (inset fig. 1) also finds a straightforward explanation within this picture. At very high doping the magnetic correlations in the CuO 2 planes are weak or absent. 
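The energy-scale comparison invoked above (stripe dynamics around 10 meV versus typical phonon energies) can be made explicit with a back-of-the-envelope conversion. The following sketch is not from the paper; it simply converts 10 meV into a frequency and an equivalent temperature using exact SI constants:

```python
# Convert an excitation energy in meV to frequency (THz) and an
# equivalent temperature (K); constants are the exact SI values.
E_MEV = 10.0                      # energy scale of dynamic stripe correlations [20]
EV = 1.602176634e-19              # J per eV
H = 6.62607015e-34                # Planck constant, J*s
KB = 1.380649e-23                 # Boltzmann constant, J/K

energy_j = E_MEV * 1e-3 * EV
freq_thz = energy_j / H / 1e12    # ~2.4 THz, i.e. in the range of phonon frequencies
temp_k = energy_j / KB            # ~116 K, well inside the measured temperature range

print(f"{freq_thz:.2f} THz, {temp_k:.0f} K")
```

The resulting ~2.4 THz and ~116 K illustrate why a 10 meV stripe-dynamics scale can couple resonantly to heat-carrying phonons in the measured temperature window.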
Therefore the tendency to form stripe correlations vanishes or is at least strongly reduced. Accordingly, the strong suppression of the phonon heat conductivity discussed above is absent and the phonon maximum should reappear. We should note here that the maximum of κ at high Sr doping is usually attributed to the electronic contribution to κ. However, at such high Sr doping disorder scattering should suppress any low temperature maximum of the electronic contribution to the thermal conductivity, as is well known for conventional metals. Therefore, we believe that an electronic origin of the low temperature peak at high doping is possible in principle, but rather unlikely. We mention at this point that, as a further check of our interpretation, we have measured the thermal conductivity of La 2−x Sr x NiO 4 with x = 1/3. This material is an insulator, i.e. κ = κ ph . Moreover, no structural transition occurs below 300 K. On the other hand, the presence of charge and spin ordering below about 240 K is well established in this compound from neutron and electron diffraction studies [22]. Remarkably, the behavior of κ compares well with our findings for doped La 2 CuO 4 in the non-superconducting LTT phase [23]. Due to charge ordering κ increases, the slope ∂κ/∂T changes sign from negative to positive, and at low temperatures a pronounced maximum of κ occurs. These findings clearly confirm the interpretation for the cuprates presented here. We finally mention that, when assuming a relation between the dynamics of stripe correlations and superconductivity, the influence of the buckling distortion on the superconducting properties of the LTT phase is qualitatively expected. We recall that it has been shown in ref. [2] that the LTT phase is not superconducting if the buckling of the CuO 2 planes, i.e. the tilting of the CuO 6 octahedra, exceeds a critical value [11]. According to the results of Tranquada et al. 
the pinned stripes in a single CuO 2 layer are either parallel to the [100]- or to the [010]-direction (using the notation of an undistorted lattice). From the structure of the LTT phase it is apparent that there are differences between these directions, which increase with increasing buckling distortion. Therefore, assuming that the pinning of the stripe correlations requires a finite 'pinning potential', one expects that within the LTT phase stripe correlations are pinned only beyond a certain value of the buckling. In conclusion, we have shown that in RE and Sr doped La 2 CuO 4 the phononic thermal conductivity is strongly suppressed for all superconducting compositions. The conventional models of phonon heat transport based on phonon-defect scattering or conventional phonon-electron scattering fail to explain these results. In contrast, the recently suggested picture of dynamic and static stripe correlations of spins and holes allows for a straightforward explanation. If this explanation is correct, the correlation between suppressed κ ph and the occurrence of superconductivity indicates that the dynamics of stripe correlations is important for superconductivity in doped LSCO.

FIGURES

FIG. 1. Thermal conductivity of Pr (y = 0.85) and Nd (y = 0.6) doped La 1.88−y RE y Sr 0.12 CuO 4 normalized to κ(150K) as a function of temperature (see text). Inset: In-plane thermal conductivity κ ab of La 2−x Sr x CuO 4 single crystals as a function of temperature taken from ref. [13]. Samples with x = 0.10, 0.15, 0.20 are superconducting; those with x = 0, 0.30 are not.
Background and Objectives: To compare laparoscopic appendectomy with traditional open appendectomy. Methods: Seventy-one patients requiring operative intervention for suspected acute appendicitis were prospectively compared. Thirty-seven patients underwent laparoscopic appendectomy, and 34 had open appendectomy through a right lower quadrant incision. Length of surgery, postoperative morbidity, and length of postoperative stay (LOS) were recorded. Both groups were similar with regard to age, gender, height, weight, fever, leukocytosis, and incidence of normal vs. gangrenous or perforated appendix. Results: Mean LOS was significantly shorter for patients with acute suppurative appendicitis who underwent laparoscopic appendectomy (2.5 days vs. 4.0 days, p<0.01). Mean LOS was no different when patients classified as having gangrenous or perforated appendicitis were included in the analysis (3.7 days vs. 4.1 days, p = 0.11). The laparoscopy group had significantly longer surgery times (72 min vs. 58 min, p<0.001). There was no significant difference in the incidence of postoperative morbidity. Conclusions: Laparoscopic appendectomy reduces LOS as compared with the traditional open technique in patients with acute suppurative appendicitis. The longer operative time for the laparoscopic approach in our study is likely related to the learning curve associated with the procedure and did not increase morbidity.

INTRODUCTION

Several reports have demonstrated that laparoscopic appendectomy is technically feasible in the management of acute appendicitis. 1-10 Proponents of the procedure have claimed that it offers several advantages over the traditional open appendectomy through a right lower quadrant muscle splitting incision. However, the data concerning any superiority of the laparoscopic technique have not been entirely clear. 
To further evaluate this procedure, we prospectively compared patients undergoing laparoscopic versus traditional open appendectomy.

MATERIALS AND METHODS

We examined our experience with laparoscopic and open appendectomies for acute appendicitis during a 24-month interval. Surgical technique in each case was determined by individual surgeon preference and the availability of laparoscopy equipment. Patients who were not considered candidates for laparoscopic appendectomy were excluded from analysis; these included pregnant women, patients with a palpable mass in the right lower quadrant (presumably representing a large phlegmon or abscess), and patients with diffuse peritonitis. Thirty-seven patients underwent laparoscopic appendectomy, and 34 patients underwent traditional open appendectomy. The two patient groups were similar with regard to age, gender, height, weight, fever, and leukocytosis. The numbers of patients pathologically classified as having acute appendicitis versus gangrenous or perforated appendicitis were also similar. Open appendectomies were performed through traditional transverse, right lower quadrant, muscle splitting incisions. The mesoappendix was serially ligated with 3-0 Vicryl suture. The base of the appendix was doubly suture ligated with 0-chromic and then cauterized to prevent mucocele. The right lower quadrant was irrigated with 500 cc of normal saline. The peritoneum and internal oblique fascia were closed with a running 0-Vicryl suture as one layer, separately from the external oblique fascia; Scarpa's fascia was closed with a running 3-0 Vicryl suture. The skin was closed with staples. Laparoscopic appendectomy was approached by a three-trocar technique with the addition of a fourth trocar when necessary. Usually, a 10 mm port was placed at the umbilicus for the camera, a 12 mm port was placed in the suprapubic area, and another 10 mm port was placed in the right upper quadrant. 
When needed, a 5 mm or 10 mm port was placed in the left lower quadrant. The mesoappendix was transected using clips, ligatures, or an EndoGIA stapling and cutting device (United States Surgical Corporation, Norwalk, CT). The appendiceal stump was controlled with ligatures or an EndoGIA staple line. 11 The appendix was removed through the 12 mm port, either directly or after insertion into a bag. If the procedure could not be safely completed laparoscopically, the surgeon converted to an open procedure. These converted patients were included in the laparoscopy group for the subsequent analysis on an intent-to-treat basis. Surgical residents participated in nearly all cases in the roles of both "surgeon" and "first assistant." All attending surgeons and surgical residents had extensive prior experience with laparoscopic cholecystectomy. Postoperative pain was controlled by varying parenteral and oral regimens at the discretion of the physician. Patients were discharged when they had been afebrile for 24 hours and were tolerating a regular diet. Measurements of patient characteristics and illness severity included age, gender, height, weight, fever, leukocytosis, and appendiceal findings by pathology. Total operating room time was measured from the time of patient entrance into the room to the time of exit. Surgery time was measured from skin incision to skin closure. Postoperative hospitalization was measured from the date of surgery to the date of discharge. Postoperative pain control was evaluated by counting the number of administered analgesic doses. An unpaired Student's t-test was used to assess differences between the groups.

RESULTS

In the laparoscopy group, 31 patients had appendectomy completed laparoscopically; one had a normal-appearing appendix left intact (there was a clear diagnosis of pelvic inflammatory disease), and five (14%) had conversion to open appendectomy (Table 1). In four of the conversions, extensive adhesions and phlegmon precluded safe laparoscopic dissection. 
These four patients all had a gangrenous or perforated appendix. A fifth conversion was due to dense adhesions from prior surgery and an inadvertent enterotomy. All patients in the open group had the appendix removed. In the open group, four patients (12%) experienced postoperative morbidity (Figure 1). Two patients developed wound infections, one patient developed a pelvic abscess requiring transrectal drainage, and one had a postoperative myocardial infarction which resulted in the only death of this series. In the laparoscopy group, four patients (11%) suffered postoperative morbidity (Figure 1). One patient sustained a bladder perforation while a suprapubic port was being inserted under direct vision. This complication was not recognized until the postoperative period, at which time the patient underwent laparotomy for primary bladder repair. Another patient suffered an enterotomy during laparoscopic dissection through extensive dense adhesions from prior surgery. The abdomen was opened to complete the procedure. A third patient was readmitted with fevers 4 days after discharge. An abdominal CT scan showed a 3 cm diameter pelvic fluid collection which was aspirated under radiologic guidance. The fluid contained few white blood cells, and cultures were negative. The fevers resolved with antibiotic therapy. A fourth patient, who had been converted to an open procedure, was also readmitted with fevers. No infectious source was identified but the fevers resolved with antibiotic therapy. No wound infections occurred in the laparoscopy group. Analgesic and narcotic doses per postoperative hospital day were nearly identical in the laparoscopy and open groups: 2.1 vs. 2.2 analgesic doses and 1.9 vs. 2.0 narcotic doses per postoperative hospital day, respectively (Table 1). Twenty-four patients in the laparoscopy group and 23 patients in the open group were classified as having acute suppurative appendicitis (non-gangrenous, non-perforated). 
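The unpaired Student's t-test named in the Methods can be sketched in pure Python. The per-patient length-of-stay values below are invented purely to illustrate the computation (the paper reports only group means), and the function name is ours, not the study's:

```python
from math import sqrt
from statistics import mean, variance

def unpaired_t(a, b):
    """Two-sample Student's t statistic with pooled variance,
    the classical unpaired t-test for comparing two group means."""
    na, nb = len(a), len(b)
    # Pooled sample variance across both groups.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # t statistic and degrees of freedom

# Hypothetical per-patient lengths of stay in days; NOT the study's raw data.
lap = [2, 2, 3, 2, 3, 3]
open_ = [4, 3, 5, 4, 4, 4]
t, df = unpaired_t(lap, open_)
```

A negative t here reflects the shorter mean stay in the laparoscopy group; the p-value would then be read from the t distribution with the returned degrees of freedom.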
Mean LOS for the laparoscopy vs. the open group was 2.5 days versus 4.0 days (p<0.01). Overall LOS, with the analysis including patients in both groups classified as having gangrenous or perforated appendicitis, was not statistically significantly different between groups (3.7 days vs. 4.1 days, p = 0.11).

DISCUSSION

In the past few years, advances in laparoscopic technology have altered the surgical approach to various abdominal problems, with cholecystectomy being the most prominent example. A major benefit of laparoscopy apparently derives from the reduced abdominal wall trauma as compared to traditional open procedures. However, the abdominal wall injury may not be a significant factor for every type of abdominal incision and pathologic process. Traditional open appendectomy through a muscle splitting incision generally produces a relatively small degree of abdominal wall trauma. Patients typically return to normal activities more quickly than with other types of abdominal incisions. A more limiting factor in the postoperative course of acute appendicitis may be the sequelae of the inflammatory/infectious process itself. Patients require time for resolution of inflammatory changes and return of bowel function. Our experience appears to offer a reasonable preliminary comparison between these two technical approaches. Although patients were not truly randomized, the various patient characteristics and measurements of disease severity were equally distributed between the two groups. In Attwood's series, 10 patients were randomized between laparoscopic and open approaches but the report contained little information about the patients and the severity of illness. The longer total operating room and surgery times in the laparoscopic group were not surprising. Setting up laparoscopy equipment generally took longer than setting up traditional surgical equipment. To some degree, the frequent introduction of residents to this approach contributed to procedure length. 
While we observed no decrease in surgery time with increasing numbers of procedures, we still feel that our experience with this procedure is not yet sufficient to overcome the learning curve. As our collective experience with this approach expands, we fully expect to improve our statistics in this area. The complication rates in our two groups were similar to those previously reported for open appendectomy. 12,13 No wound infections occurred in any of the laparoscopy trocar sites. Two of the complications in the laparoscopy group were technical in nature. The patient who experienced the bladder perforation had somewhat unusual anatomy. The patient with the enterotomy had extensive lower abdominal adhesions from prior surgery. The laparoscopic technique did reduce the number of wounds that were packed open. Although packed wounds in open appendectomies reduce the incidence of wound infection, they also constitute an additional element of postoperative care and patient concern. The issue of packed wounds was not addressed in Attwood's 10 randomized study. The patients requiring conversion from laparoscopic to open appendectomy were included in the laparoscopy group on an intent-to-treat basis. With further experience, we may see a drop in the number of conversion cases but it is unlikely that these will be completely eliminated. The laparoscopy group showed a tendency to earlier postoperative discharge but the difference was not statistically significant. Typical postoperative hospitalization in previous reports on laparoscopic appendectomy is somewhat difficult to assess. Scott-Conner 8 reported a mean postoperative stay of 2.4 days for patients treated by laparoscopic appendectomy but excluded patients converted to an open procedure from the calculation. McAnena 5 reported an average postoperative stay of 4.8 days for 36 open patients and 2.2 days for 27 laparoscopic patients. 
The difference was reported as significant but two conversion cases appear to have been excluded from the laparoscopy group statistics. Attwood's 10 randomized study showed significantly earlier discharge (2.5 vs. 3.8 days, p<0.01) for laparoscopic appendectomy cases. Nowzaradan 4 only stated that laparoscopic appendectomy was associated with "a shorter hospital stay." Saye 3 did not provide statistics about hospitalization. The 625 patients of Pier 2 were "generally discharged a week after the operation." Two major factors may have lengthened the average postoperative stay of our laparoscopy group to 3.7 days. First, our statistics included the five patients who were converted to an open procedure. Second, our population contained more patients with advanced appendicitis. Our 35% incidence of gangrenous or perforated appendicitis in the laparoscopy group is greater than the 11% reported by McAnena 5 and the 6% reported by Scott-Conner. 8 The report by Attwood 10 did not mention the incidence of advanced appendicitis. Our population also had relatively few negative appendices with only 14%. We feel that delayed patient presentation in our population may be the primary explanation. Interestingly, when we excluded the gangrenous and perforated appendicitis patients, the difference in postoperative hospitalization between the two groups (2.5 vs. 4.0 days) became statistically significant (Figure 2). For this subgroup, the 2.5 day postoperative hospitalization for the laparoscopy group was comparable to the 2.4 day, 2.2 day, and 2. Although this sort of statistical manipulation carries limited power in a small series, the results suggest a possible relationship between disease severity, operative approach, and postoperative course. For early appendicitis with minimal peritonitis, the amount of abdominal wall trauma may play a significant role in postoperative recovery. Thus, the laparoscopic approach may result in a shorter hospitalization than the open approach. 
However, for advanced appendicitis, the intraperitoneal inflammation may be a more important determinant of postoperative course than the amount of abdominal wall injury from the operative approach. An important issue not addressed in other reports concerns the criteria for patient discharge from the hospital. The clinical decision to send a patient home on a certain day rather than one day earlier or one day later can bias results which may be looking at only a one day difference between groups. Attwood 10 randomized the operative approach but provided no information about discharge criteria. It would be difficult to blind a surgeon with regard to operative approach so that an unbiased decision could be made regarding hospital discharge. We would advise surgeons to always remove the appendix when laparoscopically approaching patients with a preoperative diagnosis of acute appendicitis. This helps avoid future diagnostic dilemmas and prevents missing an early acute appendicitis with minimally visible inflammatory changes. In addition, the laparoscopic approach does not allow for appreciation of palpable abnormalities. One of our patients with a visibly normal appendix showed distinct microscopic acute appendicitis by pathology. We attempted to use laparoscopy as a technical approach to the appendix in patients with suspected appendicitis by traditional clinical criteria. We realize that most appendectomies are diagnostic procedures initially since there is no other confirmatory test prior to operative intervention. Laparoscopy may have utility as a diagnostic tool in a broader group of patients with lower abdominal pain but we are uncertain about the indications for use. Proponents have claimed that the laparoscopic approach allows patients to resume work or their normal lifestyle earlier than the traditional open approach but the data has not been clear. 
In Scott-Conner's report, 8 all patients had returned to "normal activities" by their first postoperative visit one to two weeks after surgery. Attwood's randomized study 10 attempted to gather more detailed information by contacting patients after their postoperative clinic visit and asking about duration of pain and return to "employment, sport, and full fitness." The laparoscopic appendectomy patients appeared to have experienced a quicker recovery. However, Attwood's series also contained mostly younger patients, with a mean age of 20.8 years (range 12 to 39) in the laparoscopy group. In addition, we do not know how many of his patients had advanced appendicitis. We suspect that patients with simple appendicitis recover more quickly than patients with complicated appendicitis. In our series, we observed that laparoscopic appendectomy patients typically returned to normal activities within one to two weeks but we could not gather sufficient data for presentation. A patient's return to work or "normal" activity can be influenced by factors beyond the illness or surgical technique. The type of work, the desire to work, the employment status, and the availability of vacation time can significantly influence a patient's "progress." We also observed many motivated open appendectomy patients who returned to work one to two weeks after surgery.

CONCLUSIONS

Laparoscopic appendectomy reduces LOS as compared with the traditional open technique in patients with acute suppurative appendicitis. Mean LOS is not statistically different between groups when the analysis includes patients with perforated or gangrenous appendicitis. The greater severity of illness in these patients likely outweighs those advantages of the laparoscopic approach which led to a decreased LOS in patients with uncomplicated appendicitis. 
Operative time for laparoscopic appendectomy was longer than for open appendectomy and is likely related to the learning curve associated with the procedure. Use of postoperative analgesia and incidence of postoperative morbidity were not statistically significantly different between groups. We agree with other authors recommending further investigation of the laparoscopic approach in the management of right lower quadrant pain. A randomized trial should be done to more clearly define the role of laparoscopy as a "diagnostic" and "therapeutic" modality in these patients.
The cilia-regulated proteasome and its role in the development of ciliopathies and cancer The primary cilium is an essential structure for the mediation of numerous signaling pathways involved in the coordination and regulation of cellular processes essential for the development and maintenance of health. Consequently, ciliary dysfunction results in severe human diseases called ciliopathies. Since many of the cilia-mediated signaling pathways are oncogenic pathways, cilia are linked to cancer. Recent studies demonstrate the existence of a cilia-regulated proteasome and that this proteasome is involved in cancer development via the progression of oncogenic, cilia-mediated signaling. This review article investigates the association between primary cilia and cancer with particular emphasis on the role of the cilia-regulated proteasome. Background The precise coordination and regulation of cellular processes is the basis for the development and the homeostasis of a multi-cellular organism. To ensure this high precision, the cell makes use of a special structure that is observed as a 1-10-μm-long cellular evagination-the primary cilium. Simplified, the structure of the cilium consists of three different compartments-the basal body (BB), the axoneme, and the transition zone (TZ). The BB is a remodeled mother centriole from which the ciliary scaffold (axoneme) consisting of circularly arranged nine doublet microtubules arises. The intermediate region from the BB to the axoneme is a short area of 0.5 μm called TZ. The primary cilium plays a decisive role in the initiation of the molecular mechanisms underlying cellular processes like proliferation, apoptosis, migration, differentiation, transcription, and the determination of cell polarity [1,2]. Consequently, ciliary dysfunction results in severe diseases collectively summarized as ciliopathies. 
Well-known ciliopathies are: Joubert syndrome (JBTS), Leber's congenital amaurosis (LCA), Senior-Løken syndrome (SLS), nephronophthisis (NPHP), Meckel-Gruber syndrome (MKS), Bardet-Biedl syndrome (BBS), orofaciodigital syndrome type 1 (OFD1), Alström syndrome (ALS), Jeune asphyxiating thoracic dystrophy (JATD), Ellis-van Creveld syndrome (EVC), and Sensenbrenner syndrome (cranioectodermal dysplasia [CED]) [3]. Additionally, cilia are linked to cancer. The current, general view is that, on the one hand, primary cilia mediate oncogenic signaling and, on the other hand, cilia are lost in some types of cancer. In this review article, the role of cilia in cancer development will be discussed with particular regard to the cilia-controlled proteasome. The focus is on the question: what is the significance of the cilia-regulated proteasome in terms of carcinogenesis? Of all the investigated associations between primary cilia and signaling pathways, the relationship between primary cilia and SHH signaling is the best studied. In SHH signaling, the 12-pass transmembrane protein patched1 (PTCH1) is located in the ciliary membrane of vertebrates (Fig. 1a). When the SHH ligand binds to its receptor PTCH1, the SHH/PTCH1 complex leaves the cilium. As a consequence, the seven-transmembrane protein smoothened (SMO) is allowed to accumulate in the ciliary membrane and to activate glioblastoma (GLI) transcription factors. Three GLI isoforms exist in vertebrates: GLI1, 2, and 3. The GLI proteins regulate the expression of SHH target genes and thereby cell proliferation, differentiation, survival, and growth [19,20]. While GLI1 exclusively functions as a constitutive transcriptional activator [21,22], GLI2 and GLI3 can serve as an activator or a repressor [23]. In the presence of SHH, the full-length GLI2 (GLI2-185) and GLI3 proteins are converted into transcriptional activators (GLI2-A and GLI3-A, respectively), most likely by modifications [24,25]. 
In the absence of SHH, the full-length proteins can be proteolytically processed into transcriptional repressors (GLI2-R, also known as GLI2-78, and GLI3-R, also known as GLI3-83, respectively) [26]. It was reported that GLI3-R is the predominant repressor of SHH target gene transcription [26]. The ratio of activator and repressor forms regulates cellular processes dependent on SHH signaling. Similar to SHH signaling, activated PDGF receptors control cellular processes like proliferation, anti-apoptosis, migration, differentiation, actin reorganization, and cell growth [27][28][29]. The receptor PDGFRα localizes to cilia and undergoes dimerization and phosphorylation after being bound by its ligand PDGF-AA [14] (Fig. 1b). Stimulation of PDGFRα provokes the activation of signal transduction through the MEK 1/2-ERK 1/2 and AKT/PKB pathways. In the absence of cilia, PDGFRα signaling is inhibited [14]. Additionally, PDGFRα signaling is restricted by the mammalian target of rapamycin (mTOR) signaling pathway [30][31][32], which is also associated with cilia-mediated signaling. LKB1, a negative regulator of mTOR, localizes to cilia, and its action leads to an accumulation of phosphorylated AMPK at the basal body [33]. In turn, the phosphorylation of AMPK results in the inhibition of mTOR signaling via a mechanism that is only poorly understood. Interestingly, deregulation of mTOR signaling has been described in many cancer types [34][35][36]. Previously, it has been demonstrated that NOTCH signaling depends on primary cilia [16,17] (Fig. 1c). NOTCH signaling starts when the extracellular domain of a NOTCH ligand, e.g., delta-like 1-4 or jagged 1-2, binds to the NOTCH receptor (NOTCH1-4) [37]. A ciliary localization was shown for NOTCH1 and NOTCH3 [16,17]. After the binding event, the NOTCH receptor undergoes a three-step cleavage and finally releases the NOTCH intracellular domain (NIC). 
Following this, NIC enters the nucleus and interacts with its DNA-binding cofactor RBP-J/CBF1/CSL, thereby activating NOTCH target genes. NOTCH signaling controls, among other processes, proliferation and differentiation [38]. Moreover, TGFβ signaling relates to cilia [18] (Fig. 1d). Both receptors of the pathway, TGFβ-RI and TGFβ-RII, are located at the base of primary cilia. The ligand-induced formation and activation of a heterotetrameric receptor complex composed of TGFβ-RI and TGFβ-RII results in the phosphorylation and activation of the SMAD2 and SMAD3 proteins, which are present at the ciliary base [18]. The phosphorylated SMADs 2 and 3 associate with a co-SMAD called SMAD4 that is also detectable at the base of cilia. Subsequently, the complex consisting of SMAD2, 3, and 4 enters the nucleus and activates TGFβ target genes. TGFβ target genes control cellular processes like proliferation, differentiation, morphogenesis, tissue homeostasis, and regeneration [39]. Primary cilia are also connected to WNT signaling [40], which can be classified as canonical (β-catenin dependent) or non-canonical (β-catenin independent). In the inactive state of the canonical WNT pathway, a destruction complex consisting of adenomatous polyposis coli (APC) and AXIN triggers the phosphorylation of β-catenin by casein kinase 1 (CK1) and glycogen synthase kinase 3 (GSK3) (Fig. 1e). Following this phosphorylation, β-catenin is ubiquitinated and finally degraded [41]. The WNT/β-catenin pathway is initiated by binding of WNT ligands to frizzled (FZ) receptors and low-density lipoprotein-related proteins 5/6 (LRP 5/6) and leads to the activation of the cytoplasmic phosphoprotein dishevelled (DSH). Subsequently, DSH recruits the destruction complex to the plasma membrane, thereby inhibiting the phosphorylation of β-catenin. This action of DSH enables β-catenin to translocate into the nucleus and activate target gene transcription. 
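The destruction-complex logic just described reduces, in caricature, to a boolean switch. The sketch below is purely illustrative (real signaling is quantitative, not boolean); the function name and return strings are schematic, and the `kif3a_present` flag stands for the ciliary restriction reported for Kif3a-negative mice [47]:

```python
def beta_catenin_fate(wnt_ligand_bound: bool, kif3a_present: bool = True) -> str:
    """Toy decision logic for canonical WNT signaling (schematic only).

    Loss of cilia (kif3a_present=False) caricatures the finding that DSH is
    constitutively phosphorylated in Kif3a-negative mice, so canonical
    signaling runs even without a WNT ligand.
    """
    if wnt_ligand_bound or not kif3a_present:
        # DSH active: destruction complex sequestered at the membrane,
        # beta-catenin escapes phosphorylation and enters the nucleus.
        return "target genes ON"
    # Destruction complex (APC/AXIN with CK1/GSK3) phosphorylates
    # beta-catenin, which is then ubiquitinated and degraded.
    return "beta-catenin degraded"
```

The third case, `beta_catenin_fate(False, kif3a_present=False)`, encodes why intact cilia are said to limit canonical WNT signaling.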
Several processes are controlled by canonical WNT signaling: cell fate determination, migration, proliferation, tumor suppression, and self-renewal of stem and progenitor cells [42,43]. In contrast to canonical WNT signaling, the non-canonical WNT pathway is less well understood. Hence, it is unknown whether β-catenin-independent WNT pathways function as distinct pathways or whether they form one large signaling network [44]. Like the canonical WNT pathway, it starts with a WNT ligand binding to the FZ receptor, but it does not require the presence of LRP co-receptors or β-catenin. Non-canonical WNT signals are mediated through intracellular Ca 2+ levels and the involvement of RHO A, ROCK, and JNK kinase. These factors play an important role in the regulation and remodeling of the cytoskeleton and are greatly involved in the control of planar cell polarity (PCP). PCP is established by intercellular communication that regulates the composition of cells' polarizing structures within the plane of a tissue, e.g., stereocilia bundle orientation in the inner ear [45]. In addition to managing cytoskeleton organization, non-canonical WNT signals regulate proliferation and migration [46]. The restriction of canonical WNT signals by cilia is likely, since DSH is constitutively phosphorylated in Kif3a-negative mice, which are unable to assemble cilia [47]. However, non-canonical WNT signaling seems to be mediated by primary cilia [8-10]. One core PCP gene product, van gogh-like 2 (VANGL2), was found in cilia [48]. The ciliary presence of VANGL2 [48] and the finding that VANGL2 is essential for the transduction of WNT5a-induced signals to establish PCP [49] suggest that non-canonical WNT signaling might be mediated by cilia. This hypothesis is supported by data showing that disruption of BBS protein function leads to ciliary dysfunction along with perturbation of PCP [48] and that ciliopathy genes interact genetically with VANGL2 [48,50]. In summary, these data suggest that primary cilia mediate non-canonical WNT signals and limit canonical WNT signaling [51]. Dysregulation of any of these pathways can lead to oncogenesis. In many cases, upregulation of their target gene expression leads to increased cell proliferation, which in turn causes tumorigenesis [52-56].

Fig. 1 Cilia-mediated signaling pathways whose proper regulation is dependent on the proteasome, and the structure of the proteasome. a-e SHH, PDGFRα, NOTCH, TGFβ, and canonical WNT signaling are transduced by primary cilia. a In the absence of the ligand SHH, SMO remains in cytoplasmic vesicles and is inhibited by PTCH1. As a result, GLI2 and GLI3 (forming a complex with SUFU) are phosphorylated, most likely within the cilium, and subsequently are proteolytically processed to their repressor forms (GLI2/3-R) by the proteasome at the ciliary base. In turn, GLI2/3-R translocate into the nucleus and repress the expression of SHH target genes. Importantly, GLI3 is the predominant repressor. When SHH binds to its receptor PTCH1, the SHH/PTCH1 complex leaves the cilium and PTCH1 is no longer able to inhibit the action of SMO. Thereupon, SMO is transported into the cilium and converts the full-length forms of GLI2 and GLI3 (GLI2/3-FL) into their activator forms. In the course of this conversion process, SUFU dissociates from the complex, enabling the GLI2 and GLI3 activator forms to induce SHH target gene expression. b In the ciliary membrane, PDGFRα is bound by its ligand PDGF-AA and subsequently becomes dimerized and phosphorylated. The phosphorylation of PDGFRα induces the activation of the MEK 1/2-ERK 1/2 and AKT/PKB signaling pathways. c Initiating NOTCH signaling, the extracellular domain of a NOTCH ligand (JAGGED or DELTA) binds to the NOTCH receptor, which is located in the ciliary membrane. As a result, the NOTCH receptor undergoes a three-step cleavage and finally releases the NOTCH intracellular domain (NIC). NIC enters the nucleus and activates NOTCH target genes. d The receptors of the TGFβ pathway, TGFβ-RI and TGFβ-RII, are located at the ciliary base. When the TGFβ ligand binds to the receptors, a heterotetrameric receptor complex composed of TGFβ-RI and TGFβ-RII is formed and activated. This activation results in the phosphorylation and activation of SMAD2 and SMAD3. The phosphorylated SMADs 2 and 3 associate with a co-SMAD called SMAD4. Afterwards, the complex consisting of SMAD2, 3, and 4 enters the nucleus and activates TGFβ target genes. e In the inactive state of the canonical WNT pathway, a destruction complex consisting of APC and AXIN triggers the phosphorylation of β-catenin by GSK3. After this phosphorylation event, β-catenin gets ubiquitinated and finally degraded. In the active state, WNT ligands bind to FRIZZLED and LRP receptors, leading to the activation of DSH. DSH recruits the destruction complex to the plasma membrane, thereby interfering with the phosphorylation of β-catenin. Afterwards, β-catenin translocates into the nucleus and activates canonical WNT target gene expression. Primary cilia restrict canonical WNT signaling because the ciliary protein KIF3A is able to inhibit the phosphorylation of DSH. f The proteasome consists of the catalytic 20S subunit and two regulatory 19S subunits. The 20S subunit displays a cylindrical arrangement of four stacked heptameric rings. Each ring is composed of seven α or β subunits, respectively. Only three subunits (PSMB8-10) display a proteolytic activity, equipping the proteasome with trypsin-like, chymotrypsin-like, and caspase-like abilities. The 19S subunit can be subdivided into two subcomplexes: a base complex (constituted of six ATPases [PSMC1-6] and three non-ATPases [PSMD1, 2, and 4]) and a lid complex (consisting of nine non-ATPases [PSMD3, 6-8, 11-14, and SHFM1]).
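The subunit composition given in the Fig. 1f legend can be captured as a small data model. This is an illustrative sketch following the legend's lists; the α-β-β-α ring order is a standard assumption not spelled out in the legend itself:

```python
# Toy data model of the 26S proteasome composition as described in the
# Fig. 1f legend: a catalytic 20S core of four stacked heptameric rings
# plus 19S regulators with base and lid subcomplexes.
PROTEASOME_26S = {
    "20S_core": {
        # four stacked heptameric rings; alpha-beta-beta-alpha order assumed
        "ring_order": ("alpha", "beta", "beta", "alpha"),
        "subunits_per_ring": 7,
        "catalytic_subunits": ["PSMB8", "PSMB9", "PSMB10"],
        "activities": ["trypsin-like", "chymotrypsin-like", "caspase-like"],
    },
    "19S_regulator": {
        "base": {
            "ATPases": [f"PSMC{i}" for i in range(1, 7)],   # PSMC1-6
            "non_ATPases": ["PSMD1", "PSMD2", "PSMD4"],
        },
        "lid": {
            "non_ATPases": ["PSMD3", "PSMD6", "PSMD7", "PSMD8", "PSMD11",
                            "PSMD12", "PSMD13", "PSMD14", "SHFM1"],
        },
    },
}

def total_20s_subunits(model):
    """Count the subunits in the 20S cylinder (rings x subunits per ring)."""
    core = model["20S_core"]
    return len(core["ring_order"]) * core["subunits_per_ring"]

print(total_20s_subunits(PROTEASOME_26S))  # 28 subunits form the 20S cylinder
```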
One of the best studied oncogenic signaling pathways is the SHH pathway, which has already been analyzed in combination with cilia in cancer cells [57,58]. In 2009, Han et al. and Wong et al. [59,60] described the role of primary cilia in the development of medulloblastomas and basal cell carcinomas. In regard to SHH signaling, both groups showed that the absence of cilia can protect against tumorigenesis and, in addition, that the presence of cilia can be necessary for the induction of tumors. First, they induced tumorigenesis via a cell type-specific expression of an activated SMO protein. Then, they repeated the experiments in mice that were unable to form cilia in the particular cell type giving rise to either medulloblastomas or basal cell carcinomas. In both cases, ciliary deficiency protected against SMO-induced tumorigenesis [59,60]. Second, the same groups investigated the consequences of constitutively active GLI2 on tumorigenesis [59,60]. In the case of basal cell carcinoma development, constitutively active GLI2 was sufficient to induce carcinogenesis [60], while, in the case of medulloblastoma development, constitutively active GLI2 did not give rise to carcinogenesis [59]. Importantly, the combination of constitutively active GLI2 and loss of cilia led to the formation of medulloblastomas [59], providing circumstantial evidence that the additionally decreased amount of GLI3-R caused by ciliary absence might be necessary to induce oncogenesis. Accordingly, activation of SHH target gene expression alone is not strong enough to drive the development of some cancer types; only in combination with an inhibited repression of SHH target gene expression, achieved by reducing the amount of GLI3-R, is it sufficient to induce oncogenesis. Possibly, these differences arise because the importance of GLI3-R differs between cancer types.
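The genotype-outcome combinations reported by Han et al. and Wong et al. can be condensed into a small lookup table. This is a toy summary of the cited mouse experiments; the key names are invented for illustration:

```python
# Toy lookup of the tumorigenesis outcomes reported by Han et al. and
# Wong et al. [59,60]. Keys are (oncogenic driver, cilia present);
# values indicate whether tumors formed. Only combinations reported
# in the text are included.
OUTCOMES = {
    ("active_SMO", True):       True,   # cilia are required for SMO-driven tumors
    ("active_SMO", False):      False,  # ciliary loss protects against them
    ("active_GLI2_bcc", True):  True,   # GLI2 alone suffices in basal cell carcinoma
    ("active_GLI2_mb", True):   False,  # GLI2 alone fails in medulloblastoma...
    ("active_GLI2_mb", False):  True,   # ...unless cilia (and thus GLI3-R) are lost
}

def tumor_expected(driver, cilia_present):
    """Return the reported outcome; untested combinations raise KeyError."""
    return OUTCOMES[(driver, cilia_present)]

print(tumor_expected("active_GLI2_mb", False))  # True
```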
Perhaps the efficiency of GLI3 processing even differs between cancer types, so that the amount of GLI3-R varies. A decisive factor for the proteolytic processing of GLI3 is the proteasome.

The proteasome and cancer

The proteasome functions as the catalytic component of the ubiquitin-proteasome system and consists of 19S and 20S subunits (Fig. 1f). Proteins destined to be degraded or proteolytically processed become phosphorylated and ubiquitinated. Polyubiquitin conjugation is realized by the cooperation of a ubiquitin-activating enzyme (E1), a ubiquitin-conjugating enzyme (E2), and a ubiquitin ligase (E3). In the search for molecular mechanisms underlying carcinogenesis, it was reported that while E1 was never found to be associated with tumor formation, deregulation of E2 and especially E3 was detected in tumors [61]. In some cases, E3 ligases are inactivated, leading to a stabilization of oncogene products. In other cases, E3 ligases are overexpressed, causing an increased degradation of tumor suppressor proteins [62]. Finally, ubiquitinated proteins bind to the 19S regulatory complex. Thereafter, they are degraded by the 20S subunit, which harbors multiple peptidase activities [63]. Besides degrading proteins, the proteasome is able to proteolytically process them. A well-studied processing event is the transformation of full-length GLI3 into its shorter repressor form. This process depends on a three-part signal [64]. The first processing signal is the zinc finger domain of the GLI3 protein, which serves as a physical barrier to the proteasome. It prevents degradation of the GLI3 protein and is an essential prerequisite for GLI3 processing. Accordingly, the proteasome is not the factor which distinguishes degradation from processing; rather, the protein which is degraded or processed determines its own fate via its sequence.
The linker sequence which extends between the zinc finger domain and the lysines of the degron sequence functions as the second processing signal. Most likely, the proteasome binds to the linker area, which is assumed to be a proteasome initiation region. The degron is the third processing signal and the starting point of proteasomal processing. In addition to its role in SHH signaling, the proteasome is important for the proper course of several cilia-mediated signaling pathways. It was reported that PDGFRα signaling is upregulated in cancer cells due to an elevated amount of PDGFRα [65]. In these cells, HSP90 and the co-chaperone CDC37 form a complex with PDGFRα, making it inaccessible to proteasomal degradation (Fig. 1b). Previously, it was reported that the amount of PDGFRα can also be decreased in kidney tumors, while the amount of mTOR is increased and mTOR signaling is upregulated [30,31,66]. Because mTOR regulates PDGFRα signaling negatively by reducing the amount of PDGFRα [30] and governs proteasomal activity positively [67], it is conceivable that mTOR controls the PDGFRα amount via regulation of proteasomal activity. If this hypothesis is true, cancers with a high PDGFRα amount might be characterized by downregulated mTOR signaling; to our knowledge, evidence for this possibility has not yet been found. The proteasome is also involved in the regulation of NOTCH signaling, because it controls the NIC amount [68,69] (Fig. 1c). In lung adenocarcinoma cells, proteasomal degradation of NIC is impaired, resulting in enhanced cell proliferation and hence tumorigenesis [70]. Furthermore, TGFβ signaling requires the services of the proteasome. Phosphorylated SMAD2 and SMAD3, the central transducers of the pathway, are inactivated by proteasomal degradation [71,72] (Fig. 1d). Accordingly, reduced proteasomal degradation of these SMADs gives rise to hyperproliferative diseases like cancer [71].
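The three-part GLI3 processing signal described above lends itself to a small decision function. This is an illustrative toy of the described logic, not a biochemical model; the function and argument names are invented:

```python
# Sketch of the three-part processing signal described for GLI3:
# processing (rather than complete degradation) requires
# (1) a degron as the starting point of proteolysis,
# (2) a linker region serving as a proteasome initiation region, and
# (3) a stable barrier domain (GLI3's zinc finger) that halts proteolysis.
def proteasomal_fate(has_degron, has_initiation_linker, has_barrier_domain):
    """Return 'ignored', 'degraded', or 'processed' for a toy substrate."""
    if not (has_degron and has_initiation_linker):
        return "ignored"      # proteasome cannot engage the substrate
    if has_barrier_domain:
        return "processed"    # proteolysis stops at the barrier -> repressor form
    return "degraded"         # no barrier -> complete degradation

print(proteasomal_fate(True, True, True))   # GLI3-like substrate: 'processed'
print(proteasomal_fate(True, True, False))  # barrier-less substrate: 'degraded'
```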
As previously mentioned, canonical WNT signaling is most likely restricted by primary cilia [47]. At the base of these cilia, the proteasome degrades β-catenin that is phosphorylated at Ser33, Ser37, and Thr41 [47,50] (Fig. 1e). In some tumors, this kind of phosphorylation is prevented by mutations, resulting in a stabilization of β-catenin, which then is able to activate the transcription of many oncogenes [73,74]. Consequently, canonical WNT signaling is restricted not only by primary cilia but also by proteasomal degradation of β-catenin. In contrast to the signaling pathways described above, an essential role of the proteasome in non-canonical WNT signaling has never been described. In sum, a decreased proteasomal activity causes a deregulation of signaling pathways, leading to increased cell proliferation and, in turn, to the development of cancer. However, numerous studies show that proteasomal activity is enhanced in cancer cells [75-89], representing an obvious discrepancy. A plethora of point mutations in cancer genomes leads to a very high number of misfolded proteins [90]. It was hypothesized that the cell faces this enormous surge of useless and even harmful proteins with enhanced proteasome-mediated degradation [91]. Moreover, estimates suggest that 90% of human solid tumors comprise cells with more than two copies of one or more chromosomes [92]. For this reason, a huge surplus of proteins is produced in these cells, resulting in a cellular protein imbalance [93,94]. Consequently, many proteins are not able to form a stable conformation and get degraded by the proteasome [95,96]. Thus, cancer cells show an increased proteasomal activity for various reasons. This phenomenon has been designated as "proteotoxic crisis" [91]. Based on this knowledge, proteasome inhibitors are used in anti-cancer therapies [97].
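The phosphorylation requirement for β-catenin degradation described above can be expressed as a simple check. The site names follow the text (Ser33, Ser37, Thr41); the rest is an illustrative toy, not a biochemical model:

```python
# Toy check of the beta-catenin phosphodegron described in the text:
# proteasomal degradation at the ciliary base requires phosphorylation
# at Ser33, Ser37, and Thr41; tumor mutations preventing these events
# stabilize beta-catenin, which then activates oncogene transcription.
DEGRON_SITES = {"S33", "S37", "T41"}

def beta_catenin_degraded(phosphorylated_sites):
    """True if the full degron is phosphorylated (-> degradation)."""
    return DEGRON_SITES <= set(phosphorylated_sites)

print(beta_catenin_degraded({"S33", "S37", "T41"}))  # True: degraded
print(beta_catenin_degraded({"S33", "T41"}))         # False: stabilized
```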
However, there is a unique class of cancer cells with a decreased proteasomal activity in which the use of proteasome inhibitors would be counterproductive. Reduced proteasomal activity is a hallmark of several cancer stem cells (CSCs) [98-103]. In contrast, glioma stem-like cells (GSCs) show an increase of proteasomal activity [104], suggesting that proteasomal activity may vary among types of CSCs. It is doubtful, however, whether GSCs belong to the group of CSCs, because they maintain only some properties of CSCs [105]. CSCs (also known as cancer-initiating cells) reflect a new understanding of tumorigenesis. In contrast to the "stochastic model", in which every cancer cell of a tumor is capable of repopulating the entire tumor because of its property of self-renewal, this model conveys the idea that only a small group of cancer cells (the CSCs) within a tumor has the ability to repopulate the tumor and that the progeny of these cells loses this ability [106-109]. Even in the course of chemotherapy, CSCs are able to survive and initiate the re-growth of tumors [110,111]. Thus, CSCs are a reason for the resistance of tumors to conventional anti-cancer therapies. Consequently, it is a challenging task for current research to develop new anti-cancer therapies which target CSCs [111]. In the development of this type of anti-cancer therapy, a broad spectrum of pharmaceutical compounds was tested. Interestingly, natural dietary compounds came into focus [112]. Since proteasomal activity is reduced in most CSCs, and since the decisive signals thought to underlie the self-renewal mechanism of CSCs include SHH, PDGFRα, NOTCH, TGFβ, and WNT signaling [106,113-119], one of these compounds is of particular interest: sulforaphane (SFN; 1-isothiocyanato-4(R)-methylsulfinylbutane), an ingredient of broccoli, which functions as a proteasome activator [120]. In 2010, Li et al.
[101] tested the effect of SFN on breast cancer cells. They concluded that SFN treatment downregulated canonical WNT signaling by promoting proteasomal degradation of β-catenin in CSCs. The SFN treatment eliminated breast CSCs [101], indicating that the decreased proteasomal activity is essential for the survival of CSCs and that SFN could be an effective drug in anti-cancer stem cell therapies.

Primary cilia and the proteasome

Having reviewed the connections between primary cilia and cancer as well as between the proteasome and cancer, we now turn to the relationship between primary cilia and the proteasome in order to approach the molecular mechanisms underlying cancer development. As early as 2003, it was suggested that although proteasomes exist almost ubiquitously within the cytoplasm and the nucleus, "their function is likely to be different at different cellular locations" and that "this probably depends on post-translational modifications of proteasomal subunits and on their association and interaction with specific regulatory proteins" [121]. In 2007, Gerdes et al. [50] reported that the ciliary protein BBS4 is involved in the proteasomal degradation of cytoplasmic β-catenin, the mediator of canonical WNT signaling. In the following years, interactions of a whole range of ciliary proteins with proteasomal components were identified (Table 1), indicating a possible link between cilia and the proteasome. In this context, it was shown that the ciliary proteins BBS1, BBS2, BBS4, BBS6, BBS7, BBS8, and OFD1 interact directly with different proteasomal components [122]. The loss of BBS4, BBS7, or OFD1 leads to a reduced proteasomal activity, impairing intercellular signaling pathways [50,122,123]. In search of the molecular reason for the depleted proteasomal activity, Liu et al.
[122] measured a decreased amount of different proteasomal components in the absence of BBS4 and OFD1, respectively, demonstrating that these proteins control the composition of the proteasome. Since all these proteins localize to the basal body (BB), which is equivalent to the mother centriole in the absence of cilia, the authors of this study refer to the effect of these proteins on the "centrosomal proteasome" [122]. The existence of a centrosome-associated proteasome had already been shown before [124,125]. Thus, the question arises whether the cilium is important for proteasomal function or whether the centrosome alone regulates proteasomal activity. Three components of the 19S proteasomal subunit (PSMD2, PSMD3, and PSMD4) were detected at the BB of mouse embryonic fibroblast (MEF) cilia [126]. However, the detection of proteasomal components at the BB is not sufficient to answer this question; it might be that the centrosomal proteasome and the putative ciliary proteasome (a proteasome that functions cilia-dependently) are one and the same. Remarkably, a component of the 20S proteasomal subunit (PSMA5) was found along the whole cilium, increasing the likelihood of a ciliary involvement in proteasome assembly or function [126]. Interestingly, the ubiquitin conjugation system has been described in flagella of the single-celled green alga Chlamydomonas reinhardtii but, in contrast to the cilia of MEFs, no proteasomal components were detected in these flagella [127], indicating that the potential ciliary proteasome developed later in evolution and might even be vertebrate-specific. Using the G-LAP-Flp purification strategy in mammalian cell lines [128], which ensures high-confidence proteomics, numerous interactions of the transition zone proteins INVS (also known as NPHP2), IQCB1 (also known as NPHP5), and RPGRIP1L (also known as FTM, NPHP8, or MKS5) with different components of the proteasome were detected [129].
It was already shown that these three proteins are located at the centrosomes during mitosis [126,129-132], enabling a putative interaction with a component of the centrosomal proteasome. In Rpgrip1l-negative MEFs and limbs of mouse embryos, a reduced proteasomal activity was quantified at the ciliary base. In contrast to the situation in the absence of BBS4 and OFD1, which was characterized by a reduced overall cellular proteasomal activity, RPGRIP1L deficiency results in a decreased proteasomal activity exclusively at the base of cilia (in ciliary absence, the proteasomal activity at centrosomes of Rpgrip1l −/− MEFs is unaltered), demonstrating the existence of a ciliary proteasome [122,126]. This study shifted attention from the connection between the centrosome and the proteasome to the link between primary cilia and the proteasome. Also in contrast to the situation in the absence of BBS4 and OFD1, which was characterized by a depletion of proteasomal components, RPGRIP1L deficiency results in an accumulation of proteasomal 19S and 20S subunit components at the ciliary base [122,126]. Another difference between these ciliary proteins is the choice of their proteasomal interaction partners. While RPGRIP1L and OFD1 have been shown to interact with components of the 19S proteasomal subunit, BBS4 interacts with components of the 19S as well as the 20S proteasomal subunit (Table 1). All these findings indicate that ciliary proteins use different mechanisms to regulate proteasomal activity. Mutations in RPGRIP1L, BBS4, and OFD1 give rise to very severe ciliopathies which often lead to death in humans and mice [133-143]. These ciliary proteins regulate proteasomal activity [50,122,126], and the proteasome is involved in the development and function of numerous organs and structures of the human body [144-146]. Therefore, reduced activity of the cilia-regulated proteasome is a potential cause of ciliopathies.
Appropriately, in silico studies using a systematic network-based approach to work out the "cilia/centrosome complex interactome" (CCCI) revealed that the largest community of the CCCI is composed of proteasomal components [147]. Thus, it is likely that the relationship between ciliary proteins and the proteasome is of great importance. Further evidence for this importance is given by rescue experiments in vivo. The injection of proteasomal component mRNA or SFN treatment restored defective convergent extension and somite definition in zebrafish embryos treated with bbs4 or ofd1 morpholinos [122]. Additionally, it could be shown that the introduction of a constitutively active Gli3-R protein (Gli3 Δ699 ) rescues telencephalic patterning, olfactory bulb morphogenesis, and the agenesis of the corpus callosum in Rpgrip1l-negative mouse embryos [148,149]. Together, these data demonstrate that a decreased activity of the cilia-regulated proteasome is responsible for the development of ciliopathies in these model organisms. Future studies should address whether this is also true for human ciliopathies.

Does the cilia-regulated proteasome play a role in the development of cancer?

Several studies have focused on the association between cancer and ciliary presence [150-160]. Since a reduced number of cilia was detected in different cancer types [57-60, 150-156, 158, 159, 161], it was reported that tumorigenesis results in a reduced cilia frequency in some cancer types. Until now, it is unknown why some cancer cell types possess cilia and others do not (Table 2).
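Echoing the network-based CCCI analysis mentioned above, the ciliary-protein/proteasome interaction pairs reported in the review's Table 1 can be assembled into a miniature interaction network. This is a toy subset using only pairs explicitly named in the review (e.g., RPGRIP1L-PSMD2 and the IQCB1 pairs), not the full interactome of ref. [147]:

```python
from collections import Counter

# Illustrative mini-network of (ciliary protein, proteasomal component)
# interaction pairs reported in Table 1 of the review; a toy subset only.
EDGES = [
    ("RPGRIP1L", "PSMD2"),   # 19S subunit component [126]
    ("IQCB1", "PSMB1"),      # 20S subunit component [129]
    ("IQCB1", "PSME4"),      # proteasome activator protein [129]
    ("IQCB1", "PSMA3"),      # 20S subunit component [129]
    ("IQCB1", "PSMA7"),      # 20S subunit component [129]
]

# Count interaction partners per node.
degree = Counter()
for ciliary, proteasomal_component in EDGES:
    degree[ciliary] += 1
    degree[proteasomal_component] += 1

proteasomal = {b for _, b in EDGES}
print(f"distinct proteasomal components: {len(proteasomal)}")  # 5
print(degree.most_common(1))  # IQCB1 has the most partners in this subset
```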
Although the absence of cilia is able to correct the effects of an oncogenic initiating event that lies upstream of ciliary action [59,60], the loss of cilia is not the only solution to treat cancerogenesis. If the oncogenic initiating event lies downstream of ciliary action, therapeutic targeting of cilia would not help in the development of cancer therapies. Accordingly, genetic screening for the oncogenic initiator might be the most important step in designing effective anti-cancer therapies. In this context, it would be an interesting question for future investigations whether ciliary genes are mutated in patients suffering from cancer. It was previously reported that the ciliary gene RPGRIP1L might serve as a tumor suppressor gene, because RPGRIP1L was downregulated in human hepatocellular carcinoma [162]. Mechanistically, RPGRIP1L is thought to suppress tumor cell transformation in part by regulating MAD2, a mitotic checkpoint protein whose inactivation is realized by the proteasome [162,163]. Since knockdown of RPGRIP1L led to an increased amount of MAD2, the function of RPGRIP1L as a controller of ciliary proteasome activity could be of great importance in the prevention of human hepatocellular carcinoma formation. Proteasomal activity seems to be an important factor in cancerogenesis, since proteasomal activity is altered in many cancer types (Table 3) and the use of proteasome activators and inhibitors as anti-cancer therapeutics has shown promising results [100,164,165]. In most cancer types, proteasomal activity is elevated [75-89]. Until now, the reason for this increase is unknown. Since mutations of genes encoding ciliary proteins led to a reduced proteasomal activity in ciliopathies of mice and zebrafish [122,126], it might seem as if mutations in these genes could only play a role in cancer types with reduced proteasomal activity.
However, it was reported that RPGRIP1L controls the ciliary proteasome in MDCK cells negatively, opposing the findings in MEFs and embryonic mouse limbs [126,166]. These findings, as well as studies on cilia length, argue for a cell type-specific function of RPGRIP1L, allowing that mutations in RPGRIP1L cause an increase of ciliary proteasome activity in some organs and a concomitant reduction of this activity in other organs [126]. Theoretically, it is conceivable that an increased amount of ciliary proteins leads to enhanced proteasomal activity. In this regard, a recent study demonstrated that overexpression of the RPGRIP1L domain which interacts with the proteasomal component PSMD2 gives rise to an elevated activity of the ciliary proteasome [126]. What remains to be determined is whether the increased proteasomal activity found in most cancer types could be due to impaired regulation of proteasomal activity by ciliary proteins. Another cancer cell type in which the cilia-regulated proteasome might play a leading role is the CSC. Since the loss of the ciliary proteins BBS4, BBS7, OFD1, and RPGRIP1L results in a reduced proteasomal activity [50,122,123,126], and since CSCs lack cilia in addition to displaying a decreased proteasomal activity [98-103,150], it is quite possible that a reduction of cilia-regulated proteasomal activity causes the development and/or ensures the survival of most CSCs. However, this inference rests on combining separate observations rather than on direct evidence: the only kind of CSC reported to lack cilia so far is the medulloblastoma CSC [150], and data about the existence of cilia on other CSCs are still missing. Consequently, the presence of cilia in CSCs of other cancer types needs to be investigated. To gain insight into the potential relationship between the cilia-regulated proteasome and cancerogenesis, it is necessary to perform comparative investigations focusing on the activity of the ciliary proteasome and the presence of cilia in cancer cells.
Conclusion

Oncogenic signaling pathways are mediated by primary cilia. Consequently, an association between primary cilia and cancer is very likely. Altered proteasomal activity is an often-observed feature of cancer cells [75-89,98-103], and it was demonstrated that ciliary proteins control proteasomal activity [50,122,123,126]. Previously, it was suggested that the dysfunction of the cilia-controlled proteasome is only one contributory factor in the ciliopathic pathology [122]. Thus, an important purpose of future studies will be to reveal the impact of the cilia-regulated proteasome in human ciliopathies. This aim is closely related to the analysis of cilia-regulated proteasomal activity in cancer. Consequently, cancer therapies could be advanced by targeting cilia. In the context of proteasomal activity, SFN is a promising therapeutic agent for ciliopathies and any form of cancer in which proteasomal activity is reduced. It remains an open question whether the reduced activity in these cancer types corresponds to the cilia-controlled proteasomal activity. The answer to this question could extend the knowledge about oncogenic factors in a significant direction. Interestingly, a characteristic of most CSCs is a decreased proteasomal activity [98-103], making it possible that new insights into the field of cilia, and in particular the cilia-regulated proteasome, will help to understand the biology of tumor formation and reformation as well as the therapeutic possibilities to treat various types of cancer. However, even if nearly all CSCs display a reduced proteasomal activity, most cancer types exhibit the exact opposite: an elevated proteasomal activity. There is scant evidence of ciliary dysfunction resulting in an increase of proteasomal activity, but such an increase does not seem impossible, given the cell type-specific functions of ciliary proteins [126,166].
In this regard, it would be helpful to know whether the higher proteasomal activity in cancer cells depends on the "proteotoxic crisis" or not [91]. Given the novelty of the relationship between the primary cilium and the proteasome, it is difficult to make a clear statement about the role of the cilia-regulated proteasome in cancerogenesis. However, this research topic is very promising, and the relationship between the cilia-controlled proteasome and cancer holds enormous potential for the development of new anti-cancer therapies.

Abbreviations: PCP: planar cell polarity; PDGF: platelet-derived growth factor; PDGFRα: platelet-derived growth factor receptor-α; PSMA5: proteasome subunit alpha type-5; PSMD2: proteasome 26S subunit, non-ATPase, 2; PSMD3: proteasome 26S subunit, non-ATPase, 3; PSMD4: proteasome 26S subunit, non-ATPase, 4; PTCH1: patched1; RBP-J/CBF1/CSL: recombining binding protein suppressor of hairless; RHO A: ras homolog gene family, member A; ROCK: rho-associated protein kinase; RPGRIP1L: retinitis pigmentosa GTPase regulator-interacting protein-1 like; SFN: sulforaphane; SHH: sonic hedgehog; SLS: Senior-Løken syndrome; SMAD: SMA- and MAD-related proteins; SMO: smoothened; TGFβ: transforming growth factor-β; TGFβ-RI/II: transforming growth factor-β receptor I/II; TZ: transition zone; VANGL2: van gogh-like 2; WNT: wingless/integrated.

Authors' contributions
CG and UR wrote the manuscript. CG and TL compiled the tables. TL and JML designed the illustrations. All authors read and approved the final manuscript.
Off-shell duality invariance of Schwarzschild perturbation theory

We explore the duality invariance of the Maxwell and linearized Einstein-Hilbert actions on a non-rotating black hole background. On shell these symmetries are electric-magnetic duality and Chandrasekhar duality, respectively. Off shell they lead to conserved quantities; we demonstrate that one of the consequences of these conservation laws is that even- and odd-parity metric perturbations have equal Love numbers. Along the way we derive an action principle for the Fackerell-Ipser equation and Teukolsky-Starobinsky identities in electromagnetism.

Introduction

The black holes of nature are the most perfect macroscopic objects there are in the universe: the only elements in their construction are our concepts of space and time.
Chandrasekhar [1]

The advent in the past decade of gravitational-wave astronomy and black hole imaging has spurred a renewed observational interest in the foundational and endlessly fascinating black hole solutions of general relativity (GR). The Schwarzschild metric describing non-rotating black holes is in a sense gravity's analog of the hydrogen atom in quantum mechanics: it was the first exact solution of Einstein's equations to be discovered, and is still often the first solution taught to students of GR.
The humble Schwarzschild metric is, of course, far from sufficient for modelling gravitational-wave events: astrophysical black holes rotate and so are more accurately described by the significantly more complicated Kerr metric, and the two-body problem in general relativity is highly non-linear and requires numerical techniques to solve near the merger. But some progress can be made analytically, particularly during the inspiral and ringdown phases, through a variety of perturbative schemes. Among the simplest is black hole perturbation theory, in which the metric is a small perturbation around a black hole background, analogous to the flat-space perturbation theory which is itself an essential topic in introductory GR courses.

Black hole perturbation theory, in other words, is a fundamental problem in GR with significant relevance to modern experiments. In this paper we explore some of the symmetries of this theory, particularly the Chandrasekhar duality between even- and odd-parity modes (which arrive at Earth as + and × polarizations), which most famously manifests itself in the fact that the quasinormal mode spectra of both sectors are identical [1,2].

We will place particular emphasis on symmetries which hold off shell, that is, symmetries of the action rather than just of the equations of motion. Our principal motivation for this is the role played by the action in Noether's theorem; it is also relevant for the quantum theory, e.g., [3-6]. For linear theories, which we consider in this work, it is always possible to construct an action from the equations of motion, so the distinction between on- and off-shell symmetries may seem somewhat artificial. Nevertheless there are interesting differences, as is illustrated by the classical example of electric-magnetic duality in Maxwell's theory.

The electromagnetic field is described by the vector potential A = A_μ dx^μ. In terms of the field strength F = dA, the field equations in vacuum are

d⋆F = 0,    dF = 0.
The former is Maxwell's equation, and the latter is the Bianchi identity, which is satisfied for all field configurations since d² = 0. If we perform a duality transformation, sending F → ⋆F and ⋆F → −F, then the Maxwell equation becomes the Bianchi identity and vice versa, leaving the full set of equations invariant. This is a particular case (θ = −π/2) of an SO(2) duality invariance of Maxwell's equations,

    F → cos θ F − sin θ ⋆F,    ⋆F → sin θ F + cos θ ⋆F.

Since electric-magnetic duality is a continuous symmetry, Noether's theorem tells us there must be an associated conservation law. To find this, one varies the action under a duality transformation with a spacetime-dependent parameter. However this does not mean simply varying the Maxwell action S = −(1/4) ∫ d⁴x √−g F_µν F^µν and setting δF_µν = ǫ(x) ⋆F_µν, because A_µ rather than F_µν is the dynamical variable which we vary in the action to obtain Maxwell's equations. The Noether procedure requires us to vary A by a functional δA[A] implementing the duality symmetry, but it is impossible to construct a δA[A] such that dδA = ⋆F. If there were, we could take an exterior derivative to find d⋆F = 0, i.e., Maxwell's equation for A, which is precisely what we do not want to assume. The best we can do is construct a symmetry operator δA[A] which is only a duality transformation (in the sense that dδA = ⋆dA) on shell; the full expression contains additional terms which vanish when the Maxwell equations are satisfied [7-9]. Interestingly the off-shell duality transformation is typically non-local. To see this we note that we could flip the roles of the Maxwell equation d⋆F = 0 and the Bianchi identity dF = 0 by taking the former to define a potential, ⋆F = dÃ, and the latter to be the field equation for this "dual potential" Ã_µ. This dual potential is precisely the symmetry transformation, δA = Ã, where à is a solution to the first-order equation dà = ⋆dA. Since solving this equation requires integration, in general à will depend non-locally on A.
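The sign flip in F → ⋆F, ⋆F → −F traces back to the fact that on 2-forms in four-dimensional Lorentzian signature the Hodge star squares to minus one, ⋆⋆F = −F. A quick numerical sketch of this identity (our own check, using a flat Minkowski metric and a random antisymmetric F, nothing specific to this paper):

```python
import itertools
import numpy as np

def levi_civita(n=4):
    """Totally antisymmetric symbol with eps[0,1,2,3] = +1."""
    eps = np.zeros((n,) * n)
    for perm in itertools.permutations(range(n)):
        # parity of the permutation via counting inversions
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        eps[perm] = (-1) ** inv
    return eps

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # mostly-plus Minkowski metric
eps = levi_civita()

def hodge(F):
    """(*F)_{mu nu} = (1/2) eps_{mu nu rho sigma} F^{rho sigma}."""
    F_up = eta @ F @ eta              # raise both indices (eta is its own inverse)
    return 0.5 * np.einsum('mnrs,rs->mn', eps, F_up)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
F = A - A.T                           # random antisymmetric "field strength"

# In Lorentzian signature the star squares to -1 on 2-forms
assert np.allclose(hodge(hodge(F)), -F)
print("star(star(F)) == -F verified")
```

This is why a single application of the star generates an order-four rotation (F, ⋆F) → (⋆F, −F) rather than an involution.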
For instance, in a gauge where δA_0 = 0, the off-shell duality transformation of A_i involves the inverse spatial Laplacian ∇^{−2} [8]. This is a genuine symmetry of the Maxwell action, which can be used to derive conserved quantities, and which coincides with duality transformations δF = ⋆F on shell, i.e., when the Maxwell equations are satisfied. The goal of this work is to discuss a similar story for the Chandrasekhar duality in black hole perturbation theory. Along the way we will investigate the dynamics of scalar, electromagnetic, and gravitational fields on the Schwarzschild background in two covariant languages designed to exploit its symmetries, the 2 + 2 and Geroch-Held-Penrose (GHP) formalisms. These approaches are complementary: the 2 + 2 formulation is more intuitive but specifically adapted to a non-rotating black hole, while GHP generalizes straightforwardly to the full Kerr solution and is in a sense "more fundamental" in that it is based on the algebraically-special structure of black hole spacetimes. We will further see that objects arising naturally when studying dynamics in the 2 + 2 formulation have simple interpretations in GHP language. The rest of this paper is organized as follows. In section 2 we review the Schwarzschild solution and introduce the 2 + 2 and GHP formalisms. We study the dynamics of a massless scalar field on Schwarzschild in section 3, the electromagnetic field in section 4, and linearized gravity in section 5. In section 6 we discuss the off-shell Chandrasekhar duality and in section 7 explore its physical consequences for tidal Love numbers, before concluding in section 8.
2 Schwarzschild background in the 2 + 2 and GHP formalisms

The black hole solutions in vacuum four-dimensional general relativity are highly symmetrical. In this section we will review the Schwarzschild metric, on which we will place various field theories, in two formalisms designed to exploit these symmetries in a coordinate-independent manner. The first is the 2 + 2 formalism, which treats objects covariantly on the two-sphere and on the (t, r) plane. The second is the GHP formalism, which takes advantage of the algebraically-special (type D) structure of black hole spacetimes in general relativity. The Schwarzschild metric in Boyer-Lindquist (or Schwarzschild) coordinates is

    ds² = −f(r) dt² + f(r)^{−1} dr² + r² dΩ²_{S²},    f(r) = 1 − r_s/r,

with r_s = 2GM the Schwarzschild radius and dΩ²_{S²} the line element on the unit 2-sphere. As we will see, kinetic terms for fields on a Schwarzschild background are often more conveniently phrased in terms of a "tortoise coordinate" r_⋆ defined by

    dr_⋆ = dr/f(r).    (2.2)

The horizon r = r_s is located at r_⋆ = −∞ and spatial infinity r = ∞ at r_⋆ = ∞. Let us write the four-dimensional coordinates as x^µ = (x^a, θ^A), where lower-case Latin letters a, b, ... run over (t, r) and upper-case letters A, B, ... run over (θ, φ). The metric factorizes into

    ds² = g_ab(x) dx^a dx^b + r²(x) Ω_AB dθ^A dθ^B,    (2.3)

with Ω_AB the round metric on the unit 2-sphere. To avoid a clutter of notation, we will use ∇_µ, ∇_a, and D_A for the covariant derivatives with respect to g_µν, g_ab, and Ω_AB, respectively, and raise and lower indices with these metrics. We also use the same symbol for g_µν and g_ab; which one is meant should be clear from context. The r appearing in eq.
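The tortoise coordinate can be integrated in closed form outside the horizon, r_⋆ = r + r_s ln(r/r_s − 1). The following sympy sketch (our own check, not code from the paper) confirms that this satisfies dr_⋆/dr = 1/f and pushes the horizon to r_⋆ = −∞:

```python
import sympy as sp

r, rs = sp.symbols('r r_s', positive=True)
f = 1 - rs / r

# closed-form tortoise coordinate, valid for r > r_s
rstar = r + rs * sp.log(r / rs - 1)

# dr_*/dr = 1/f, i.e. dr_* = dr / f(r)
assert sp.simplify(sp.diff(rstar, r) - 1 / f) == 0

# the horizon r -> r_s^+ sits at r_* = -infinity,
# and spatial infinity r -> infinity at r_* = +infinity
assert sp.limit(rstar, r, rs, '+') == -sp.oo
assert sp.limit(rstar, r, sp.oo) == sp.oo
print("tortoise coordinate checks passed")
```

The logarithmic divergence at the horizon is what makes r_⋆ convenient: ingoing and outgoing null rays become straight lines t ∓ r_⋆ = const.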
(2.3) is a spacetime scalar on M_2 and need not be aligned with one of the coordinate directions, though it is in Boyer-Lindquist coordinates. It and the 2-metric g_ab obey the background Einstein equations, where □ = g^ab ∇_a ∇_b and (∂r)² = g^ab ∂_a r ∂_b r. In coordinates, the Ricci scalar and the norm of ∂_a r are R = 2r_s/r³ and (∂r)² = f(r). Note in particular that the latter of these allows us to use f(r) in coordinate-invariant expressions. We will find it convenient at times to use the shorthand r_a ≡ ∂_a r. As a consequence of its high degree of symmetry, equations of motion on the Schwarzschild background admit fully separable solutions [13]. For a field of integer spin s, the general solution for the field variable or an observable constructed from it can be written in the schematic form (e.g., omitting indices)

    Φ(t, r, θ, φ) = Σ_{ℓ,m} ∫ dω e^{−iωt} R_{ℓω}(r) S_{ℓm}(θ, φ).    (2.8)

A further consequence of symmetry is that the radial and angular functions R_{ℓω}(r) and Θ_{ℓm}(θ) obey remarkably similar equations. The main difference is that the periodic boundary conditions on the angular coordinates constrain S_{ℓm}(θ, φ) to the class of spherical harmonic functions, which are eigenfunctions of the Laplacian on S², while R_{ℓω}(r) obeys a Schrödinger-like equation (typically in terms of the tortoise coordinate r_⋆ rather than r). The spherical harmonics can be categorized by their transformation properties under rotations. In four dimensions, there are two such classes: scalars and vectors.
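The eigenfunction property just mentioned is easy to verify directly: applying the unit-sphere Laplacian, D² = (1/sin θ)∂_θ(sin θ ∂_θ) + (1/sin²θ)∂²_φ, to a sample harmonic returns −ℓ(ℓ + 1) times that harmonic. A small sympy sketch (the values ℓ = 3, m = 2 are arbitrary):

```python
import sympy as sp
from sympy import Ynm

theta, phi = sp.symbols('theta phi')
l, m = 3, 2  # arbitrary sample mode

# explicit spherical harmonic Y_{lm}(theta, phi)
Y = Ynm(l, m, theta, phi).expand(func=True)

# Laplacian on the unit 2-sphere
lap = (sp.diff(sp.sin(theta) * sp.diff(Y, theta), theta) / sp.sin(theta)
       + sp.diff(Y, phi, 2) / sp.sin(theta) ** 2)

# D^2 Y = -l(l+1) Y
assert sp.simplify(lap + l * (l + 1) * Y) == 0
print("D^2 Y_{3,2} = -12 Y_{3,2} verified")
```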
The scalar harmonics are the familiar spherical harmonics,

    Y_{ℓm}(θ, φ) = √[(2ℓ + 1)(ℓ − m)! / (4π (ℓ + m)!)] P^m_ℓ(cos θ) e^{imφ},

with P^m_ℓ(x) the associated Legendre polynomials. The vector harmonics decompose into longitudinal and transverse, or electric and magnetic, pieces, which are related to the scalar harmonics by

    E_A = D_A Y_{ℓm},    B_A = ǫ_A{}^B D_B Y_{ℓm},

with ǫ_AB the Levi-Civita tensor on the 2-sphere, ǫ_θφ = sin θ. The scalar harmonics obey the Laplace equation on the 2-sphere with eigenvalue −ℓ(ℓ + 1),

    D² Y_{ℓm} = (1/√Ω) ∂_A(√Ω Ω^{AB} ∂_B Y_{ℓm}) = −ℓ(ℓ + 1) Y_{ℓm},

where Ω ≡ det(Ω_AB) = sin²θ, while the vector harmonics V_A = (E_A, B_A) are eigenfunctions with eigenvalue 1 − ℓ(ℓ + 1),

    D² V_A = [1 − ℓ(ℓ + 1)] V_A.    (2.13)

The spacetime integration measure appearing in a four-dimensional action contains the 2-sphere integration measure dΩ. We will be able to integrate over S² in actions using the orthonormality relations of the spherical harmonics,

    ∫ dΩ Ȳ_{ℓ′m′} Y_{ℓm} = δ_{ℓℓ′} δ_{mm′}.

Geroch-Held-Penrose (GHP) formalism

In this subsection we describe an alternative formalism for leveraging the symmetry of black hole backgrounds: the Geroch-Held-Penrose (GHP) formalism, which is itself built on the famous Newman-Penrose (NP) approach. While this approach is somewhat more arcane than the 2 + 2 formalism (due at least in part to its heavy use of Icelandic runes), it more directly makes use of the fundamental property underpinning the "magic" of the Schwarzschild and Kerr spacetimes, namely the fact that they are algebraically special.
Newman-Penrose

Recall that the Weyl tensor C_µναβ of a generic spacetime has four principal null directions (null vectors l^µ satisfying l^ν l_[ρ C_µ]να[β l_σ] l^α = 0 [14]); algebraically-special spacetimes are those where one or more of the four are degenerate. The Kerr black hole is of algebraic type D, with two singly-degenerate principal null directions. These special vectors, l^µ and n^µ, point along outgoing and ingoing null rays, respectively. In the Schwarzschild case they live on M_2, and in fact can be thought of as zweibeins for the 2-metric. To complete the picture, we include null vectors parametrizing S²: a complex vector m^µ and its complex conjugate m̄^µ, with m_µ dx^µ = m_A dθ^A. These four vectors together comprise a complex null tetrad e^a_µ = (l_µ, n_µ, m_µ, m̄_µ), in the sense that

    g_µν = −2l_(µ n_ν) + 2m_(µ m̄_ν).    (2.18)

(This is the usual vielbein relation g_µν = η_ab e^a_µ e^b_ν with the internal Minkowski metric written in the corresponding off-diagonal null form; here bold lowercase Latin letters represent 4D internal Lorentz indices.) The vielbeins are normalized so that all of their inner products vanish except for l_µ n^µ = −1 and m_µ m̄^µ = 1. This setup does not completely fix (l^µ, n^µ, m^µ, m̄^µ), as there is some residual Lorentz invariance. Insisting that l^µ and n^µ remain principal null directions leaves a two-parameter symmetry comprising boosts of l and n,

    l → e^α l,    n → e^{−α} n,    (2.20)

and rotations of m and m̄,

    m → e^{iβ} m,    m̄ → e^{−iβ} m̄,    (2.21)

with α and β real functions. We will choose the Carter tetrad [15]. The frequently-used Kinnersley tetrad [16] is related by a rescaling (2.20) with α = f/2. The Carter tetrad is particularly useful for our purposes as it maintains symmetries of the background which can be obscured in other bases [17].
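The completeness relation (2.18) is easy to verify numerically. The sketch below uses the Kinnersley-style null tetrad for Schwarzschild, l^µ = (1/f, 1, 0, 0), n^µ = ½(1, −f, 0, 0), m^µ = (0, 0, 1, i/sin θ)/(√2 r) (a standard choice; the paper itself works with the Carter tetrad, which differs by a boost), and checks both the normalization conditions and that the tetrad reconstructs the inverse metric:

```python
import numpy as np

M, r, th = 1.0, 3.0, 1.0          # sample point outside the horizon
f = 1 - 2 * M / r

# inverse Schwarzschild metric in (t, r, theta, phi)
g_inv = np.diag([-1 / f, f, 1 / r**2, 1 / (r * np.sin(th)) ** 2]).astype(complex)
g = np.linalg.inv(g_inv)          # covariant metric, for inner products

# Kinnersley-style null tetrad (contravariant components)
l = np.array([1 / f, 1, 0, 0], dtype=complex)
n = 0.5 * np.array([1, -f, 0, 0], dtype=complex)
m = np.array([0, 0, 1, 1j / np.sin(th)], dtype=complex) / (np.sqrt(2) * r)
mb = m.conj()

dot = lambda u, v: u @ g @ v
# all inner products vanish except l.n = -1 and m.mbar = +1
assert abs(dot(l, l)) < 1e-12 and abs(dot(n, n)) < 1e-12 and abs(dot(m, m)) < 1e-12
assert abs(dot(l, m)) < 1e-12 and abs(dot(n, m)) < 1e-12
assert np.isclose(dot(l, n), -1) and np.isclose(dot(m, mb), 1)

# completeness: g^{mu nu} = -2 l^(mu n^nu) + 2 m^(mu mbar^nu)
g_from_tetrad = -(np.outer(l, n) + np.outer(n, l)) + np.outer(m, mb) + np.outer(mb, m)
assert np.allclose(g_from_tetrad, g_inv)
print("null tetrad reproduces the Schwarzschild metric")
```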
In the Newman-Penrose formalism one works with spacetime scalars obtained by projection along the null directions. For instance the Weyl tensor C_µναβ is efficiently encoded in five complex Weyl scalars, which are the "components" of the Weyl tensor in the complex null basis,

    Ψ_0 = C_{lmlm},  Ψ_1 = C_{lnlm},  Ψ_2 = C_{lmm̄n},  Ψ_3 = C_{lnm̄n},  Ψ_4 = C_{nm̄nm̄},    (2.23)

where C_{lmm̄n} = C_µναβ l^µ m^ν m̄^α n^β and so on. (In general we will use the notation V_µ l^µ = V_l, etc.) For type-D spacetimes the only non-vanishing Weyl scalar is Ψ_2, providing a remarkably compact characterization of the full Riemann tensor. In the Schwarzschild case, the value of Ψ_2 in coordinates is

    Ψ_2 = −GM/r³.

A quantity that acquires a definite weight under the residual transformations (2.20)-(2.21) is said to have GHP type {p, q}. Such quantities are also called spin- and/or boost-weighted, where the spin weight is s = (p − q)/2 and the boost weight is b = (p + q)/2. The residual Lorentz transformations (2.20)-(2.21) do not exhaust the symmetry in choosing a tetrad, which is invariant under several discrete tetrad interchanges: complex conjugation, which swaps m^µ and m̄^µ; the prime (′) operation, which interchanges both l ↔ n and m ↔ m̄; and, less obviously, the star (⋆) operation, (l, n, m, m̄) → (m, −m̄, −l, n), which we will not use. These discrete invariances allow for a particularly economical description of field equations, since one equation implies its prime, conjugate, and prime conjugate versions.
Scalars with well-defined GHP type include the Weyl scalars, which inherit their GHP types from the various factors of l^µ, etc., in their definitions (2.23) (tensors like C_µναβ are a priori unweighted), as well as the spin coefficient ρ, which is of GHP type {1, 1} (and by extension ρ′, ρ̄, and ρ̄′, although for Schwarzschild ρ and ρ′ are real). Examples of scalars without a well-defined GHP type include the spin coefficients β and ǫ (and their primes and conjugates). These are the only non-zero spin coefficients for Schwarzschild and completely describe the spin connection. In the Carter tetrad they take the coordinate values given in eq. (2.29). Analogously to the non-coordinate-invariant Christoffel symbols, β and ǫ can be used to construct covariant derivative operators with well-defined GHP type. Unfortunately, the use of Icelandic runes for these operators is firmly embedded in the literature: the operator Þ sends a GHP type {p, q} object to one with type {p + 1, q + 1}, Þ′ to {p − 1, q − 1}, ð to {p + 1, q − 1}, and ð′ to {p − 1, q + 1}. Note that Þ and Þ′ raise and lower the boost weight, while ð and ð′ raise and lower the spin weight. For the Carter tetrad in Schwarzschild, the GHP derivatives take a simple coordinate form. Note also that these derivatives have non-trivial commutators, along with their primes and complex conjugates. In this language, the scalar spherical harmonics are eigenfunctions of ðð′, and are a special case of the spin-weighted spherical harmonics, which can be obtained from the scalar harmonics by raising and lowering the spin weight with ð and ð′.

3 Massless scalar

We want to compute the action for linearized gravity on Schwarzschild, performing separation of variables and utilizing the 2 + 2 decomposition. Many of the basic steps of the computation are present in the simpler cases of a scalar and vector field, so we will work our way up to gravity one integer step in spin at a time.
The action for a massless scalar is

    S = −(1/2) ∫ d⁴x √−g ∇_µφ ∇^µφ.    (3.1)

The field φ admits a spherical harmonic expansion of the form (2.8). Inserting this into eq. (3.1) and integrating over S² we find a sum over actions for each (ℓ, m) mode. To simplify notation, we will drop the ℓm subscripts and focus on an individual mode, with the summation over all modes implied. This is kosher because in linear theories modes of different (ℓ, m) decouple. The 2D field φ is not canonically normalized, as its kinetic term is multiplied by a factor of r². We can remove this with a field redefinition [12,18]; in terms of the redefined field, we identify the usual scalar potential on a Schwarzschild background [2],

    V(r) = f(r) [ℓ(ℓ + 1)/r² + r_s/r³].

If we drop our insistence on covariance and write the action in terms of the coordinates (t, r), we find that the kinetic and gradient terms again have nonstandard factors in front. To canonically normalize we transform to the tortoise coordinate dr = f dr_⋆ [18]. For completeness let us write the action (3.1) in GHP language, writing the metric in terms of the null vectors, cf. eq. (2.18), separating variables, and integrating over the 2-sphere to obtain the action for a single mode.

4 Electromagnetism

The next step on the road to gravity, which is the spin-2 case, is the spin-1 case, which is electromagnetism. The Maxwell action is

    S = −(1/4) ∫ d⁴x √−g F_µν F^µν.

The vector potential is a superposition of separable solutions. Herein we will focus on a single mode and drop ℓm subscripts, with the summation implied. Under a 2 + 2 decomposition the vector potential is

    A_µ dx^µ = A_a(t, r) dx^a + a(t, r) B_A dθ^A.

Here we have used our gauge freedom to remove the longitudinal mode, which is proportional to E_A dθ^A. Gauge invariance adds a wrinkle that was not present for the scalar: in order to avoid losing information when fixing a gauge at the level of the action rather than the equations of motion, one must make a complete gauge fixing, in the sense that there are no integration constants left when fixing a gauge vector (rather than necessarily that all gauge freedom is
exhausted, although we will insist on this too) [19,20].Our gauge choice satisfies this requirement [18]. Performing separation of variables and integrating over the 2-sphere, we obtain where We see that the even-parity (or electric) field A a and the odd-parity (or magnetic) field a decouple. The even sector has only one dynamical degree of freedom but depends on two variables A a .To isolate this dynamical field we integrate in an auxiliary variable λ(t, r): The λ equation of motion fixes it to be proportional to F ab on-shell, Inserting this back into L even,aux we obtain L even , establishing their dynamical equivalence.However we can also obtain an action for λ alone by integrating out A a using its equation of motion, and plugging back into the action, We canonically normalize the fields by scaling out appropriate factors of ℓ(ℓ + 1), so that We conclude that ψ ± are the "master variables" for the electric (+) and magnetic (−) sectors (see also Ref. [18]), each satisfying a Schrödinger equation with the usual vector potential [2]. 
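The scalar, electromagnetic, and (as we will see) odd-parity gravitational master equations all take the Schrödinger form with potentials fitting the single well-known pattern V_s = f(r)[ℓ(ℓ + 1)/r² + (1 − s²) r_s/r³] for spin s = 0, 1, 2; the s = 1 case is the vector potential just mentioned, and s = 2 is the Regge-Wheeler potential appearing later. A short numerical sketch of this standard family (our own illustration, not code from the paper):

```python
# Regge-Wheeler-type potentials V_s = f(r) [l(l+1)/r^2 + (1 - s^2) r_s / r^3]
# for spin s = 0 (scalar), 1 (electromagnetic), 2 (odd-parity gravitational).
def potential(r, ell, s, M=1.0):
    f = 1 - 2 * M / r          # metric function, with r_s = 2M (G = 1)
    rs = 2 * M
    return f * (ell * (ell + 1) / r**2 + (1 - s**2) * rs / r**3)

r, ell = 4.0, 2                 # sample radius and multipole
V = {s: potential(r, ell, s) for s in (0, 1, 2)}
print(V)

# spin-1 has no curvature term; the scalar barrier is raised, the spin-2 lowered
assert V[2] < V[1] < V[0]
```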
Electric-magnetic duality

The Lagrangian (4.11) is manifestly invariant under electric-magnetic duality, which acts as a rotation on the vector (ψ_+, ψ_−)^T; the infinitesimal version is given in eqs. (4.14a)-(4.14b). This is an off-shell symmetry of the action (4.5). As discussed in the introduction, this symmetry is non-local. This is reflected in the transformation law for a, which contains an inverse of the spherical Laplacian, although interestingly the symmetry transformation for A_a is local. The transformation law (4.14a)-(4.14b) has a natural interpretation in terms of Hodge duality. Consider the dual field strength tensor ⋆F. The Maxwell equation is d⋆F = 0, so that on shell ⋆F = dà can be expressed in terms of a dual potential Ã_µ. It turns out that the off-shell duality transformation δA_µ is just such a dual potential. If we further package the electric and magnetic master variables into a complex scalar, ψ = ψ_+ + iψ_−, then the action (4.11) takes a very simple form. Electric-magnetic duality acts as δψ = iψ, which is manifestly a symmetry. It is straightforward to obtain the conserved current via the standard Noether procedure. Intriguingly, the complex master field ψ, which we obtained by integrating out non-dynamical fields and canonically normalizing, turns out to be proportional to (ℓ, m) modes of the middle Newman-Penrose scalar φ_1. For this reason, it will be illuminating to recontextualize the foregoing 2 + 2 calculation in the GHP formalism.
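The Noether step referenced above is the textbook one for a phase rotation. Schematically, for a complex-scalar Lagrangian of the type that appears here (our shorthand, with the measure and the 2D metric suppressed, so this is a sketch rather than the paper's precise expression),

```latex
\mathcal{L} = -\,\partial_\mu\bar\psi\,\partial^\mu\psi - V\,\bar\psi\psi,
\qquad
\delta\psi = i\epsilon\,\psi
\;\;\Longrightarrow\;\;
J^\mu = i\left(\bar\psi\,\partial^\mu\psi - \psi\,\partial^\mu\bar\psi\right),
\qquad
\partial_\mu J^\mu \approx 0,
```

where ≈ denotes equality on shell; the conserved duality current quoted in the text is of this type.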
Maxwell in GHP Analogously to the Weyl tensor, the electromagnetic field strength tensor F µν can be fully encoded in three complex Maxwell scalars, of GHP types {2, 0}, {0, 0}, and {−2, 0}, respectively.We remind the reader of the notation F lm = F µν l µ m ν , etc.The Maxwell Lagrangian is Now we introduce an auxiliary complex scalar λ of GHP type {0, 0}, meant to equal φ 1 on-shell, by sending Instead of decomposing A µ into M 2 tensors A a (t, r) and a(t, r) as in the 2 + 2 decomposition, in the GHP formalism we encode it in the four scalars (A l , A n , A m , A m). The gauge choice we made earlier can be written in a GHP-invariant manner as In this gauge, the even modes live in A l and A n while the odd modes live in A m and A m through the combination To work with the equations of motion coming from the Lagrangian (4.25), it is helpful to establish just a bit more notation.First, we write the Maxwell scalars in terms of operators T i acting on A [21], ) Second, we introduce Wald's notion of adjoint operators [22].The adjoint O † of an operator O satisfies AOB − BO † A = ∇ µ v µ for some vector v µ and tensors (with indices suppressed) A and B, so that under an integral we obtain the adjoint when integrating by parts, The adjoints of the GHP derivatives are along with their primes.The adjoints of T i are [21] .31c) We now have the tools to vary the Maxwell Lagrangian (4.25) with respect to A, finding Note that this is a vector-valued equation, per the definitions of T † i .The components along l and n determine A l and A n in terms of λ and its complex conjugate, where is zero in the gauge used in the previous subsection; we will fix g = 0 herein.We can also integrate out A m and A m using the imaginary part of the λ equation of motion, which implies Now that we have solutions for each component of A µ in terms of λ, we can plug them into the Lagrangian (4.25) to find a theory for λ alone.However, to avoid the complications of dealing with the inverse ðð ′ 
operator, we first perform a simple field redefinition,

    λ = ðð′ψ,    (4.37)

so that the solution for A_µ follows. To integrate out A_µ we plug this solution into eq. (4.25). Putting the Maxwell scalars evaluated on this solution into the action we find, freely integrating by parts, a remarkably simple result. To switch back to λ = ðð′ψ, we integrate by parts and use the GHP commutators to write the action in terms of ψ̄ = (ðð′)^{−1} λ̄. The equation of motion is obtained by varying with respect to ψ̄. On shell λ = φ_1, for which this is the Fackerell-Ipser equation [23] in GHP notation [24]. Electric-magnetic duality transformations act as complex rotations on the Maxwell scalars, φ_i → e^{iθ} φ_i, essentially since they are the components of the (anti-)self-dual parts of the Maxwell tensor. The action (4.42) is indeed manifestly invariant under λ → e^{iθ} λ, or infinitesimally δλ = iλ (along with δψ̄ = −iψ̄). A natural extension of the setup with φ_1 as an auxiliary field is to introduce auxiliary fields for all three Maxwell scalars, that is, a triplet (λ_0, λ_1, λ_2) which on shell satisfy λ_i = φ_i. First let us note that we can "chop off" the +c.c.
in the real Maxwell Lagrangian (4.24) by adding the total derivative (i/4) F_µν (⋆F)^µν. Now we add in the full triplet of auxiliary fields. Together with the adjoints (4.30), these imply that, up to total derivatives, the λ_i equations of motion set λ_i = φ_i as desired, while the A equation of motion can be written in vector notation. (As an aside, the Lagrangian (4.42) does not look real, but it is real up to a total derivative, as can be explicitly checked using the commutators and adjoints of the GHP derivatives.) This formulation yields first-order constraints among the φ_i on shell. These are equivalent to the Teukolsky-Starobinsky identities, which are second-order differential relations between φ_0 and φ_2, or equivalently fourth-order relations for φ_0 and φ_2 separately. To obtain the Teukolsky-Starobinsky identities we therefore need to take combinations of derivatives of E_a to remove φ_1; the resulting T-S identities can be found in, e.g., eq. 43 of Ref. [21].
The third identity can also be obtained using the background equation for Þρ. We note that φ_1 is special not just because it appeared naturally in the dynamical construction of the previous subsection, but also because it is closely related to the Killing-Yano 2-form and its dual. The Killing tensor, which underlies separability, is the square of the Killing-Yano tensor. To connect explicitly to the 2 + 2 formulation of the previous subsection, we note some useful identities, using which we can calculate the Maxwell scalars in terms of 2 + 2 quantities. We conclude with speculation about the structure discussed in this section and its generalization to Kerr. There the Fackerell-Ipser equation is not separable, which is why it is typical to work with the Teukolsky equations [26] for the extreme-weight scalars φ_0 and φ_2, which are separable due to the aforementioned Killing tensor structure [13]. It would be very interesting to obtain an action principle for the Teukolsky equations analogously to the one we have constructed for the Fackerell-Ipser equation and Teukolsky-Starobinsky identities. We note that in Ref. [27] such an action was constructed using the fact that the Teukolsky equations are linear, which may provide a hint: the Teukolsky Lagrangian derived there is of the form L ∼ ρ^{−2} φ_2 O φ_0, where O is the Teukolsky operator for φ_0. It would also be interesting to understand how the Debye and Hertz potentials which appear in reconstruction methods [22,28-30] arise from the action formulation. We leave these important open questions for future work.

5 Gravity

Consider linear perturbations around the Schwarzschild metric ḡ_µν, and expand the Einstein-Hilbert action to quadratic order in h_µν. The even- and odd-parity perturbations decouple at this order, so each is described by a separate quadratic action. Herein we will drop bars on background quantities, since we will only be interested in δ²S.
Expanding the Ricci scalar to second order in perturbations is a non-trivial task, and ultimately not necessary, since we can write the action in first-order form. To see this, consider a metric variation g → g + δg and Taylor expand the action, matching to eq. (5.2). It is a foundational result in GR that δ∫d⁴x √−g R = ∫d⁴x √−g G_µν δg^µν. Taking a second variation we obtain the quadratic action, where G[h]_µν ≡ δG_µν[g + h] is the linear-in-h part of the Einstein tensor for g_µν + h_µν (5.7). For simplicity (and to facilitate comparison to the literature) we will continue to call this δG_µν, with the understanding that it is evaluated on g_µν + h_µν rather than g_µν + 2M_Pl^{−1} h_µν. Integrating by parts we recover the standard Fierz-Pauli Lagrangian for a spin-2 field (5.8). The 2 + 2 components of δG_µν[g + h] are standard and can be found in, e.g., Refs. [10-12]. We present relevant components in appendix A. The quadratic action (5.6) is expanded as in eq. (5.9). We remind the reader that M_2 indices are raised with g^ab and S² indices with Ω^AB. There are at least two useful gauges which can be safely fixed at the level of the action [20]. One is the standard Regge-Wheeler gauge, in which h_aA is purely odd and h_AB = r²KΩ_AB. Another is the "α gauge" used in, e.g., Refs. [18,32,33], where h_aA contains both even and odd pieces and h_AB = 0. The gauge choice affects the auxiliary structure of the action. To see this, consider the gauge-invariant variables h̃_ab and K̃ defined in Ref. [11], which correspond (by construction) to h_ab and K in the Regge-Wheeler gauge, and in α gauge contain derivatives; for instance,

    K̃ = −2frα.    (5.10b)

We will remain agnostic about which of these two gauges to pick, and write down expressions for both. In these gauges, the components of h_µν are as given in eq. (5.11), where we remind the reader that r_a ≡ ∂_a r. As usual we will drop the summation and the subscripts and focus on a single (ℓ, m) mode. In Regge-Wheeler gauge we set α = 0, and in α gauge we set K = 0.
We will also find it convenient to decompose h ab into its trace and tracefree parts, hg ab , ĥa a = 0, (5.12) and to work with the Ricci tensor rather than the Einstein tensor, In terms of these variables, the even and odd actions are ) To integrate over the 2-sphere, we note that the S 2 scalars δR ab and Ω AB δR AB are expanded in Y ℓm , while the even and odd parts of δR aA can be written as (5.15) Performing the integral over S 2 and writing the actions as S = d 2 x √ −gL, the Lagrangians are ) where h ab denotes h ℓm ab , etc. Let us treat the odd and even sectors separately. Odd sector The odd piece of the Ricci tensor is (see appendix A) where so the Lagrangian (5.16b) is where in the last line we have integrated by parts.Note that (ℓ+2)(ℓ−1) = ℓ(ℓ+1)−2. Finally we rescale (5.20) so the action takes the form (5.21) In coordinates this is [18] where h a dx a = h 0 dt + h 1 dr, and overdots and primes denote ∂ t and ∂ r , respectively.Physically we can think of eq. ( 5.21) as describing a two-dimensional vector with an r-dependent mass, 20 where we remind the reader that r is a background scalar rather than necessarily a coordinate direction. 
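The "integrate in, integrate out" manipulation that will now be applied to h_a (and was used for the Maxwell even sector in section 4) follows a simple general pattern, which we record schematically in our own notation: for a Lagrangian quadratic in some derivative combination X[h] with a radius-dependent coefficient a(r),

```latex
\mathcal{L}[h] \;=\; \frac{a(r)}{2}\,X[h]^2
\qquad\Longleftrightarrow\qquad
\mathcal{L}_{\rm aux}[h,\lambda] \;=\; \lambda\,X[h] \;-\; \frac{\lambda^2}{2\,a(r)}\,.
```

Varying λ gives λ = a X and reproduces L[h]; varying h instead (when X[h] is built from derivatives of h, such as ǫ^{ab}∂_a h_b) yields a first-order constraint whose solution, substituted back, leaves a theory of the single field λ. This trades the two components h_a for one master scalar without introducing higher derivatives.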
Note the close resemblance to L_even for the Maxwell field (4.5). We can repeat the same trick to integrate out the two fields h_a in favor of a single dynamical field. We integrate in an auxiliary variable λ(x^a) via a perfect square so as not to affect the dynamics. This is dynamically equivalent to L_odd, which is recovered by plugging in the solution to the λ equation of motion, λ = (1/2)r²ǫ^ab F_ab, and we will write it as L_odd accordingly. The introduction of λ gives us the option to integrate out h_a by solving its equation of motion. Substituting this into the action we have eq. (5.25). We perform a further rescaling to canonically normalize the kinetic term, so that the action becomes eq. (5.27), using the background equations of motion (2.5). The mass term explicitly evaluates to the potential V_−(r), the Regge-Wheeler potential [34]. Putting everything together we obtain the odd-sector Regge-Wheeler action, whose equation of motion is the usual Regge-Wheeler equation [34] for Ψ_−. This means that Ψ_− must be proportional to the Regge-Wheeler variable up to time derivatives. Indeed, recalling Martel and Poisson's [11] gauge-invariant definition of the Cunningham-Price-Moncrief variable [35], which is itself a time integral of the original Regge-Wheeler variable [34], we find agreement with Ψ_− up to a numerical factor. We conclude the discussion of the odd sector by noting an interesting alternative approach discussed in, e.g., Ref. [10]. Consider the M_2 1-form h = h_a dx^a, with its action and equation of motion written in form language. Taking a divergence by applying d⋆, we find that the 1-form r²⋆h is closed,

    d(r² ⋆ h) = 0.

By the Poincaré lemma we can write it in terms of a scalar potential φ,

    r² ⋆ h = dφ,    (5.37)

or in index notation, r² ǫ_ab h^b = ∂_a φ. Comparing to eq. (5.24) we see that this potential is related to our auxiliary variable λ via eq. (5.39). The auxiliary field method is a technique for consistently implementing eq.
(5.37) at the level of the action. In particular, if we were to naïvely plug the solution (5.37) directly into the original action (5.21), the resulting theory would be of fourth order in derivatives of φ, and could not describe the same physics: it contains two degrees of freedom rather than one, and possesses an Ostrogradski ghost instability [36].

Even sector

The action for the even sector is given by eq. (5.16a). Expressions for relevant components of the perturbed Ricci tensor are in appendix A. The resulting actions after many integrations by parts are eqs. (5.40a) and (5.40b) in Regge-Wheeler gauge and in α gauge, respectively. We begin by noting the well-known fact that these expressions are significantly more complicated than eq. (5.21). It is convenient to perform a coordinate-like decomposition on objects with indices by projecting along r_a and the timelike direction t_a = ǫ_ab r^b = f ∂_a t, in terms of which the metric is [11]

    f g_ab = r_a r_b − t_a t_b.

In particular, we do not lose any information by projecting the traceless perturbation ĥ once along r^a [12],

    ĥ_a ≡ ĥ_ab r^b,

as we can reconstruct ĥ_ab from r_⟨a ĥ_b⟩, where angular brackets denote traceless symmetrization, T_⟨ab⟩ = T_(ab) − (1/2) T g_ab. This simplifies the actions somewhat. For concreteness, let us fix α gauge. We will discuss Regge-Wheeler gauge at the end of the section. After the gauge freedom has been used up, there are four fields for one underlying dynamical degree of freedom. Two auxiliary variables are apparent by inspection of the action (5.44): t^a ĥ_a ∼ h_tr and h. Here we will essentially follow the procedure of Ref. [18] and begin by integrating out the former. To isolate the components of ĥ_a we decompose it as

    ĥ_a = ĥ_0 t_a + ĥ_1 r_a.    (5.45)

We will also need to perform some simple field redefinitions to demix fields. We begin by shifting h,

    h̃ = h − 2ĥ_1.
(5.46) Note that h contains both h tt and h rr , whereas h ∼ h rr .In this field basis the action is We can integrate out ĥ0 using its equation of motion, to find (5.49) Now we perform a second field redefinition, 23 comprising a shift to demix α and h and an overall rescaling, where we have introduced the function [11] Λ(r) ≡ ℓ(ℓ + 1) + 1 − 3f. (5.51) The action becomes Note that ψ is precisely the gauge-invariant Zerilli-Moncrief function defined in Ref. [11], multiplied by −1/4. The upshot of all these field redefinitions is that two of the remaining three fields are manifestly non-dynamical: ĥ1 is a Lagrange multiplier (it appears linearly) and h is auxiliary (it appears quadratically but without derivatives).The constraint obtained by varying with respect to ĥ1 fixes h in terms of ψ, while the equation of motion for h is This fixes ĥ1 once we use eq.(5.53), although we do not need to know ĥ1 in order to integrate it out of the action, as it multiplies the constraint (5.53) that it enforces.We will however need this equation in order to construct off-shell duality operators for the metric perturbations.Plugging eq.(5.53) into the action we finally obtain, after some integrations by parts and algebra, where is the Zerilli potential [37].Finally we canonically normalize, to obtain the Zerilli action for the even sector: (5.58) The main benefit of working with α gauge is that the field redefinitions we needed to perform did not involve derivatives, but a choice of gauge is not a choice of physics, and indeed in Regge-Wheeler gauge we can follow a similar procedure to reduce the action (5.40a) to the Zerilli action (5.58).We begin again by integrating out h tr ∼ ĥ0 , while h tt ∼ h − 2 ĥ1 is a Lagrange multiplier that imposes a constraint on K and h rr ∼ h + 2 ĥ1 (and in turn drops out of the action).To demix the remaining two variables and canonically normalize we perform a field redefinition, The even sector is inordinately complicated, and the procedure we have 
done is not unique, and may not be the simplest or clearest. Alternative approaches would therefore be interesting to explore. An obvious alternative is to integrate out h first rather than ĥ_0. Furthermore, the decomposition (5.45) can be swapped for a more elegant argument in terms of differential forms, analogously to the odd sector [12], which may therefore admit an auxiliary-variable formulation. And of course an approach eliding the Regge-Wheeler and Zerilli equations altogether in favor of the Teukolsky equation would be of exceptional interest.

Chandrasekhar duality

The linearized Einstein-Hilbert action is a complicated functional of the metric perturbations (cf. eqs. (5.21) and (5.40)), but by integrating out the non-dynamical degrees of freedom we obtained a simple action in terms of the Regge-Wheeler and Zerilli variables, where V_+ and V_− are the usual Zerilli [37] and Regge-Wheeler [34] potentials, respectively.

It is important to pause here to emphasize the difference between on-shell and off-shell symmetries. We could have constructed eq. (6.1) directly from the Regge-Wheeler and Zerilli equations, but it was a non-trivial exercise to get there from the Einstein-Hilbert action using standard field theory tools. Having done this exercise, we will be able to construct an off-shell duality symmetry of the original action (5.2).

First let us demonstrate the duality invariance of the Regge-Wheeler/Zerilli action (6.1). It is a remarkable fact that the two seemingly disparate potentials V_± (cf. eqs. (5.28) and (5.56)) can be written in a unified form in terms of a single superpotential [1, 2, 38-41], where the superpotential W(r) and constant β are given by (6.3). It is straightforward to check that the action (6.1) is invariant under the duality symmetry (6.4). The transformation (6.4) is an off-shell symmetry of the action, and coincides on shell with the venerable Chandrasekhar duality [1, 38-40].
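Since the displayed equations did not survive extraction, it is worth recording the standard closed forms and checking the superpotential identity symbolically. The expressions below — with f = 1 − r_s/r, M = r_s/2, λ = (ℓ − 1)(ℓ + 2)/2 — are the standard ones from the Chandrasekhar-duality literature, not a verbatim copy of the paper's eq. (6.3), so signs and normalizations may differ by convention:

```python
import sympy as sp

# Standard forms from the Chandrasekhar-duality literature; conventions may
# differ from the paper's eq. (6.3) by signs and normalizations.
r, M = sp.symbols('r M', positive=True)
ell = sp.symbols('ell', positive=True)
lam = (ell - 1)*(ell + 2)/2            # lambda = (l - 1)(l + 2)/2
f = 1 - 2*M/r                          # f = 1 - r_s/r with r_s = 2M

W = lam*(lam + 1)/(3*M) + 3*M*f/(r*(lam*r + 3*M))    # superpotential W(r)
beta = -(lam*(lam + 1)/(3*M))**2                     # constant beta

dW = f*sp.diff(W, r)                   # d/dr_* = f d/dr (tortoise coordinate)

V_RW = f*(ell*(ell + 1)/r**2 - 6*M/r**3)             # Regge-Wheeler potential
V_Z = f*(2*lam**2*(lam + 1)*r**3 + 6*lam**2*M*r**2   # Zerilli potential
         + 18*lam*M**2*r + 18*M**3)/(r**3*(lam*r + 3*M)**2)

# V_- = W^2 - dW/dr_* + beta  and  V_+ = W^2 + dW/dr_* + beta
assert sp.simplify(W**2 - dW + beta - V_RW) == 0
assert sp.simplify(W**2 + dW + beta - V_Z) == 0
```

Both potentials thus descend from the single pair (W, β), which is the algebraic content behind the duality symmetry discussed above.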
This "hidden" symmetry of the linearized Einstein equations relates a solution Ψ_± of the Regge-Wheeler or Zerilli equation to a solution Ψ_∓ of the other equation, which can be constructed in frequency space via We note that, intriguingly, this symmetry structure also appears in supersymmetric quantum mechanics, the theory of 0 + 1-dimensional supersymmetry [44]. The Chandrasekhar duality is responsible for the crucial result that, for four-dimensional black holes in GR, the even and odd sectors are isospectral, meaning they share the same quasinormal mode spectrum.

With the off-shell symmetry (6.4) in hand, we can compute conserved quantities using the Noether procedure. The conservation law, in coordinates, is with the current Here overdots denote derivatives with respect to t, and primes denote ∂_r⋆ derivatives.

A complex master variable

Similarly to the spin-1 case, we can combine the Regge-Wheeler and Zerilli variables into a complex variable, in terms of which the Lagrangian (6.1) takes a very simple form, as does the duality transformation (6.12). Writing the symmetry in this form, let us confirm that it is indeed a symmetry. Under a general variation, the Lagrangian changes as δL = Ē δΨ + E δΨ̄, (6.13) where the equation of motion E is In terms of the quantity the variation of the Lagrangian under δΨ is Now we calculate QE and freely integrate by parts, The last line is manifestly real, so that the variation of the Lagrangian vanishes as expected, δL = 2 Im QE = 0.
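The supersymmetric-quantum-mechanics structure mentioned above can be illustrated numerically: any pair of partner potentials V_± = W² ± W′ built from a single superpotential shares its spectrum, up to one possible unpaired zero mode. The toy superpotential below, W(x) = x³ on the real line, is purely illustrative and is not the Schwarzschild W(r) of eq. (6.3):

```python
import numpy as np

# Finite-difference illustration of SUSY-QM isospectrality for V_± = W^2 ± W'.
# Toy superpotential W(x) = x^3 (illustrative only, NOT the Schwarzschild W(r)).
Lbox, N = 5.0, 2000
x = np.linspace(-Lbox, Lbox, N)
h = x[1] - x[0]
lap = (np.diag(np.ones(N - 1), -1) - 2*np.eye(N)
       + np.diag(np.ones(N - 1), 1)) / h**2          # Dirichlet Laplacian

W, dW = x**3, 3*x**2
Em = np.linalg.eigvalsh(-lap + np.diag(W**2 - dW))   # zero mode ~ exp(-x^4/4)
Ep = np.linalg.eigvalsh(-lap + np.diag(W**2 + dW))

assert abs(Em[0]) < 1e-2                         # unpaired ground state at E = 0
assert np.allclose(Ep[:4], Em[1:5], atol=1e-2)   # excited spectra coincide
```

The same mechanism, with the black-hole superpotential in place of the toy W, underlies the isospectrality of the even and odd quasinormal spectra.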
(6.18) Using similar manipulations we can also calculate the conserved current, where we have defined Analogously to the spin-1 case, it is natural to wonder whether this complex master variable is related to the middle-weight Weyl scalar, Ψ_2. A new complication in the gravitational case is that Ψ_2 has a background value, and accordingly its perturbation δΨ_2 is not gauge-invariant. Nevertheless, one can construct a gauge-invariant version of δΨ_2 which contains the Regge-Wheeler and Zerilli variables [45]. This is not quite our master variable Ψ, as the real (even) piece is a rescaling of the Zerilli variable. We leave a further exploration of this question for future work.

Flat-space limit: linearized gravitational duality

We can gain some physical insight by looking at the flat-space limit, r_s → 0. The expression (6.4) for δΨ_± diverges due to the 1/r_s scaling in W(r), which can be remedied by sending δΨ_± → r_s δΨ_± before taking the limit. In this limit we have an SO(2) symmetry acting on (Ψ_+, Ψ_−), similar to the electromagnetic case. Direct calculation shows that, on shell, this duality generates rotations between the Riemann tensor and its dual, where the dual Riemann tensor is defined as This is the well-known gravitational "electric-magnetic" duality, lifted to an off-shell symmetry for linear perturbations around flat space [46]. We conclude that the symmetry (6.4) is an extension of electromagnetic duality to Schwarzschild backgrounds. An off-shell duality symmetry has also been found to hold for Minkowski [46], de Sitter [47], and anti-de Sitter backgrounds [48]. Adding Schwarzschild to this list, a background less symmetric than the others, raises interesting questions: which other backgrounds possess a linearized duality symmetry, and what physical mechanism underlies these symmetries?
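For reference, since the display did not survive extraction, the dual Riemann tensor is conventionally defined with the Levi-Civita tensor acting on the first index pair. The following is one common convention, not necessarily the paper's, and the sign in the SO(2) rotation depends on that convention:

```latex
% One common convention for the dual Riemann tensor:
\tilde{R}_{\mu\nu\alpha\beta} \equiv
  \tfrac{1}{2}\,\epsilon_{\mu\nu}{}^{\rho\sigma}\,R_{\rho\sigma\alpha\beta} ,
% On shell around flat space, the SO(2) duality rotates the pair:
\qquad
\delta R_{\mu\nu\alpha\beta} = \tilde{R}_{\mu\nu\alpha\beta} , \qquad
\delta \tilde{R}_{\mu\nu\alpha\beta} = -R_{\mu\nu\alpha\beta} ,
% the relative minus sign reflecting that the double dual of the vacuum
% Riemann tensor is minus itself in Lorentzian signature.
```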
Chandrasekhar duality off-shell

The symmetry (6.4) can be lifted to a symmetry of the linearized Einstein-Hilbert action in terms of the metric perturbations, eqs. (5.21) and (5.40), analogously to electromagnetism. The calculation itself is cumbersome and not especially enlightening, so we will outline the steps without presenting full expressions. Let us begin with the transformation of the odd-sector variable h_a. Using its solution (5.24) and undoing various rescalings, we have where δΨ_− is given by eq. (6.4). That expression is constructed from Ψ_+, which we in turn write in terms of even-sector metric perturbations by following the chain of field redefinitions. For the even sector, we vary the expressions in terms of Ψ_+ for h_ab and α or K, use eq. (6.4), and relate Ψ_− to h_a via

Physical implications: Love numbers

Another aspect of black hole perturbation theory in which symmetry has recently been found to play a crucial role is the computation of tidal Love numbers. In particular, the puzzle over the unexpected vanishing of black hole Love numbers [49-53] spurred the discovery of underlying symmetry structures [54-57]. It turns out that the duality symmetry which is the focus of this paper also plays a role in the symmetry story for Love numbers. Consider the Regge-Wheeler action (5.22) in the static sector, i.e., setting time derivatives to zero, where primes denote r derivatives. In the static limit h_1 is auxiliary and decouples from h_0, so it can be consistently set to zero. The Regge-Wheeler variable Ψ_− is related to h_0 by In Ref.
[54] it was shown that the static Regge-Wheeler equation is invariant under ladder symmetries, which are responsible for the vanishing of tidal Love numbers in the odd sector. These come in the form of raising and lowering operators which relate solutions of the Regge-Wheeler equation to a solution with ℓ raised or lowered by one, At the lowest rung of the ladder, ℓ = 2, there is a further symmetry, given by δΨ^{ℓ=2}_− = Q_2 Ψ^{ℓ=2}_−, with

Q_2 = r^6 f ∂_r − 3r^5 f. (7.4)

It follows that any ℓ mode is symmetric under the "horizontal" ladder symmetry where Q_ℓ is built recursively from Q_2, Transforming from Ψ_− to h_0, we see that the metric transforms under the horizontal odd-sector ladder symmetry as It is straightforward to check that eq. (7.7) is a symmetry of eq. (7.1). However, such a symmetry of the Zerilli equation is not apparent. Indeed, the argument for the vanishing of Love numbers for the Zerilli equation in Ref. [54] relied on the fact, as we will show, that the duality invariance (6.4) implies that the even and odd Love numbers are equal. Ladder operators for the Zerilli equation can be constructed straightforwardly by sandwiching a Regge-Wheeler ladder operator between two applications of the duality symmetry, e.g., for the horizontal operators,

δΨ_{+,ℓ} = (∂_{r⋆} − W) Q_ℓ (∂_{r⋆} + W) Ψ_{+,ℓ}. (7.8)

It would be very interesting to know whether this symmetry is responsible for universal relations such as I-Love-Q [58, 59].

Equality of Love numbers from gravitational duality

Let us finish by establishing that the vanishing of the duality Noether current requires the tidal Love numbers in the even and odd sectors to be equal. Following Ref. [60], we calculate the Love numbers for static solutions by imposing regularity at the horizon and examining the behavior of the fields at infinity, Ψ_± → Ψ̄_± r^{ℓ+1} + λ_± r^{−ℓ}, (7.9) where λ_± are the Love numbers for the even (+) and odd (−) sectors and Ψ̄_± are constants.
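The ℓ = 2 horizontal symmetry (7.4) quoted above can be checked symbolically. Writing the static odd-sector equation as (fΨ′)′ = UΨ with U = 6/r² − 6M/r³ for f = 1 − 2M/r (a standard reduction in our own notation, not copied from Ref. [54]), one finds that Q_2 maps static solutions to static solutions, and in fact annihilates the regular solution Ψ = r³:

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
f = 1 - 2*M/r
U = 6/r**2 - 6*M/r**3              # static l = 2 Regge-Wheeler potential / f
psi = sp.Function('psi')(r)

Lop = lambda u: sp.diff(f*sp.diff(u, r), r) - U*u     # static RW operator
Q2 = lambda u: r**6*f*sp.diff(u, r) - 3*r**5*f*u      # eq. (7.4)

# On shell, psi'' is fixed by the equation of motion:
p2 = (U*psi - sp.diff(f, r)*sp.Derivative(psi, r))/f

expr = Lop(Q2(psi)).doit()
expr = expr.subs(sp.Derivative(psi, r, 3), sp.diff(p2, r))
expr = expr.subs(sp.Derivative(psi, r, 2), p2)
assert sp.simplify(expr) == 0      # Q2 maps static solutions to solutions

assert sp.simplify(Lop(r**3)) == 0   # Psi = r^3: regular static l = 2 solution
assert sp.simplify(Q2(r**3)) == 0    # ...which Q2 annihilates identically
```

That Q_2 annihilates the purely growing solution r^{ℓ+1} is the symmetry-based fingerprint of the vanishing odd-sector Love number at ℓ = 2.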
Since we are looking at static solutions, conservation of the Noether current (6.8) becomes the statement that the r_⋆ component (6.9b) is constant. First we need to ensure that the duality transformation (6.4) preserves the boundary conditions: namely, if Ψ_± is regular at the horizon, then so is ∂_{r⋆}Ψ_± ∓ W(r)Ψ_±. From eq. (6.3) we see that W(r_s) is finite, which leaves us to check that ∂_{r⋆}Ψ_± = f(r)∂_rΨ_± is regular at r = r_s. We can see this by solving the Regge-Wheeler and Zerilli equations perturbatively near the horizon. (7.10) It is convenient to use f as our radial coordinate, so that we can simply expand around f = 0 to look at the horizon. Using the fact that the Regge-Wheeler and Zerilli potentials both scale as f near the horizon, and that ∂_r = f′(r)∂_f ≈ ∂_f/r_s, we have Near the horizon this is solved by Regularity at the horizon demands c_2 = 0, so that f∂_rΨ_± ≈ f∂_fΨ_± → 0 as f → 0. So if Ψ_± is a solution with boundary conditions suitable for computing Love numbers, then Ψ̃_± ≡ Ψ_± + δΨ_± is as well. Now we simply need to compute J^{r⋆} at the horizon and at infinity and equate the two, where for static solutions We begin by evaluating this at the horizon. Primes denote r_⋆ derivatives, and we are assuming that Ψ_± are regular at the horizon, so Ψ′_± = (1 − r_s/r)∂_rΨ_± = 0 at r = r_s. From eq. (6.3) we see W²(r_s) + β = 0, so that the current vanishes for static solutions with regular boundary conditions, J^{r⋆} = 0. (7.14) At infinity, we again have W²(∞) + β = 0, so the leading-order terms will be those with only one derivative, Since J^{r⋆} = 0 everywhere, we conclude that λ_+ = λ_−, i.e., the even and odd sectors are forced to have equal Love numbers as a consequence of symmetry.
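The two boundary facts used in this argument — W(r_s) finite, and W² + β = 0 both at the horizon and at infinity — can be confirmed directly from the standard closed form of the superpotential found in the Chandrasekhar-duality literature (which may differ from the paper's eq. (6.3) by conventions):

```python
import sympy as sp

r, M, lam = sp.symbols('r M lam', positive=True)
f = 1 - 2*M/r
# Standard superpotential and constant (literature form; conventions vary):
W = lam*(lam + 1)/(3*M) + 3*M*f/(r*(lam*r + 3*M))
beta = -(lam*(lam + 1)/(3*M))**2

W_hor = W.subs(r, 2*M)                       # horizon value, r_s = 2M
assert W_hor == lam*(lam + 1)/(3*M)          # finite at the horizon
assert sp.simplify(W_hor**2 + beta) == 0     # W^2 + beta = 0 at the horizon
assert sp.limit(W**2 + beta, r, sp.oo) == 0  # ...and at infinity
```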
Discussion

We have computed the actions for scalar, electromagnetic, and linearized gravitational fields on a Schwarzschild background in the 2+2 formalism. In each case we focused on isolating and canonically normalizing the underlying dynamical degrees of freedom. In the cases of electromagnetism and gravity, this exercise revealed a manifest electric-magnetic duality symmetry, which holds off shell and accordingly can be used to construct conserved quantities. As a physical application of the Noether current associated to linearized gravitational duality, we showed that duality forces the even- and odd-parity perturbations to have identical tidal responses. Combining this duality with a "ladder" symmetry [54] which causes the odd Love numbers to vanish therefore extends that particular argument for vanishing Love numbers to even perturbations. It would be interesting to explore whether these symmetries play a role in universal relations for compact objects.

In the case of electromagnetism, we found a clear connection to objects arising in the Newman-Penrose and Geroch-Held-Penrose formalisms: the dynamical master variable is related to the middle-weight Maxwell scalar φ_1. This observation enabled us to derive actions for the Fackerell-Ipser equation and Teukolsky-Starobinsky identities. It would be quite interesting to extend these constructions to the Teukolsky equation for the extreme-weight Maxwell scalars, to gravity, and to Kerr, which is the case of prime astrophysical interest. We leave these questions for future work.

The non-local relation between A_µ and Ã_µ, ⋆dA = dÃ, underlies the non-local nature of δA_µ.
6 r_s ℓ(ℓ + 1) (ℓ − 1)(ℓ + 2)(2ℓ + 1) λ_+ − λ_−. (7.15)

In this way we construct (rather complicated) expressions δh_µν[h] which one can verify by explicit calculation comprise an off-shell symmetry of eqs. (5.21) and (5.40). Interestingly, they can be simplified somewhat using the equations of motion, in which case the expressions become entirely local. A natural question for future investigation is whether the δh_µν constructed this way is equal to a dual potential h̃_µν. Since only the electric part of the Weyl tensor has a non-vanishing background value, the linearized duality transformations do not simply rotate C_µναβ and C̃_µναβ.
2023-10-11T18:43:47.944Z
2023-10-06T00:00:00.000
{ "year": 2023, "sha1": "508b533e23f47a91ca0051058d9e6b465b066709", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2571-712X/6/4/61/pdf?version=1699373991", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "508b533e23f47a91ca0051058d9e6b465b066709", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
240373858
pes2o/s2orc
v3-fos-license
Clinical Outcomes after Intravenous Alteplase in Elderly Patients with Acute Ischaemic Stroke: A Retrospective Analysis of Patients Treated at a Tertiary Neurology Centre in England from 2013 to 2018

Intravenous thrombolysis with alteplase within 4.5 hours from symptom onset is a well-established treatment of acute ischaemic stroke (AIS). The aim was to compare alteplase for AIS between patients aged >80 and ≤80 years in our registry data, from 2013 to 2018. Mechanical thrombectomy cases were excluded. We assessed clinical outcomes over the six-year period and between patients aged over 80 and ≤80 years, using measures including the discharge modified Rankin Scale (mRS), 24-hour National Institutes of Health Stroke Scale (NIHSS) improvement, and symptomatic intracerebral haemorrhage (sICH) rate. Of a total of 805 AIS patients who received intravenous alteplase, 278 (34.5%) were over 80 years old, and 527 (65%) were younger. 616 (76.5%) received thrombolysis ≤3 hours after symptom onset and 189 (23.5%) within 3-4.5 hours. Median baseline mRS and NIHSS of the elderly cohort were 1 (IQR 0-5) and 13 (IQR 2-37), respectively, compared to 0 (IQR 0-5) and 9 (IQR 0-29) in the younger cohort. The sICH rate was 7.2% in the elderly and 4.6% in those ≤80 years, p = 0.05. NIHSS improved within 24 hours in 34% of the elderly cohort compared to 35% of the younger cohort. At hospital discharge, the mortality rate was 9% in the elderly cohort compared to 6% in the younger cohort, p = 0.154. 25% of patients aged >80 years had mRS ≤ 2, compared to 47% of the younger patients (p < 0.0001). In conclusion, thrombolysis in elderly patients results in clinical improvement comparable to younger patients.

Introduction

Intravenous (IV) thrombolysis is a well-established treatment for AIS. Its use has been widespread for stroke within three hours of onset since the National Institute of Neurological Disorders and Stroke (NINDS) trial reported in 1995 [1].
Thrombolysis for AIS up to 4.5 hours after onset became common practice after the results of the European Cooperative Acute Stroke Study (ECASS) III trial were published in 2008 [2]. However, this study had stringent criteria, excluding patients aged over 80 years and those with a combination of previous stroke and diabetes mellitus. Subsequently, evidence emerged that treating patients over 80 was appropriate [3,4], with the most recently updated meta-analysis [5], including all IV thrombolysis trials comparing alteplase with placebo, confirming that the elderly population benefitted from treatment. Now, IV thrombolysis is the standard hyperacute reperfusion therapy for AIS within 4.5 hours from symptom onset in all adult age groups worldwide. At our centre, we have performed thrombolysis for stroke since 2004. In 2010, stroke care in London was reorganised to create eight Hyperacute Stroke Units (HASUs), where the majority of stroke cases are first admitted. One of the main objectives of this was to facilitate the delivery of urgent reperfusion therapy. The HASU model was very successful, with an increase in thrombolysis rates from 5% to 13% and a decrease in mortality of 3% [6,7]. In 2013, the Sentinel Stroke National Audit Programme (SSNAP) was started, which replaced previous sentinel audits of stroke. We have been entering data into SSNAP since its inception. It provides a readily accessible method for identifying stroke patients and analysing data to audit a unit's performance. Here, we compared clinical outcomes after IV thrombolysis between elderly and younger stroke patients.

Materials and Methods

This is a retrospective study of our data (from a tertiary neurology centre in England) entered prospectively in the UK SSNAP audit, identifying all patients thrombolysed between January 2013 and December 2018.
SSNAP has permission from the NHS Health Research Authority under section 251 of the Health and Social Care Act 2006 to collect patient data without prospective consent. We defined the outcomes as favourable clinical outcome (modified Rankin Scale (mRS) ≤ 1), independence in activities of daily living (mRS ≤ 2) at hospital discharge, neurological improvement at 24 hours after IV alteplase (National Institutes of Health Stroke Scale (NIHSS) improvement ≥ 8 or score = 0-1), rate of symptomatic intracranial haemorrhage (sICH), and mortality rate. We excluded patients if (1) they underwent mechanical thrombectomy or (2) symptom onset to treatment was >4.5 hours. Data were compared using chi-squared tests or Mann-Whitney U tests. Multiple logistic regression models were used to assess the association between age groups and binary clinical outcomes, adjusted for patient characteristics (comorbidities, prestroke mRS score, and NIHSS score). The shift of clinical outcomes was compared using ordinal regression analysis. All statistical tests were two-sided, and p values < 0.05 were considered statistically significant. Statistical analysis was performed in SPSS (V.22; SPSS Inc., Chicago, Illinois, USA). No external funding was involved in the conduct of this study.

Discussion

In our retrospective, single-centre, large cohort analysis, we observed a higher rate of alteplase administration to elderly stroke patients compared to the national average [8] (34.5% vs. 11%), the pooled cohort of randomized trials, and the European SITS-MOST registry database (28.6%) [9,10]. Stroke tends to disproportionately affect older individuals [11], so data on the risks and benefits of alteplase in the older population are important. We explored the safety profile of alteplase in the >80-year-old cohort and their outcome at hospital discharge compared to younger patients.
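The binary outcome definitions in the methods paragraph above can be encoded directly; the helper names below are our own illustration, not the study's analysis code:

```python
def early_improvement(nihss_baseline: int, nihss_24h: int) -> bool:
    """Neurological improvement at 24 h, per the study definition:
    NIHSS drop of >= 8 points, or a 24-hour NIHSS score of 0-1."""
    return (nihss_baseline - nihss_24h) >= 8 or nihss_24h <= 1

def favourable_outcome(mrs_discharge: int) -> bool:
    """Favourable clinical outcome at discharge: mRS <= 1."""
    return mrs_discharge <= 1

def independent(mrs_discharge: int) -> bool:
    """Independence in activities of daily living: mRS <= 2."""
    return mrs_discharge <= 2

assert early_improvement(13, 4)        # 9-point NIHSS drop
assert early_improvement(5, 1)         # NIHSS 0-1 at 24 hours
assert not early_improvement(9, 5)     # only a 4-point drop
assert favourable_outcome(1) and not favourable_outcome(2)
assert independent(2) and not independent(3)
```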
2 Stroke Research and Treatment

Table 2 shows that in either time-to-treatment window, the older cohort had a much lower chance of enjoying an excellent outcome or independence at discharge; this was highly significant. Table 1 provides one possible explanation for this: it shows that at baseline, the older patients in our cohort on average had a worse clinical status (higher mRS) and presented with larger syndromes (measured by NIHSS). Overall, the outcome in the elderly group was poorer, although mortality at hospital discharge in our treated elderly cohort was comparable to those under 80 years of age (p = 0.154). Our results agree with the Third International Stroke Trial (IST-3), which showed no overall difference in long-term mortality in elderly patients who received alteplase versus standard care alone [12,13]. Furthermore, the sICH rate in our overall cohort was comparable to that in published data [5]; in our elderly cohort specifically, it was higher than that in the younger group (7.2% vs. 4.6%) but did not quite reach statistical significance (p = 0.05). This is consistent with a recent registry study [9] that showed that the incidence of sICH and overall mortality following thrombolysis of patients aged >80 years was not increased in routine practice versus clinical trials [12,13]. Time-to-treatment windows (0-3 vs. 3-4.5 hours) did not have an impact on case fatality (p = 0.12) or sICH rate (p = 0.47) in our elderly cohort. However, our data do provide a novel insight: the proportion of patients who improved within 24 hours of treatment was similar irrespective of age. In fact, a greater proportion of the older group improved in the later time window than their younger counterparts (Table 2). This implies that there was a group of patients who were able to benefit from treatment irrespective of their age, severity of syndrome, or preexisting disability.
This supports evidence [12] that alteplase is as effective in older as in younger patients compared to placebo. We speculate that some of these patients may have a core/penumbra mismatch which manifests as substantial early clinical improvement following successful thrombolysis. Such core/penumbra mismatch has been demonstrated with perfusion imaging [14,15] and has been utilised successfully in clinical trials [16,17]. Pooled analysis of randomized trials and the European SITS-MOST registry [9,10] showed better outcomes in younger versus older patients in both time-to-treatment periods. Interestingly, elderly patients with severe stroke were excluded from both these studies [9,10], as were those with a history of diabetes mellitus and stroke. Compared to these studies, more elderly patients in our centre received alteplase within 3-4.5 hours of onset of symptoms (35% vs. 22%) [9,10]. Our elderly cohort also included those with a history of diabetes mellitus (5%) and stroke (13%). Thus, our analysis had sufficient power to inform the relationship between age, early neurological improvement, and treatment time beyond three hours. Early neurological improvement is reported to be the best predictor, or surrogate marker, of 3-month functional outcome and recanalization after thrombolysis [18-20]. Therefore, our study provides support for thrombolysis in the elderly. Advanced stroke imaging, such as Computed Tomography (CT) perfusion [21-23], may help inform which patients aged over 80 should receive IV thrombolysis based on individual assessment. We suggest that the routine use of advanced imaging in stroke thrombolysis may better focus its administration. Our data intriguingly also showed that significantly more elderly patients with large stroke syndromes (around NIHSS 16) were thrombolysed within three hours (Figure 1).
The reason for this is not evident, but we speculate that the treating physician, patient, and their family may have felt that there was little to lose in accepting treatment. This observation may explain why our elderly cohort was treated in spite of having more severe stroke syndromes than their younger counterparts. Several limitations of our study have to be noted, including methodological limitations related to the retrospective analysis. We did not have three-month outcome data available. Therefore, our results appear not to tally with the IST-3 findings [12], but this is probably because our outcome was "discharge mRS" while the outcome in IST-3 was "3-month mRS." Patients would have received more rehabilitation and made further improvement by 3 months post stroke. However, our well-kept stroke registry (with minimal missing data points and consecutive patients) did not identify a selection bias favoring the younger cohort. There was no evidence that younger, less disabled patients were selectively chosen for thrombectomy and therefore excluded from this study, while older, more disabled patients were treated with alteplase alone.

Conclusions

Over a 6-year period, our ability to deliver thrombolysis with alteplase safely in a very large elderly cohort adds additional reassurance that alteplase does not cause more harm in patients aged >80 years than in younger adults. Older patients had worse outcomes, probably because they had larger stroke syndromes and were more disabled at baseline.

Data Availability

Data are available on request.

Conflicts of Interest

Xuya Huang, Phillip Nash, Vafa Alakbarzade, Brian Clarke, and Anthony Pereira declare that they have no potential conflicts of interest that might be relevant to the contents of this manuscript.
2021-11-02T15:09:21.759Z
2021-10-31T00:00:00.000
{ "year": 2021, "sha1": "4c62aae29467b5f8345ef226e7e4817d689b0037", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/srt/2021/3738017.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "81ee32ca5fb2e2a0283f0e180c5489deb9b95de5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
88516227
pes2o/s2orc
v3-fos-license
Hikita conjecture for the minimal nilpotent orbit

We check that the statement of the Hikita conjecture holds for the case of the minimal nilpotent orbit of a simple Lie algebra $\mathfrak{g}$ of type ADE and $\mathbb{C}^2/\Gamma$.

Introduction

Symplectic duality is a hypothetical duality between conical symplectic singularities. At the moment, no rigorous definition exists, though there are a number of conjectured examples and a great number of connections that should tie together dual singularities. Namely, the Springer resolution is dual to the Springer resolution for the Langlands dual group ([4]), hypertoric varieties are dual to other hypertoric varieties ([5]), finite ADE quiver varieties are dual to slices in the affine Grassmannian for the Langlands dual group ([6]), and many others. It is not always known which symplectic resolution is dual to which, as the theory is a work in progress. The case we are interested in is, however, known: the closure of the minimal orbit in a simply laced Lie algebra is dual to the Slodowy slice to the subregular orbit. Now, if we have a pair of "dual" conical singularities, the Hikita conjecture is a relation between the cohomology of one resolution and the coordinate ring of the other one. Namely, if X̃ → X and X̃* → X* are a pair of dual conical symplectic resolutions and T is a maximal torus of the Hamiltonian action on X*, there is an isomorphism between the cohomology ring of X̃ and the coordinate ring of the fixed-point scheme (X*)^T. Hikita observed it in his paper ([3]) for many of the aforementioned cases and proved it for the Hilbert scheme of points, finite type A quiver varieties, and hypertoric varieties. Let g be a simply laced simple Lie algebra. The closure of its minimal nilpotent orbit is expected to be dual to the Slodowy slice to the subregular orbit. The Slodowy slice to the subregular orbit in a Lie algebra g is the same as C²/Γ, where Γ is a finite subgroup of SL(2, C) (corresponding to g).
It is a symplectic variety with rational double points. It admits a unique symplectic resolution C̃²/Γ → C²/Γ, given by the minimal resolution. The cohomology algebra of this resolution is known (we will prove it) to be Sym^{≥2}[h], where h is the abstract Cartan algebra of the Lie algebra g corresponding to Γ. The "dual" symplectic variety to it is given by the closure of the minimal nilpotent orbit in g, or, equivalently (via the isomorphism), the closure of the minimal orbit O_min in g*. We will work with the latter. If we choose a generic action of C*, such that the fixed-point scheme for it and for the torus T are the same, the Hikita conjecture for this pair of singular symplectic varieties states that

2. We first prove the statement about the cohomology ring: where h is the abstract Cartan algebra of g.

Proof. First, one notices that on both C²/Γ and its resolution C̃²/Γ there is an action of C* that contracts the former to the point zero. Thus, due to homotopy equivalence, we can restrict the computation of the cohomology ring of C̃²/Γ to the fiber of the resolution over the zero point of C²/Γ. This is known to be a tree of P¹'s in the shape of a Dynkin diagram of type ADE. Its cohomology ring can be seen to be given by Sym^{≥2}[h]. First of all, π₁ of it is clearly 0, since we can retract every loop to a tree, formed by the points of intersection and the connecting lines between them, and a tree is contractible. We are left to deal with H². To do so, one can observe that our tree clearly has a decomposition into affine cells and dots, so we can obtain the generators for H² from each affine line. Since the number of P¹'s (and, thus, A¹'s) equals the rank of the corresponding Lie algebra g of type ADE, we thus obtain the cohomology ring, isomorphic to Sym^{≥2}[h], where h is the abstract Cartan algebra of g. Since the left part is known already, what we are left to do is to check that the right-hand side is the same. To do this it will be useful to simplify the problem as follows.
First of all, one should notice that taking C*-fixed points means the same as intersecting with the Cartan subalgebra: O_min^{C*} = h ∩ O_min as a scheme. So, instead of working with C*-invariant functions we can simply take the ideal of O_min in Sym[g], look at the image of its projection in Sym[h], and factorize by it. The result of the factorization will be the ring we seek. Now, let g be a simple Lie algebra, fix a Cartan subalgebra h, and let O_min denote the closure of the minimal nilpotent orbit in g*. The statement about the equality of algebras (with the reasoning above) will clearly follow from the following theorem. Before moving to its proof we want to take a closer look at some simple special cases of this statement.

Example 2.3. Consider the easiest example possible: the case of sl(2). Since we have chosen h, we have both e and f, and the nilpotent orbit (in this case there is only one nilpotent orbit apart from 0) is given by the equation h² + ef = 0 (or, equivalently, by the ideal generated by h² + ef). This, after the projection to Sym[h], will give us h², which clearly generates the whole algebra in degree ≥ 2.

Example 2.4. One more example would be the case of the Lie algebra sl(n). It is a bit more complicated: at least, there is more than one nilpotent orbit. If a matrix belongs to the minimal nilpotent orbit, its square is zero and its rank is 1. In terms of matrix equations such a matrix A is given by A² = 0 and rk(A) = 1: every 2 × 2 minor should be zero and the matrix squared should be zero. If we restrict those to the Cartan subalgebra we will get a_ii² = 0 (from A² = 0) and a_ii a_jj = 0 (from det_ij = 0), which gives us all the functions in degree 2 of Sym[h] for gl(n), and thus in sl(n) too.

To understand the theorem better we are going to use the following knowledge about the adjoint representation of g. One should note that g* is a representation of the type V*(θ), where θ is the highest weight of the adjoint representation.
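The identity behind Example 2.3 is the Cayley-Hamilton relation for a traceless 2 × 2 matrix: A² = −det(A)·I = (h² + ef)·I, so A is nilpotent exactly when h² + ef = 0. A quick symbolic check (our own illustration, using sympy):

```python
import sympy as sp

h, e, f = sp.symbols('h e f')
A = sp.Matrix([[h, e], [f, -h]])       # generic element of sl(2)

# Cayley-Hamilton for a traceless 2x2 matrix: A^2 = -det(A) * I = (h^2 + ef) * I
assert sp.simplify(A*A - (h**2 + e*f)*sp.eye(2)) == sp.zeros(2)
assert sp.simplify(-A.det() - (h**2 + e*f)) == 0
```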
Moreover, Sym^n[g] can therefore be given as a sum Sym^n[g] = V(nθ) ⊕ L_n, where L_n stands for the sum of representations of lower weight. The fact is that the ideal we are interested in is actually given by ⊕_n L_n. Indeed, every function from a representation of lower weight kills the highest weight vector v_θ from g* = V*(θ). To find the generators of this ideal we can observe that it is actually given by kernels of maps like The objects of the form V(nθ) form a subring in the ring of all highest weight representations of our algebra g. The structure of generators of such a ring is given by the following theorem of Kostant (see [2], [1]).

Theorem 2.5. Let g be a semisimple Lie algebra and let Γ_{ω_1}, ..., Γ_{ω_n} be the irreducible representations corresponding to the fundamental weights. Let . This is a commutative graded algebra, and we can split it into pieces where a = (a_1, ..., a_n) is an n-tuple of nonnegative integers. A_a then is the direct sum of the irreducible representation Γ_λ, whose weight is given by λ = Σ a_i ω_i, and the sum J_a of representations with strictly smaller highest weights. Thus, J = ⊕ J_a is an ideal in A, and it is generated by the elements of the form where v_1 and v_2 stand for vectors in Γ_{ω_1} and Γ_{ω_2}, and the sum [Σ_{i=1}^n (x_i ⊗ y_i + y_i ⊗ x_i)] denotes the Casimir element of the Lie algebra g.

Namely, for our case the theorem says that the kernel of the morphism g* ⊗ g* = V(θ) ⊗ V(θ) → V(2θ) is generated by the elements of the form C(v · w) − 2(θ, θ)v · w, where C is the Casimir element. Since we are interested in the image of the projection of this subspace on Sym²[h], we can take a vector h ∈ h for both v and w and see what happens to it. Namely, if we choose a basis e_i, f_i, h_j in g, the Casimir element can be written as a sum
SARS-CoV-2-Specific Memory T Lymphocytes From COVID-19 Convalescent Donors: Identification, Biobanking, and Large-Scale Production for Adoptive Cell Therapy

The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic is causing a second outbreak, significantly delaying the hope for the virus' complete eradication. In the absence of effective vaccines, we need effective treatments with few adverse effects that can treat hospitalized patients with COVID-19 disease. In this study, we determined the existence of SARS-CoV-2-specific T cells within CD45RA− memory T cells in the blood of convalescent donors. Memory T cells can respond quickly to infection and provide long-term immune protection to reduce the severity of COVID-19 symptoms. Also, CD45RA− memory T cells confer protection from other pathogens encountered by the donors throughout their lives, which is of vital importance for resolving the secondary infections that usually develop in patients hospitalized with COVID-19. We found SARS-CoV-2-specific memory T cells in all of the CD45RA− subsets (CD3+, CD4+, and CD8+) and in the central memory and effector memory subpopulations. The procedure for obtaining these cells is feasible, easy to implement for small-scale manufacture, quick and cost-effective, involves minimal manipulation, and has no GMP requirements. This biobank of SARS-CoV-2-specific memory T cells would be immediately available "off-the-shelf" to treat moderate/severe cases of COVID-19, thereby increasing the therapeutic options available for these patients.

INTRODUCTION
The new severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged as a worldwide pandemic in late 2019, causing an infectious disease known as COVID-19, with a wide and diverse range of symptoms. In most infected patients, the virus causes mild symptoms including fever and cough.
In some cases, however, the virus causes a life-threatening disease with symptoms that include pneumonia, dyspnea, and a hyperinflammatory process involving cytokine storms and systemic immune thrombosis. Patients suffering from these symptoms require hospitalization and intensive treatment. A common feature of this severe disease is lymphopenia, which makes patients more vulnerable to co-infections and correlates with disease severity (Qin et al., 2020; Zhao et al., 2020). The first wave of the pandemic was contained with strong restrictive measures, social distancing, and healthcare interventions, although thousands of patients died. Far from disappearing, the SARS-CoV-2 pandemic has begun its second wave, thereby dimming the hopes for its complete eradication. The development of vaccines has been vigorously pursued to generate active immunity through immunization (Thanh Le et al., 2020), but there is uncertainty as to the duration of the antibody-mediated immune response to COVID-19 (Li et al., 2008). Effective treatments are needed that can reduce symptom severity and hospital stays and increase survival. So far, the only treatment for COVID-19 is supportive. Antiviral therapy with lopinavir-ritonavir is ineffective in improving the outcomes of hospitalized patients with COVID-19 (Cao and Hayden, 2020). Remdesivir has recently been approved to treat COVID-19, although its beneficial effect is still controversial (Jomah et al., 2020; Wang et al., 2020). Preliminary results with anti-inflammatory therapies such as dexamethasone (Recovery Collaborative Group et al., 2020) and mesenchymal stromal cells (Sanchez Guijo et al., 2020) have shown promise for critically ill patients (World Health Organization [WHO] grade 6 and 7) (World Health Organization, 2020), but there are as of yet no effective antiviral therapies for stopping the progress of this disease in its early stages (WHO grades 1-4, moderate and severe) or even for preventing COVID-19.
The role of adaptive immunity in COVID-19 and the protective immunity conferred by T cells is still being characterized (Grifoni et al., 2020; Huang et al., 2020; Leung et al., 2020; Rodda et al., 2020; Sekine et al., 2020), and the role of memory T cells in conferring protection against SARS-CoV-2 has not yet been properly defined. The presence of memory T cells specific for another SARS coronavirus was found up to 11 years post-infection (Ng et al., 2016). This immunological memory creates a more rapid and robust secondary immune response to reinfections, which is determinant and constitutes the basis of adoptive cell therapy for viral infections in immunosuppressed patients in the context of allogeneic hematopoietic stem cell transplantation (HSCT). With this approach, the infusion of CD45RA− memory T cells considerably reduces the morbidity and mortality induced by viral reactivations, for example by cytomegalovirus (CMV) and Epstein-Barr virus (EBV), and simultaneously reduces the alloreactivity conferred by naïve CD45RA+ T cells (Bleakley et al., 2014, 2015; Teschner et al., 2014; Triplett et al., 2015, 2018). Memory T cells appear when T cells recognize a pathogen presented by their local antigen-presenting cells. These T cells activate, proliferate, and differentiate into effector cells secreting compounds to control the infection. Once the pathogen has been cleared, most of the antigen-specific T cells disappear, and a pool of heterogeneous long-lived memory T cells persists (Mueller et al., 2013; Pennock et al., 2013). This population of memory T cells, defined as CD45RA− or CD45RO+, is maintained over time, conferring rapid and long-term immune protection against subsequent reinfections (Berard and Tough, 2002; Channappanavar et al., 2014).
In this study, we report the presence of a SARS-CoV-2-specific T-cell population within CD45RA− memory T cells from the blood of convalescent donors that can be easily, effectively, and rapidly isolated by CD45RA depletion. These SARS-CoV-2-specific CD45RA− memory T cells may be able to clear virally infected cells and confer T-cell immunity against subsequent reinfections. These cells can be stored for use in moderate and severe cases of COVID-19 patients requiring hospitalization, thereby representing an off-the-shelf living drug.

Donors' Characteristics
The study included 6 COVID-19 convalescent donors and 2 healthy controls (Table 1). The convalescent donors were all tested for SARS-CoV-2 using reverse transcriptase polymerase chain reaction (RT-PCR) in nasopharyngeal samples between March and April 2020. The eligibility criteria included an age of 21-65 years and a history of COVID-19 with a documented positive RT-PCR test for SARS-CoV-2. At the time of this study, all donors tested negative for SARS-CoV-2. The median age of the convalescent donors was 37 years (range 23-41); 3 were women and 3 were men. The median duration until a negative PCR for SARS-CoV-2 was 13 days (range 5-17). Two of the donors presented with bilateral pneumonia but did not require hospitalization. Only 1 of the donors underwent treatment with oral hydroxychloroquine plus azithromycin plus lopinavir/ritonavir, while the other was treated with oral hydroxychloroquine plus azithromycin. The study enrolled two healthy donors who had not been exposed to COVID-19 patients and tested negative for anti-SARS-CoV-2 antibodies in June 2020. All participants granted their written consent, and the study was approved by the Hospital Institutional Review Board (IRB number: 254/20).
Cell Processing and Detection of SARS-CoV-2-Specific Memory T Cells by Interferon-Gamma Assay
Peripheral blood mononuclear cells (PBMCs) from healthy donors and convalescent donors were isolated from their peripheral blood by density gradient centrifugation using Ficoll-Paque (GE Healthcare, Chicago, IL, United States).

Table 1 (excerpt). Outpatients, n (%): 6 (100%). Treatment: acetaminophen, 4; hydroxychloroquine + azithromycin + lopinavir/ritonavir, 1; hydroxychloroquine + azithromycin, 1.

Briefly, the cells were rested overnight (o/n) at 37 °C in TexMACS Medium (Miltenyi Biotec, Bergisch Gladbach, Germany) supplemented with 10% AB serum (Sigma-Aldrich, Saint Louis, MO, United States) and 1% penicillin/streptomycin (Sigma-Aldrich). The following day, 1 × 10^6 cells were stimulated with pooled or individual overlapping SARS-CoV-2 peptides at a final concentration of 0.6 nmol/mL. For the positive control, 1 × 10^6 cells were stimulated in the presence of the plate-bound stimulator OKT3 at a final concentration of 2.8 µg/mL (mouse anti-human CD3 clone OKT3, BD Biosciences). Cells with SARS-CoV-2 peptides and the positive control were co-stimulated with CD28/CD49d at a final concentration of 5 µg/mL (anti-human CD28/CD49d purified clone L293 L25, BD Biosciences). Basal interferon gamma (IFN-γ) production by PBMCs was included as a background control in the absence of stimulation and costimulation. The peptide pools were short 15-mer peptides with 11-amino-acid overlaps that can bind MHC class I and class II complexes and were therefore able to stimulate CD4+ and CD8+ T cells. The peptides cover the immunodominant sequence domains of the surface glycoprotein S, the complete sequence of the nucleocapsid phosphoprotein N, and the membrane glycoprotein M (GenBank MN908947.3, Protein QHD43416.1, Protein QHD43423.2, Protein QHD43419.1; Miltenyi Biotec, Germany).
After 5 h of stimulation, the cells were labeled with IFN-γ Catch Reagent (IFN-γ Secretion Assay - Detection Kit, human; Miltenyi Biotec) containing bispecific antibodies for CD45 and for the IFN-γ secreted by the stimulated target cells. After the secretion phase, the cell-surface-bound IFN-γ was targeted using the IFN-γ PE antibody included in the kit.

Interleukin-15 Stimulation of Memory T Cells
CD45RA− memory T lymphocytes from the convalescent donor were thawed and stimulated with interleukin (IL)-15 to obtain an activated phenotype. Cells were incubated in TexMACS Medium (Miltenyi Biotec, Germany) supplemented with 5% AB serum (Sigma-Aldrich, Saint Louis, MO, United States), 1% penicillin/streptomycin (Sigma-Aldrich, Saint Louis, MO, United States), and 50 ng/mL of IL-15, o/n and for 72 h. After that time the cells were harvested and the phenotypic assay was performed. The same culture without IL-15 was run in parallel as a control.

Donor Selection, Human Leukocyte Antigen Typing, and Large-Scale Clinical CD45RA+ T-Cell Depletion
The criteria for selecting convalescent donors were as follows: (1) IFN-γ secretion upon activation with the three SARS-CoV-2-specific peptide pools (M, N, S) and (2) the most frequent human leukocyte antigen (HLA) typing, to cover most of the population. The HLA phenotyping of the convalescent donor was performed at the Centro de Transfusión of the Comunidad de Madrid on two independent samples by SSO and NGS: A*02:01, A*24:02 / B*44:02, B*51:01 / C*16:02, C*16:04 / DRB1*07:01, DRB1*11:03 / DQB1*02:02, DQB1*03:01. Non-mobilized apheresis was performed at the Bone Marrow Transplantation and Cell Therapy Unit of University Hospital La Paz (Madrid, Spain) using a CliniMACS Plus cell separation system (Miltenyi Biotec). The donor provided written informed consent, and the study was conducted according to the Declaration of Helsinki and the guidelines of the local ethics committee (IRB number 5579).
The Unit was responsible for complying with the requirements regarding the quality and safety of the donation, procurement, storage, distribution, and preservation of human cells and tissues under the specific Spanish regulations. Following apheresis, CD45RA+ cells were depleted by immunomagnetic separation using the CliniMACS CD45RA Reagent and the CliniMACS Plus system (both from Miltenyi Biotec), following the manufacturer's instructions. CD45RA− cells were frozen using autologous plasma plus 5% dimethyl sulfoxide (DMSO) and stored. We were able to cryopreserve 30 aliquots at various doses according to the trial design. The viability, purity, phenotype, and spectratyping of the CD45RA− fraction were analyzed by flow cytometry (FCM).

TCR Spectratyping
Most of the CDR3-encoding regions of the TCRVβ and TCRVγ genes were amplified using 2 V-J multi-primer sets for each locus and one additional multi-primer set covering D-J TCRVβ (Vitro, Master Diagnostica, Spain). Primers marked at their 5′ end with 6-FAM fluorochrome enabled the denatured fragment size analysis by capillary electrophoresis (ABI3130 DNA analyzer) and Genemapper software (Thermo Fisher Scientific, United States).

Statistical Analysis
The quantitative variables are expressed as mean ± standard deviation (SD), while the qualitative variables are expressed as percentages (%). A two-tailed Mann-Whitney nonparametric test was used to compare means of the non-paired samples using GraphPad Prism (version 8.0.0 for Windows, GraphPad Software, San Diego, CA, United States). A P-value < 0.05 was considered statistically significant.

Memory T Cells From Convalescent Donors Contain a SARS-CoV-2-Specific Population
We detected the presence of a SARS-CoV-2-specific population in both subsets of naïve CD45RA+ and memory CD45RA− T cells in the PBMCs of the convalescent donors but not in the healthy controls (Table 2 and Figure 1).
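The two-tailed Mann-Whitney comparison of non-paired samples described above can be sketched with scipy; the per-donor values below are illustrative placeholders, not the study's raw data:

```python
from scipy.stats import mannwhitneyu

# Illustrative per-donor IFN-gamma+ percentages (NOT the study's raw data):
# IFN-gamma expression within CD45RA- (memory) vs CD45RA+ (naive) CD3+ cells.
memory = [1.3, 0.9, 1.5, 0.8, 1.2, 1.0]
naive = [0.5, 0.3, 0.6, 0.2, 0.4, 0.4]

# Two-tailed Mann-Whitney U test for two non-paired samples,
# mirroring the analysis run in GraphPad Prism in the study.
stat, p = mannwhitneyu(memory, naive, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # p < 0.05 would be deemed significant
```

With such small group sizes scipy computes the exact null distribution, which matches Prism's exact nonparametric test.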
The subsets also showed reactivity for the single peptide pools M, N, and S (data not shown). The mean CD45RA− CD3+ population in the convalescent donors was 90.01%. IFN-γ expression within the CD45RA− CD3+ population was 1.12%, whereas IFN-γ expression within the CD45RA+ CD3+ population was 0.40% (P = 0.065) (Table 2). We detected no IFN-γ expression in the healthy individuals. Despite the cohort's small size, we found no synergistic effect on the percentage of IFN-γ when the three peptide pools were mixed compared with the single pools (data not shown).

Identification of SARS-CoV-2-Specific CD4+ T and CD8+ T-Cell Responses Within CD45RA− Memory T Cells
We then sought to determine whether both the CD8+ and CD4+ subsets contained specific SARS-CoV-2 T cells within the PBMCs of the convalescent donors. Among all subsets studied, we observed that CD45RA− CD4+ and CD45RA− CD8+ cells expressed 1.25 and 0.85% of IFN-γ, respectively (P = 0.132) (Table 2, Figure 2, and data not shown). Thus, all of the convalescent donors who recovered from COVID-19 generated CD4+ T and CD8+ T-cell responses against SARS-CoV-2 within the memory CD45RA− T-cell population. We then analyzed the T central memory (CM) (CD45RA− CD3+ CD27+) and T effector memory (EM) (CD45RA− CD3+ CD27−) compartments. Although there were no significant differences, we observed responses to the SARS-CoV-2-specific peptides within all subpopulations. Within the CD4+ and CD8+ CM T-cell subsets, we detected a mean of 1.26 and 0.79% of IFN-γ+ cells, respectively (P = 0.132). When examining the CD4+ and CD8+ EM T-cell subsets, we detected a mean of 1.25 and 1.06% of IFN-γ+ cells, respectively (P = 0.108) (Table 2 and Figure 3). These data demonstrate the presence of a population of memory T cells specific for SARS-CoV-2 within the CD45RA− CD3+ memory T cells.

Large-Scale CD45RA Depletion: Creation of an Off-the-Shelf Biobank of CD45RA− Memory T Cells From a COVID-19 Convalescent Donor
After depletion of the CD45RA+ cells (as described in the Materials and Methods section), 99.8% of the cells were CD45RA− CD3+, and most of the CD45RO+ cells were CD4+, with a high CD4/CD8 ratio. The viability of the cells after thawing was 98-99% (data not shown). Most of the CD4+ and CD8+ cells had a CM phenotype (89.1 and 61.6%, respectively). Both the CM and EM subpopulations expressed IFN-γ after exposure to the three peptide pools. Thus, we found that 0.38, 0.70, and 0.39% of the cells within the CD4+ CM, CD8+ CM, and CD4+ EM subsets expressed IFN-γ, respectively. We found no specific IFN-γ expression within the CD8+ EM subset (Table 3).

CDR3 Usage of the TCR
The polyclonal distribution of both the TCRβ and TCRγ CDR3-encoding regions was almost identical between the controls and the CD45RA− population, whereas different oligoclonal fragments were observed in the CD45RA+ population. Three of the oligoclonal fragments seen in the CD45RA+ cells were also identified in the CD45RA− cells (Figure 4).

Induction of an Activated Memory T-Cell Phenotype Within CD45RA− Memory T Cells After CD45RA Depletion
IL-15 is an essential cytokine for memory T cells that induces the activation, proliferation, and survival of T cells. After incubating the CD45RA− T cells with IL-15, we observed an increase in the activation markers HLA-DR, CD69, and CD25 after 72 h of incubation (2.63-fold, 29.59-fold, and 8.52-fold, respectively) when compared with the o/n incubation (1.15-fold, 3.18-fold, and 1.50-fold) (Table 3). The expression of the exhaustion markers NKG2A and PD-1 was also higher after 72 h of incubation (Supplementary Figures 1-5). We then examined the expression of the chemokine receptor CCR7 and the integrin CD103, which are important for the homing of T cells to the respiratory tract (Campbell et al., 2001; Uss et al., 2006).
We observed that most of the CD3+ CD4+ cells expressed CCR7 (88.13%), whereas 45.42% of the CD3+ CD8+ subpopulation expressed CCR7. As expected, CD103 expression was low in the peripheral blood (Uss et al., 2006). We detected an expression of 2.18, 5.96, and 1.38% in the CD3+ CD103+, CD3+ CD8+ CD103+, and CD3+ CD4+ CD103+ compartments, respectively. Although the fold increase was not particularly remarkable in the CD45RA− CD3+ cells after 72 h of incubation (1.00-fold CCR7 and 1.24-fold CD103), the increase was higher (1.40-fold) within the CD8+ subpopulation (Table 3 and Supplementary Figures 6, 7).

DISCUSSION
In the absence of an effective vaccine and with the emergence of a second wave, there is an urgent need to find effective treatments for COVID-19. Here we report the presence of a SARS-CoV-2-specific T-cell population within the CD45RA− memory T cells of blood from convalescent donors. These cells can be easily, effectively, and rapidly isolated following a donor selection strategy based on IFN-γ expression after exposure to SARS-CoV-2-specific peptides and on HLA antigen expression, thereby obtaining clinical-grade CD45RA− memory T cells from the blood of convalescent donors. These cells can be biobanked, thawed, and employed as a treatment for moderate to severe cases of COVID-19. These so-called "living drugs" retain the memory against SARS-CoV-2 and other pathogens the donors have encountered. Unlike plasma, whose concentration decreases after infusion, memory T cells expand and proliferate and should therefore have a more lasting effect. In previous studies, this population of CD45RA− memory T cells showed no alloreactivity when compared with the CD45RA+ counterpart (Fernández et al., 2017). These cells were mainly CD4+ (Fernández et al., 2019) and showed effectiveness against viral infections (Triplett et al., 2015).
Phenotypically, we found that CD45RA− memory T cells were fully capable of producing IFN-γ in the presence of SARS-CoV-2-specific peptides. Both the CD4+ and CD8+ CM and EM subsets were able to generate IFN-γ after exposure to the SARS-CoV-2 peptides, showing broad coverage of the response. CD8+ cytotoxic cells can kill virally infected cells by secreting cytokines; at the same time, CD4+ T cells increase the ability of CD8+ T cells to eliminate the virus. CD4+ T cells have been shown to play an important role in controlling the replication of other viruses such as EBV and CMV (Juno et al., 2017). EM T cells are the first responders to infection, with a quick and strong response to pathogens, whereas CM T cells proliferate and create a new round of effector T cells (Pennock et al., 2013; Farber et al., 2014). IL-15 is essential for the survival of the memory CD8+ and CD4+ T-cell subsets, promoting the activation of CD4+ T cells, cytokine production and proliferation, and the maintenance of the memory population (Brincks and Woodland, 2010; Chen et al., 2014). After incubating the cells for 3 days with IL-15, we obtained a phenotype characteristic of an activated state, as shown by the fold increase in the activation markers HLA-DR, CD69, and CD25 and in the CCR7 and CD103 markers, which are characteristic of the homing of T cells to the lymph nodes and mucosal tissues. In our study, we observed no IFN-γ production by SARS-CoV-2-specific T cells in healthy unexposed individuals, which agrees with the findings of Peng et al. (2020) but differs from other previously published data (Grifoni et al., 2020). This discrepancy could be due to the different detection methods employed and the small sample size. Studies have shown a correlation between neutralizing antibodies and symptom severity, where antibody responses wane over time, even in as short a period as 6-7 weeks after symptom onset (Ibarrondo et al., 2020; Long et al., 2020; Seow et al., 2020).
Importantly, our data show the presence of SARS-CoV-2 memory T cells in convalescent donors with mild symptoms, which has enormous implications for protection against further SARS-CoV-2 infections and for decreasing the severity of COVID-19. Further studies with larger cohorts are needed to determine the duration of SARS-CoV-2 memory and thereby elucidate the long-term protection against SARS-CoV-2, as has been previously demonstrated for another coronavirus (Ng et al., 2016). For proper T-cell recognition, both the donor and the recipient need to share HLA alleles. Given the vast number of convalescent donors, finding a proper haploidentical donor based on HLA typing would not be difficult. Based on previously published data (Leung et al., 2020), this donor can cover around 93.6% of Spain's population. In accordance with the HLA donor-recipient match, we estimate that four donors would cover almost the whole of Spain's population (Leen et al., 2013; Leung et al., 2020). These cells are expected to remain in the patients until the host immune system has recovered. Previous experience with HSCT has shown that these cells can be detected in the host for weeks (Triplett et al., 2015). Besides, data from the phase I clinical trial (unpublished) show that we can detect donor chimerism for 3 weeks. The procedure for obtaining the cells is easy to implement for small-scale manufacture, is quick and cost-effective, and involves minimal manipulation. Also, CD45RA− memory T-cell-based therapy is manufactured under the quality standards that apply to blood banks, which perform HSCT daily with complex manipulations; it is not considered an advanced therapy medicinal product and can therefore be obtained without GMP requirements. The manufacturing of CD45RA− memory T cells is carried out in closed automated systems, similar to clean rooms, that guarantee an aseptic process for administration to the patient.
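The donor-panel coverage estimate mentioned above can be sketched numerically. Treating each donor's HLA coverage as independent is a simplification (real coverages overlap), and all values other than the quoted 93.6% are purely illustrative:

```python
# Back-of-the-envelope estimate of the fraction of a population covered by a
# panel of donors, assuming each donor's HLA coverage is independent
# (a simplification; real per-donor coverages overlap).
def panel_coverage(per_donor_coverage):
    uncovered = 1.0
    for c in per_donor_coverage:
        uncovered *= 1.0 - c
    return 1.0 - uncovered

# 0.936 is the single-donor coverage quoted in the text; the other three
# values are purely illustrative.
print(round(panel_coverage([0.936, 0.5, 0.5, 0.5]), 3))  # 0.992
```

Even with modest coverage from the additional donors, a small panel quickly approaches full population coverage under this simplified model, consistent with the estimate that four well-chosen donors cover almost the whole population.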
These factors make it feasible to create a biobank or stock from the blood of convalescent donors, which would be immediately available "off the shelf" for subsequent outbreaks, increasing the therapeutic options in the current SARS-CoV-2 pandemic. A clinical trial is currently assessing the safety of these cells for patients with moderate to severe COVID-19 (NCT04578210). These cells could provide patients with (1) a pool of SARS-CoV-2-specific T cells that will respond quickly to the infection, (2) a pool of cells for patients with severe disease presenting with lymphopenia, and (3) a pool of memory T cells specific for other pathogens the donors encountered during their lives, which are vital for eliminating the secondary infections that usually develop in patients hospitalized with COVID-19 (Kim et al., 2020; Zhu et al., 2020).

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Hospital Ramón y Cajal, Madrid, Spain. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS
CF, BS, and AP-M designed the study. CF, BP-M, and CM-D performed the in vitro experiments. JV, AB, and FG-S performed the HLA typing and TCR spectratyping. RD and AM performed the non-mobilized apheresis and cryopreservation of the cells. CM-C performed the statistics. CF, BP-M, BS, and AP-M wrote the first draft of the manuscript. All authors revised the manuscript, participated in the interpretation of the data, and approved the submission of the manuscript.
Ecological aspects of a population of Phrynops geoffroanus (Schweigger, 1812) in a semi-arid area of Northeastern Brazil

Research, Society and Development, v. 10, n. 7, e9510715154, 2021 (CC BY 4.0) | ISSN 2525-3409 | DOI: http://dx.doi.org/10.33448/rsd-v10i7.15154

Phrynops geoffroanus is a testudine of the family Chelidae with a wide distribution. However, there are gaps in the knowledge of its biology. This study aimed to characterize demographically and morphometrically a population of Phrynops geoffroanus in an ephemeral water reservoir in a semi-arid area of Paraíba, from April 2016 to March 2017. The individuals were captured manually and with a hoop-net trap. Data on size distribution were described by mean and standard deviation. Size and weight were compared between sexes and capture methods using a MANOVA. Sex ratio was compared between capture methods using Pearson's chi-squared test. Population density and biomass were calculated. The number of animals captured was grouped into two shifts and compared. Throughout the year, 113 individuals of P. geoffroanus were captured in the reservoir, with a population of 43.4% males, 47.8% females, and 8.8% juveniles, a density of at least 41.8 individuals/ha, and a biomass of 33.05 kg/ha. There was no significant relationship between the number of animals captured and the amount of rainfall during the period sampled. Due mainly to the ephemerality of the aquatic environments of the Caatinga and the unpredictability of rainfall in this biome, the populations of aquatic species show large variations in population size and in their biological activities. Further studies are needed to fill several gaps in the knowledge of the natural history of Caatinga testudines.

Introduction
Studies on the population structure of any taxon play an important role in establishing species conservation strategies (Primack, 2012).
Thus, information on the population structure and population size of turtles provides tools to assess the status of various populations (Brito et al., 2009). However, some aspects of turtle life history, such as late sexual maturity and long life cycles, make it difficult to understand their population dynamics (Rueda-Almonacid et al., 2007). Although these studies are important, there are few studies dealing with Caatinga testudine species (Moura et al., 2014; Moura et al., 2015; Rodrigues & Silva, 2015), unlike chelonian species from other regions of Brazil (Fachín-Terán et al., 2003, 2004; Batistella et al., 2008; Brito et al., 2009; Miorando et al., 2015). This biome, which occupies much of northeastern Brazil, is characterized by remarkable seasonality, featuring a prolonged dry season and an extremely irregular rainy season (Moro et al., 2016). As the activity patterns of chelonians are directly associated with precipitation rates and air temperature (Souza, 2004), this irregularity in Caatinga rainfall patterns directly affects the behavior and the way species interact with this environment, thus generating different patterns associated with the geographic distribution of species (Souza & Molina, 2007). Phrynops geoffroanus (Schweigger, 1812) is a species of the family Chelidae, popularly known as the Brazilian terrapin. It has a wide distribution, occurring in the most diverse environments, with influence from the Cerrado, Amazon Rainforest, and Caatinga, and even in highly anthropized and polluted environments (Ferrara et al., 2017). This wide distribution has generated a number of gaps in the knowledge of the biology of this species; it is not known, for example, how this wide distribution may be influencing body size patterns, or even clutch size (Souza & Molina, 2007).
Associated with this lack of knowledge, Costa-Neto & Alves (2010) report the use of this species for medicinal purposes by some communities in Northeastern Brazil, and this species is also used as a food resource by local populations in Paraíba State (Alves et al., 2002). Due to the lack of knowledge about the population patterns of P. geoffroanus in the Caatinga, the objective of this study was to describe the morphological aspects, population structure and density, activity patterns, and sex ratio of P. geoffroanus in a Caatinga ephemeral water reservoir in a semi-arid region of Paraíba.

Study Area
The study was carried out in an area of Caatinga in the interior of the Depression of the State of Paraíba, in the Fazenda Tamanduá Private Natural Heritage Reserve (RPPN) (Figure 1). Fazenda Tamanduá (7.011111° S, 36.369444° W) has a total area of 3,073 ha, with predominantly dense shrubby-arboreal caatinga vegetation (Passos Filho et al., 2015) in good condition, and the predominant soils are eutrophic Litholic soils with rocky outcrops (Embrapa, 2006). The climate is characteristic of semi-arid tropical regions (BSh), with low annual rainfall concentrated in a short period of time (January to May), followed by a long period of drought. The individuals were collected from an ephemeral reservoir used to supply the farm's cattle herd. This reservoir has a total area of 27,352 m² (2.74 ha). The vegetation at the edge of the reservoir is composed of shrubs and aquatic plants.

Methodological Procedures
Specimens were captured from April 2016 to March 2017, with monthly expeditions of 48 hours each. The turtles were captured manually and using a hoop-net trap. These traps were placed on the banks of the reservoir, arranged submerged, and meat and/or fish were used as bait.
The traps were checked daily, with a 4-hour interval between each check, remaining in the water during the entire sampling period. Each captured specimen was weighed and a series of measurements was taken: carapace length (MCL); carapace width (MCW); head width (HW); head length (HL); mouth width (MW); plastron width (MPW); plastron length (MPL); distance between the base of the tail and the cloaca (DTC); anal plastron plate opening (AAP); carapace curvilinear length (CCL); and carapace curvilinear width (CCW). Measurements were obtained using a plastic measuring tape on a millimeter scale. Captured animals were marked according to the method suggested by Cagle (1939) and had their sexes identified according to secondary sexual characteristics such as tail length and plastron concavity (Martins & Souza, 2008; Brito et al., 2009; Forero-Medina et al., 2013; Marques et al., 2013; Moura et al., 2015; Santana et al., 2012; Santana et al., 2016; Perry & Mitchell, 2016). We considered as adults the individuals with more than 120 mm of MCL (Santana et al., 2012); individuals that did not show evident secondary sexual characteristics were considered juveniles. All animal procedures were performed according to international care practices, under the control of the Internal Ethics Committee of the Federal University of Campina Grande (CEUA-UFCG 100/2016). The study was also authorized by the Brazilian Institute of the Environment (ICMBIO/SISBIO, No. 53670-1).

Data Analysis
Data on the size distribution (length and width) of males, females, and juveniles were described by mean and standard deviation. Morphometric characteristics were compared between sexes using a MANOVA test to identify possible differences between adult males and females. Sex ratio was compared between capture methods using Pearson's chi-squared test (Zar, 2010). Population density was calculated as the total number of individuals collected divided by the total area of the reservoir.
Biomass was calculated by summing the mass values of all individuals collected and dividing by the total area of the reservoir. To identify in which period of the day specimens were captured more often, the number of animals captured was grouped into two shifts (6 am to 5.59 pm and 6 pm to 5.59 am) and compared between shifts and sexes using a two-factor ANOVA (using only specimens captured with the hoop-net trap). To assess the relationship between precipitation and captures and recaptures, a simple linear regression was used. Pearson's Chi-square tests, as well as PCA, regression analysis and the t-test, were performed in Past 3.26b software; the two-way ANOVA and MANOVA were performed in SPSS. All shift analyses were performed using only specimens captured with the hoop-net trap. Results and Discussion A total of 113 individuals of P. geoffroanus were captured in the reservoir throughout the year: 43.4% males (n = 49), 47.8% females (n = 54) and 8.8% juveniles (n = 10) (Table 1). Of the total, 47 were captured manually, buried in the mud on the banks of the reservoir as it was drying out. However, the sampling method used did not influence the sex or MCL of the captured individuals (Wilks' Lambda = 0.985; F(4,90) = 0.759; p = 0.471). There was no significant relationship between the number of animals caught and the amount of rainfall (R² = -0.09; F = 0.0016; p = 0.97) during the period sampled (Figure 3). Regarding the time of capture, there was no significant difference between individuals (total or by classes) captured during the day (6 am to 5.59 pm) or at night (6 pm to 5.59 am) (Figure 4). Most of the captured individuals, males, females and juveniles alike, were captured during the day, although there was no significant difference between capture shifts (F(1,13) = 0.198; p = 0.663). Phrynops geoffroanus is a diurnal species and its activity pattern is influenced by air temperature (Souza, 2004).
However, water temperature, especially in warmer areas, can be an important predictor of the activity of this species, as found by Lescano et al. (2008) for Hydromedusa tectifera. This may explain why there was no difference between the number of individuals captured during the day and at night. In natural populations of testudines, differences between the numbers of male and female individuals are common (Gibbons, 1990; Bujes et al., 2011; Rodrigues & Silva, 2015). However, the Phrynops geoffroanus population studied did not show a significant deviation in sex ratio. Similar results were recorded for this species by Moura et al. (2015) in another area of Caatinga and by Souza & Abe (2001) in southeastern Brazil. Several authors argue that these differences in the proportion of males and females are usually associated with inequality in activity patterns between the sexes, differences in mortality rates between males and females, changes in egg incubation temperature influencing sex determination in some species, or the use of biased sampling techniques (Bury, 1979; Gibbons, 1990; Vogt, 1980; Fachin-Téran et al., 2003; Pezzuti et al., 2010; Famelli et al., 2011). However, Rueda-Amonacid et al. (2007) mention that a 1:1 sex ratio is a striking characteristic of P. geoffroanus populations and that the species has genetic sex determination. The sampled turtle population is composed mainly of adult individuals. According to Conway-Gomméz (2007), testudines living near human communities tend to suffer greater hunting pressure. The distribution of size classes in the sampled population suggests that hunting may not be a depressing element in the study population, but related studies are needed for confirmation. A low proportion of juveniles was also recorded for P. tuberosus in the work conducted by Rodrigues & Silva (2015).
These authors suggest that factors such as low adult mortality, high investment in individual growth, and high predation rates on juveniles (Verdon & Donnely, 2005) may explain this distribution. In southeastern Brazil, P. geoffroanus nests between February and August (Rueda-Amonacid et al., 2007), and in captivity neonates hatch within 115-186 days (Lisboa et al., 2004). However, Vogt (2008) suggests that the incubation period in natural environments is controlled by ambient temperature and humidity, coordinating diapause and embryonic aestivation (Doody et al., 2001) and allowing a shorter incubation time. Souza (2004) mentions that hatchlings of P. geoffroanus emerge during the rainy season in southeastern Brazil (December and January). The favourable environmental conditions in this period (a greater number of pools, small lakes and temporary rivers) may explain this synchrony between nesting and hatching (Souza, 2004). For species that breed in the Caatinga, strategies like these may be important reproductive mechanisms, allowing eggs to survive warmer and drier environments by accelerating their development. Although this may explain the appearance of young individuals at the end of the dry season and the beginning of the rainy season in the studied area, information on the reproduction of this species in Caatinga areas is scarce. Sexual dimorphism can be understood as the result of evolutionary pressures exerted differently on the sexes. Morphometric analyses confirmed the sexual dimorphism reported for the species by Ferrara et al. (2017). PCA and MANOVA showed differences in carapace and plastron size and curvature, indicating a sexual dimorphism towards females that are larger and heavier than males, whereas males have larger tails than females. This size difference between the sexes is generally attributed to sexual selection (Perry & Mitchell, 2016).
The presence of larger females may reflect fecundity selection, as larger females may have greater reproductive potential, either through larger clutches or through the ability to breed more frequently within the same breeding season (Stephens & Wiens, 2009). Alternatively, this difference may be related to the lack of fighting territory for males (Berry & Shine, 1980), favouring smaller and more agile males (Perry & Mitchell, 2016). Tail length and width are the most constant characteristics for differentiating males and females among turtles (Rueda-Amonacid et al., 2007). Males have larger tails than females, probably to house the penis; larger tails may also provide reproductive advantages for males during copulation (Moll, 1980). The estimated density and biomass of Phrynops geoffroanus at Fazenda Tamanduá suggest that this species plays an important role in the aquatic ecosystems of the area studied. The population density and biomass recorded were higher than those found, also in the Caatinga, by Moura et al. (2015) (only 9 ind/ha); in that study, however, three species were recorded in the area (Mesoclemmys tuberculata, Kinosternon scorpioides and Phrynops geoffroanus). As reported for Hydromedusa tectifera in Argentina (Lescano et al., 2008), at Fazenda Tamanduá P. geoffroanus is the only turtle species recorded in the reservoir studied. This lack of competitors may allow a larger population size in the study area, as interspecific interactions can act as important population drivers in sympatric species (Dreslik et al., 2005; Lescano et al., 2008). Furthermore, the low recapture rate suggests that the population of this species at Fazenda Tamanduá may be larger than estimated. On the other hand, in Caatinga areas the temporality of water bodies can influence the density of aquatic species.
Although there are other reservoirs in the farm area, the water body studied is the one with the largest capacity and, therefore, the longest hydroperiod. In general, hydroperiod affects the stability and complexity of the environment (Werner et al., 2007), providing a greater supply of refugia and food, and may thus contribute to juvenile development and adult survival of P. geoffroanus. It is possible that, as the smaller water bodies dried up, individuals left them in search of new favourable environments, changing the population density and biomass of the sampled reservoir, especially during the drier periods of the year. Owing mainly to the ephemerality of Caatinga aquatic environments and the unpredictability of rainfall in this biome, populations of aquatic species show large variations in size and in their biological activities. The burrowing-in-mud behaviour found in this study raises the hypothesis that these animals are aestivating. Aestivation behaviour has been described for Hydromedusa tectifera, another species in the family Chelidae; however, further studies are needed to test this hypothesis. Final considerations The population studied shows the sexual dimorphism and sex ratio expected for the species. However, this is only a one-off study. Detailed long-term studies are fundamental to understanding variations in these populations, especially habitat use. Studies addressing reproductive strategies and other ecological aspects of habitat use in free-living individuals can fill several gaps in the natural history of Caatinga testudines.
Awareness of Aromatherapy in Healing Stress and Body Pain among Dental Students Aromatherapy is a style of medical practice in which essential oils or other scents are applied or inhaled directly to attain therapeutic benefit. The oils are obtained either by distillation with water or steam, from the epicarp of citrus fruits by a mechanical process, or by dry distillation. The mechanism of action in aromatherapy is unknown, but recent studies have shown that aromatherapy is beneficial for a number of health problems. A range of essential oils possess various degrees of antimicrobial activity and are believed to have antiviral, nematicidal, antifungal, insecticidal and antioxidant properties. Hence this treatment is also known as essential oil therapy. In the current COVID-19 situation, stress is prevalent due to lockdown; this mental stress can adversely affect the physical and mental well-being of every individual, and aromatherapy may be used to overcome it. The main aim of the study is to assess awareness of the use of aromatherapy in healing stress and body pain. Original Research Article Muralidharan et al.; JPRI, 32(20): 69-78, 2020; Article no.JPRI.59705 This was a cross-sectional study conducted among dental students through a questionnaire. The questionnaire consisted of 10 questions and was circulated among the student population. The statistical analysis was done with SPSS software version 20. The results indicate that most of the participants are aware of aromatherapy and found it beneficial in healing stress and body pain. INTRODUCTION Aromatherapy, also known as essential oil therapy, is a holistic healing treatment that uses natural plant extracts to promote health and well-being.
This treatment involves the medicinal use of aromatic essential oils to improve the health of the body, mind and spirit, as it enhances both physical and emotional health. Aromatherapy is most commonly applied topically or by inhalation. More than 40 plant derivatives have been identified for therapeutic use; lavender, eucalyptus, rosemary, chamomile and peppermint are the most frequently utilized extracts. Nowadays, the use of complementary therapies alongside mainstream medicine has gained momentum. Aromatherapy is one such complementary therapy, using essential oils as the major therapeutic agents to treat several diseases. The essential (volatile) oils are extracted from the flowers, bark, stems, leaves, roots, fruits and other parts of the plant by various methods. The practice came into existence after scientists deciphered the antiseptic and skin-permeability properties of essential oils [1]. Inhalation, local application and baths are the main methods used in aromatherapy to deliver these oils through the human skin surface. Once the oils are within the system, they act at the site of malfunction or at the affected area. This sort of therapy is used, in various permutations and combinations, to provide relief from numerous ailments such as depression, indigestion, headache, insomnia, muscular pain, respiratory problems, skin ailments, swollen joints and urinary complications [2]. Essential oils are found to be more beneficial when other aspects of life and diet are given due consideration. Because aromatherapy is obtained from natural sources, it is generally considered not harmful to the human population. Aromatherapy is used in various forms for medicinal purposes, and the medical field has researched its medicinal value. There are also other uses of aromatherapy, such as fighting infection.
One important reported effect is that aromatherapy can lower blood pressure, which is attributed to the presence of antioxidant compounds [3]; studies in humans have also shown reduced blood pressure with the use of aromatherapy. Aromatherapy applications include massage, topical application and inhalation. However, users should remember that "natural" products are also chemicals, and they can be hazardous if used in the wrong way [3,4]. It is important to follow the advice of a trained professional when using essential oils. The present pandemic, COVID-19, has had a great impact both physically and mentally, and the prolonged lockdown creates stress in people's minds. Previously our team had conducted numerous clinical trials [5][6][7][8][9][10][11], lab animal studies [12][13][14][15][16], in vitro studies [17][18][19] and reviews on upcoming topics. The idea for this survey stemmed from the current interest in our community. Hence the main aim of the survey is to create awareness of aromatherapy in reducing stress caused by lockdown, and of its therapeutic use in reducing body pain [1]. MATERIALS AND METHODS This was a cross-sectional study conducted through a questionnaire. The questionnaire consisted of 10 questions and was circulated among the first-year students of Saveetha Dental College. The sample size of the study was 100 and the results were tabulated accordingly. The survey was conducted via Google Forms. Chi-square analysis was done with SPSS software version 20. The obtained results are tabulated and represented in the form of pie charts and bar graphs. RESULTS AND DISCUSSION Pie chart 1 represents the awareness of aromatherapy among participants: 81.82% were aware of aromatherapy and 18.18% were not.
In Pie chart 2, 70.71% of the participants responded that they had previously gone for aromatherapy and 29.29% had not. In Pie chart 3, 62.63% of the participants agreed that aromatherapy can heal body pain and 37.37% did not; it is thus evident that most of them agree that aromatherapy can heal body pain. Pie chart 4 represents the number of participants who agree that aromatherapy can relieve stress and headache: 74.75% agreed and 25.26% did not. Pie chart 5 represents the participants' knowledge of the side effects of using aroma oils: 78.79% agreed that direct use of intense aroma oils can cause side effects and 21.21% did not. In Pie chart 6, 54.55% of the participants preferred to go for aromatherapy and 45.45% did not. In the present study, 74.75% of the participants agreed that aromatherapy can relieve stress and headache and 25.26% did not. According to the 2014 American College Health Association National College Health Assessment II, 57.1% of students reported a more than average level of stress, while 30% of freshman females at the University of Montana reported an average level of stress and 12.9% a tremendous level of stress in the last 12 months. In addition, 31.4% of UM freshman females reported earning a lower grade in an exam or project due to stress [20]. In previous studies, 56% of the participants responded that aromatherapy gave a quicker response, while 44% preferred antidepressant pills [21]. According to other studies, there is an increasing trend of using this therapy in the treatment of sleep disorders and cancer [22,23]. Essential oils have gained importance in cosmetic, therapeutic, fragrant, aromatic and spiritual uses [24]. In the present study, 81.82% of the participants were aware of aromatherapy and 18.18% were not. In previous studies, only 7.1% of medical students and 15.1% of nursing students reported having enough knowledge about aromatherapy, while 88.9% of medical students and 96.3% of nursing students considered it useful or were undecided [1]. In the present study, 62.63% agreed that aromatherapy is useful in healing body pain and 37.37% did not. In a study conducted in Switzerland, an estimated 10.7% of patients with chronic low back pain used aromatherapy and rated it 4.2/10 for usefulness [25]. In the present study, 70.71% had previously gone for aromatherapy and 29.29% had not. In a study conducted in the United States, an estimated 62% of adults had used some form of complementary or alternative therapy in the previous 12 months [26]. Bar graph: association between gender and awareness of aromatherapy among participants. The X axis represents gender and the Y axis the number of participants; the majority of males (41.41%) were aware of aromatherapy. Blue denotes yes and red denotes no. The Chi-square association was statistically significant (Pearson chi-square value 103.120, p = 0). Bar graph: association between gender and agreement that aromatherapy can heal body pain. The X axis represents gender and the Y axis the number of participants; the majority of males (36.36%) agreed. Blue denotes yes and red denotes no. The Chi-square association was reported as statistically insignificant. Bar graph: association between gender and agreement that aromatherapy can relieve stress and headache. The X axis represents gender and the Y axis the number of participants; the majority of males (39.39%) agreed. Blue denotes yes and red denotes no. The Chi-square association was reported as statistically insignificant. Fig. 10. Bar graph: association between gender and knowledge of the side effects of using aroma oils. The X axis represents gender and the Y axis the number of participants; the majority of males (42.42%) responded that direct use of intense aroma oils can cause side effects. Blue denotes yes and red denotes no. The Chi-square association was statistically significant (Pearson chi-square value 100.014, p = 0). CONCLUSION The present study concludes that many of the participants were aware of the practice of aromatherapy. Hence, aromatherapy should be practised and used effectively to overcome both the physical and mental stress caused during the lockdown due to the COVID-19 pandemic. This technique should also be made known to all people suffering from mental stress, as it can be substituted for other drugs and psychiatric therapies. CONSENT As per international or university standards, patients' written consent has been collected and preserved by the authors. ETHICAL APPROVAL As per international or university standards, written ethical approval has been collected and preserved by the authors. ACKNOWLEDGEMENT I sincerely thank Saveetha Dental College for their constant support to carry out this study.
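The gender associations reported in the bar graphs rely on Pearson's chi-square test on 2×2 contingency tables (gender × yes/no response). A minimal sketch of the statistic, using illustrative counts rather than the study's raw data (which are not reported here):

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for a 2x2 contingency table:
                 yes  no
        male      a    b
        female    c    d
    """
    n = a + b + c + d
    margins = (a + b) * (c + d) * (a + c) * (b + d)
    if margins == 0:
        raise ValueError("degenerate table: a zero row or column total")
    return n * (a * d - b * c) ** 2 / margins

# Illustrative counts only (the paper reports percentages, not raw cells).
print(chi_square_2x2(30, 10, 10, 30))  # strongly associated table -> 20.0
```

The resulting statistic is compared against the chi-square distribution with one degree of freedom to obtain the p-value reported alongside each bar graph.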
Surface Modification of Gentamicin-loaded Polylactic Acid (PLA) Microsphere Using Double Emulsion and Solvent Evaporation: Effect on Protein Adsorption and Drug Release Behaviour Polylactic acid (PLA) microspheres as drug carriers have been extensively investigated for drug delivery systems. However, because of the limitation of surface hydrophobicity, surface modifications have been studied to improve their utilisation in tissue engineering applications. In the present study, PLA microspheres loaded with gentamicin (GENMS) were modified to enhance hydrophilicity by surface treatment with the addition of ethanol. Ethanol was applied as a co-treating medium in alkaline hydrolysis with NaOH to assist the hydroxide nucleophilic attack on the ester bonds of PLA. Alkaline concentrations of NaOH and NaOH/ethanol were set at 0.15 M, 0.25 M and 0.35 M. After surface treatment, the hydrophilicity of the GENMS surface improved significantly: the contact angle was reduced by about 23.1% and 26.8% for the NaOH and NaOH/ethanol modifications, respectively, compared with the neat GENMS. The pronounced surface roughness produced by the NaOH/ethanol modification improved the hydrophilicity of GENMS. As a result, protein adsorption on the GENMS surface treated with NaOH/ethanol was lower than with the NaOH modification. Moreover, the NaOH/ethanol modification gave the highest encapsulation efficiency, demonstrating the advantage of ethanol co-treatment, and a greater drug release than the NaOH modification. INTRODUCTION Local antibiotic release aims to prevent implant-associated infections by reducing bacterial adherence at the implantation site. The use of carriers for local antibiotic release is essential to deliver a predetermined amount of drug in a predictable manner over a specified time. Microspheres have been extensively studied over the past few decades as targeted drug delivery devices in tissue engineering applications.
The use of biocompatible and biodegradable polymers as microspheres is widespread in drug delivery systems. [1][2][3] The most commonly used biodegradable polymers are polylactic acid (PLA), poly(D,L-lactide-co-glycolide) (PLGA) and poly(ε-caprolactone) (PCL). 4,5 Many studies indicate that PLA formulations containing therapeutic agents exhibit no adverse tissue reaction, either locally or systemically, when used in therapeutic applications. 6,7 Generally, biodegradable polymeric carriers can be degraded via chemical hydrolysis and are easily resorbed or eliminated. PLA is an aliphatic polyester. According to Da Silva et al., PLA is considered biocompatible since no toxic or carcinogenic substances are released into the biological environment during its bulk degradation. 8 However, surface biocompatibility is also very important, since microspheres operate at the interface between the implanted biomaterial and the host environment. It is well known that the surface of PLA is relatively hydrophobic, making it ineffective at interacting specifically with cells. 4 It also does not possess any functional groups for the attachment of biologically active molecules. These shortcomings restrict the application of PLA in bone tissue engineering. Hydrophobic surfaces show higher adsorption and denaturation of proteins at the surface, leading to the exposure of new epitopes that are believed to be a cause of immune reactions towards hydrophobic materials. 9 On the other hand, a highly hydrophilic surface may repel protein molecules and inhibit protein adsorption. Hydrophilic surfaces are therefore preferred for microspheres aimed at cell interaction at the host implantation site. Many surface modification techniques, such as silanisation, radiation and photo-grafting techniques, and alkali hydrolysis treatment, have been developed to improve the cell affinity of polymers.
10-12 Among them, alkali hydrolysis treatment is a feasible and convenient method. After surface hydrolysis of an aliphatic polyester, hydrophilic carboxyl and hydroxyl groups are produced by cleavage of the ester bonds. However, strong alkali treatment is accompanied by extensive bulk degradation of the polyester, and it has been shown that a mild alkali treatment at concentrations of 0.5 M and above could not break the ester bonds effectively in a short time. 13 A previous study reported that a mixture of sodium hydroxide (NaOH) and acetonitrile can be applied to modify the surface properties of poly(ethylene terephthalate) films and membranes, with acetonitrile used as the co-treating medium. 14 However, acetonitrile is expensive and toxic, and its environmental pollution cannot be neglected. A study by Yang et al. showed that the hydrophilicity of poly(L-lactic acid) (PLLA) was improved, as indicated by a contact angle lowered by about 39°, after treatment with NaOH with the addition of ethanol. 13 In addition, changes in the bulk and surface of the microsphere caused by hydrolysis will not only affect the bulk physical properties of the microsphere, but also release the encapsulated drug via diffusion. 7 Considering the mild NaOH concentrations used in previous work, the present study aims to use low concentrations of aqueous NaOH, and of a NaOH/ethanol mixture, to modify the surface properties of gentamicin (GEN)-loaded PLA microspheres (GENMS). Here, non-toxic and cheap ethanol was used as the co-treating medium. GEN was chosen because it is among the most common antibiotics for bone replacement and provides a wide antibacterial spectrum. PLA microspheres were fabricated using the single emulsion and solvent evaporation (ESE) technique, while GENMS were produced by the double ESE method, since this technique can produce microspheres with a controlled-release profile using different biocompatible water-insoluble polymers.
15 The changes in surface properties and morphology were investigated by scanning electron microscopy (SEM), water contact angle and surface energy measurements, protein adsorption and drug release profiling. Materials PLA microspheres were fabricated from PLA pellets purchased from NatureWorks. Dichloromethane (DCM) was purchased from Merck Millipore and poly(vinyl alcohol) (PVA, 80% hydrolysed) was acquired from Sigma Aldrich. Ethanol (95%), sodium hydroxide (NaOH) and hydrochloric acid (HCl, 37%, fuming) from Sigma Aldrich were used to modify the surface of the fabricated drug-loaded PLA microspheres. Gentamicin (GEN) reagent solution (10 mg ml⁻¹), the encapsulated drug, was purchased from Gibco, Life Technologies. Distilled water was used as the liquid medium. Bovine serum albumin (BSA, A2058-1G, 40 mg ml⁻¹, water soluble) for the protein adsorption test was purchased from Sigma Aldrich. Fabrication of Gentamicin-loaded PLA Microspheres PLA microspheres were fabricated using the single ESE technique, while for GENMS the double ESE was used. In this study, a dispersed-phase volume ratio of 1:3 (PLA:PVA) was used in fabricating the PLA microspheres. First, 2.7 g of PLA pellets was dissolved in 30 ml DCM, followed by dispersion of 1 ml GEN solution (concentration 10 mg ml⁻¹, i.e., 10,000 ppm). This solution was subjected to vigorous homogenisation to yield the primary emulsion, which was immediately emulsified into 90 ml PVA solution. The mixture was stirred at ~1250 rpm for 3 min to form the secondary emulsion at room temperature. The stirrer speed was then decreased to ~250 rpm overnight to allow the evaporation of DCM. The PLA particles formed at the bottom of the flask were washed, filtered and dried overnight at room temperature before the fabricated microspheres were collected. Figure 1 shows the double ESE process used to produce GENMS.
Surface Modification by Alkaline Hydrolysis

After preparation of the PLA microspheres, surface hydrolysis treatment was performed to modify the surface and introduce functionality on the PLA microspheres. The microspheres were immersed in NaOH or NaOH/ethanol solutions for 24 h at concentrations of 0.15 M, 0.25 M and 0.35 M. After immersion in NaOH or NaOH/ethanol, neutralisation was done by immersing the treated microspheres in HCl for 2 h, followed by repeated washing with 500 ml distilled water before drying overnight. In this study, unmodified PLA microspheres are denoted as neat GENMS while modified PLA microspheres are denoted as modified GENMS (with NaOH or NaOH/ethanol).

Protein Adsorption on Neat GENMS and Modified GENMS using BSA

The protein solutions were prepared by directly dissolving BSA in deionised water at pH 7.4. The prepared BSA concentration was 0.5 mg ml-1. Adsorption analyses were carried out by contacting 0.08 g of microspheres (neat GENMS or modified GENMS) with 10 ml of 0.5 mg ml-1 BSA solution in a glass vial. After 40 min, when the microspheres had settled to the bottom of the vial, the initial concentration (C_i) of each sample was measured using UV-Vis at 279 nm, the BSA absorbance wavelength. The mixtures were then left for 24 h to reach the equilibrium concentration (C_e). The concentration readings were based on the standard curve shown in Figure 2. The adsorption amount (q, mg g-1) was calculated based on Equation 1: 16

q = (C_i − C_e) V / W    (1)

where C_i and C_e (mg ml-1) are the initial concentration of protein and the concentration of protein at adsorption equilibrium, respectively, V (ml) is the volume of protein solution and W (g) is the weight of microspheres.

Percentage of Encapsulation Efficiency and Drug Loading of Neat GENMS and Modified GENMS

In order to determine the encapsulation efficiency, 40 mg of gentamicin-loaded PLA microspheres was fully degraded in 5 ml of 1 M NaOH solution.
The samples were left overnight until fully degraded, and ultraviolet-visible (UV-Vis) spectroscopy was conducted to determine the percentage encapsulation efficiency and drug loading of the GEN. By scanning the absorbance intensity of GEN at a wavelength of 195 nm, a standard curve was plotted for known concentrations between 10-400 ppm (Figure 3). The EE% and DL% of the GEN were calculated using Equations 2, 3 and 4.

Drug Release Assessment

The drug release of neat and modified GENMS was measured by dispersing 30 mg of PLA microspheres in 10 ml of 0.1 M PBS in a glass vial. The glass vials were then placed in a shaker at 60 rpm at a constant temperature of 37°C. At pre-determined time intervals, aliquots of 3 ml were extracted from each sample and replenished with fresh PBS solution to maintain the total volume of 10 ml. The level of GEN in the elution was detected by UV-Vis spectrophotometer at a wavelength of 195 nm.

Morphology

Before observation, samples were coated with gold (Au). The surface morphology of the PLA microspheres before and after surface modification was evaluated using a scanning electron microscope (SEM, Zeiss Supra 55VP, Germany).

Contact angle measurement

The contact angle, θ, is a quantitative measure of the wetting of a solid by a liquid. This test was used to determine the hydrophobicity or hydrophilicity of the neat GENMS and modified GENMS using a ramé-hart instrument with DROPimage software. Distilled water was used as the contact medium.

RESULTS AND DISCUSSION

Figure 4 shows the bulk shape morphology of neat GENMS fabricated by the double ESE technique. The images show that the method is able to produce almost perfectly spherical microspheres with a uniform surface morphology. The wettability of a solid surface is usually expressed by the contact angle and surface energy, and it is closely related to the surface morphology.
17 Water contact angle measurements quantify the hydrophobicity or hydrophilicity of neat GENMS and modified GENMS, with more hydrophilic GENMS having smaller water contact angles. Figure 6 shows the drop profiles of water on the surfaces of GENMS modified with NaOH and NaOH/ethanol, with neat GENMS used as a reference. Table 1 presents the contact angle measurements and surface energies of the modified GENMS. GENMS modified with NaOH/ethanol showed a lower contact angle (more hydrophilic) than those modified with NaOH. The contact angles were reduced by 26.8% and 23.1% after treatment with NaOH and NaOH/ethanol, respectively, compared to neat GENMS. This indicates that the hydrophilicity of GENMS was enhanced by treatment with the NaOH/ethanol mixture through modification of the surface, increasing the roughness and introducing pores; ethanol was found to assist the hydroxide nucleophilic attack on PLA's ester bonds. 13 The lower water contact angle of GENMS modified with NaOH/ethanol was also supported by their higher surface energy (51-58 mJ m-2) compared to those modified with NaOH (49-52 mJ m-2). The surface energy of all modified GENMS increased by 18.2% to 39.2% compared to the neat GENMS. GENMS modified with NaOH/ethanol, and with higher concentrations of both NaOH and NaOH/ethanol (from 0.15 M to 0.35 M), showed lower contact angles owing to the enrichment of hydrophilic polar hydroxyl (OH) and carboxylic acid (COOH) terminal groups. In addition, the improvement in surface hydrophilicity and surface energy may be attributed to the increase in surface roughness, as discussed in the morphology section. Thus, the addition of ethanol in the alkaline hydrolysis treatment improved the hydrophilicity of the GENMS surfaces. Figure 7 shows the results of protein adsorption on neat GENMS and modified GENMS.
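The adsorbed amounts underlying Figure 7 follow directly from Equation 1. As a numerical illustration (the equilibrium concentration used here is hypothetical, chosen only to show the arithmetic; the solution volume and microsphere mass are the study's experimental values):

```python
def adsorption_amount(c_i, c_e, volume_ml, weight_g):
    """Protein adsorbed per gram of microspheres (Equation 1), q in mg/g."""
    return (c_i - c_e) * volume_ml / weight_g

# Study conditions: 10 ml of 0.5 mg/ml BSA contacted with 0.08 g microspheres.
# If the equilibrium concentration dropped to a hypothetical 0.42 mg/ml:
q = adsorption_amount(0.5, 0.42, 10.0, 0.08)
print(round(q, 2))  # 10.0 mg BSA per g of microspheres
```

Because the difference C_i − C_e is multiplied by the full solution volume and divided by the microsphere mass, q is sensitive to small concentration changes when the microsphere loading is low.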
It can be observed that GENMS modified with NaOH showed a higher ability to bind BSA molecules than GENMS modified with NaOH/ethanol, while neat GENMS had the highest protein adsorption. The lower hydrophilicity of GENMS modified with NaOH played a major role in the greater protein adsorption at the interface. It is generally understood that hydrophilic surfaces are more resistant to proteins than hydrophobic surfaces. 9,18 Therefore, the greater hydrophilicity presented by GENMS modified with NaOH/ethanol shows that the introduced functional groups repelled protein adsorption, and consequently a low degree of denaturation was obtained.

The percentage encapsulation efficiency and drug loading of GEN in GENMS were determined from the standard curve of gentamicin concentration (10-400 ppm) shown in Figure 3. Table 2 presents the encapsulation efficiency (%) and drug loading (%) of neat GENMS and modified GENMS. From the calculation, the average encapsulation efficiency and drug loading of GENMS were 13.970% ± 0.311 and 0.028% ± 0.001, respectively. Encapsulation efficiency is defined as the percentage ratio of the mass of drug encapsulated to the mass of drug loaded into the emulsion. The drug loading is the amount of drug contained in a given mass of microspheres. Since GEN is a very hydrophilic drug, it tends to escape into the water phase when microspheres are fabricated by the ESE method, which probably made the obtained encapsulation low. 19 Overall, higher encapsulation efficiency and drug loading were observed for GENMS modified with NaOH/ethanol compared to GENMS modified with NaOH. As expected, both modifications showed lower encapsulation efficiency and drug loading than neat GENMS. The differences in encapsulation efficiency and drug loading between modification with NaOH/ethanol and with NaOH are shown as percentage values in Table 2.
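The text defines encapsulation efficiency and drug loading as ratios but does not reproduce Equations 2-4; the sketch below assumes the standard percentage definitions and uses masses consistent with the reported averages (the encapsulated and drug masses are back-calculated for illustration, not measured values):

```python
def encapsulation_efficiency(mass_encapsulated_mg, mass_loaded_mg):
    """EE%: encapsulated drug as a percentage of the drug loaded into the emulsion."""
    return mass_encapsulated_mg / mass_loaded_mg * 100.0

def drug_loading(mass_drug_mg, mass_microspheres_mg):
    """DL%: drug mass as a percentage of the microsphere mass."""
    return mass_drug_mg / mass_microspheres_mg * 100.0

# 1 ml of 10 mg/ml GEN gives 10 mg loaded; at the reported average EE of
# ~13.97%, roughly 1.397 mg would be encapsulated:
print(round(encapsulation_efficiency(1.397, 10.0), 2))  # 13.97
# ~0.0112 mg of GEN in a 40 mg microsphere sample matches the reported
# average DL of ~0.028%:
print(round(drug_loading(0.0112, 40.0), 3))  # 0.028
```

The large gap between EE% and DL% reflects the fact that the drug is a tiny fraction of the total microsphere mass even when encapsulation is moderately efficient.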
Interestingly, even though greater hydrophilicity was created by NaOH/ethanol during hydrolysis, the encapsulation efficiencies of the modified GENMS were not reduced. For example, the encapsulation efficiency and drug loading of GENMS treated with 0.35 M NaOH/ethanol increased by 3.98% and 4.37%, respectively, compared to NaOH modification. This is probably because hydrolysis with 0.15-0.35 M NaOH/ethanol changed the GENMS by a surface erosion reaction without bulk degradation. This is supported by the rougher surfaces of GENMS modified with NaOH/ethanol shown in Figure 5, which demonstrate the occurrence of surface erosion. It is well known that bulk degradation of microspheres during alkaline hydrolysis is not desirable in surface modification because a certain amount of GEN might be lost in the process. 7 NaOH/ethanol provided a rougher GENMS surface, which increased the encapsulation efficiency. (*The percentage is obtained from the difference between GENMS modified with NaOH/ethanol and with NaOH; see Table 2.)

In the present study, low alkaline concentrations of 0.15 M to 0.35 M can be applied to avoid severe bulk degradation while improving hydrophilicity and cell affinity. 13 Degradation can be very fast in highly basic and highly acidic media (compared to neutral conditions). 20 Therefore, surface modification using NaOH/ethanol offers an advantage, since higher encapsulation efficiency is a desired goal for controlled drug release studies. 21

The cumulative amount of GEN released, as measured by UV-Vis spectrophotometry for samples with different encapsulation efficiencies, is shown in Figure 8. The GEN release in PBS solution was measured over 10 days, with the release behaviour of neat GENMS used as a comparison. Modification at the higher concentration of 0.35 M, for both NaOH and NaOH/ethanol, showed the lowest initial burst release, 12%-19% within 7 h. This is due to the low encapsulation efficiency, as drug had been lost during the hydrolysis process.
It might also be that the surface-associated GEN was diminished at this concentration of alkaline hydrolysis treatment. Even though a high initial burst release rate may play unfavourable roles, i.e., it may lead to drug concentrations near or above the toxic level, or drug may be excreted without being effectively utilised and thus wasted, it still has favourable aspects. 22 An initial burst release can provide immediate relief, such as at the beginning of wound healing, followed by prolonged release to promote gradual healing, and it has the ability to localise delivery to the specific site of implantation. 24

In line with the higher encapsulation efficiency of GENMS modified with NaOH/ethanol, the GEN release rate increased compared with GENMS modified with NaOH. Additionally, the greater hydrophilicity of GENMS modified with NaOH/ethanol also contributed to the increased GEN release, 25 because the diffusion path at the surface that the drug molecules have to cross had been reduced. For modification at both 0.15 M and 0.25 M, GEN release from 0.15 M NaOH/ethanol and 0.25 M NaOH/ethanol was higher than from neat GENMS and from 0.15 M NaOH and 0.25 M NaOH, respectively. This is probably because surface-associated GEN diffused more easily than in the 0.15 M and 0.25 M NaOH cases, assisted by the higher encapsulation efficiency. In contrast, at 0.35 M, both NaOH/ethanol and NaOH modifications showed release rates lower than neat GENMS. This might be because at this concentration the encapsulated GEN had diffused out of the modified GENMS during the hydrolysis process. Therefore, alkaline hydrolysis does not merely improve hydrophilicity; co-treatment with ethanol also maximised the encapsulation efficiency and is thus beneficial in controlling the release profile. It can be suggested that the optimal concentration for alkaline hydrolysis is 0.25 M NaOH/ethanol with 24 h of treatment.
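The cumulative release profiles discussed above are computed from the sampled concentrations. The paper does not state the correction formula explicitly, but since each 3 ml aliquot removes drug that would otherwise remain in the 10 ml vial, a common mass-balance correction for this sampling scheme can be sketched as follows (the concentration readings are hypothetical):

```python
def cumulative_release(concentrations_mg_ml, v_total_ml=10.0, v_sample_ml=3.0):
    """Cumulative drug mass released (mg) at each sampling point,
    correcting for the drug removed in earlier 3-ml aliquots."""
    released = []
    withdrawn = 0.0  # drug mass removed in previous aliquots
    for c in concentrations_mg_ml:
        released.append(c * v_total_ml + withdrawn)
        withdrawn += c * v_sample_ml
    return released

# Hypothetical measured GEN concentrations at three sampling times:
profile = cumulative_release([0.010, 0.015, 0.018])
```

Without this correction, the cumulative release would be systematically underestimated at later time points, since each withdrawal dilutes the remaining medium with fresh PBS.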
CONCLUSION

The introduction of ethanol into the alkaline treatment assisted the hydrolysis using NaOH. Ethanol acts as a co-treating medium in 0.15 M, 0.25 M and 0.35 M NaOH. The present study therefore concludes that:

(a) Ethanol in the NaOH treatment facilitated the hydroxide nucleophilic attack on the ester bonds and avoided severe bulk degradation.

(b) The surface roughness of GENMS modified with NaOH/ethanol led to an improvement in surface hydrophilicity, with a 4% reduction in contact angle compared to NaOH modification.

(c) The hydrophilicity produced by NaOH/ethanol contributed to a lower degree of protein adsorption on the GENMS surfaces compared to NaOH modification.

(d) The encapsulated drug was not reduced even though the hydrophilicity was improved by NaOH/ethanol. The encapsulation efficiencies increased by up to 6% compared to modification with NaOH.

(e) 0.25 M NaOH/ethanol is suggested as a suitable mixture for alkaline treatment owing to its higher encapsulation efficiency (%) and drug loading (%) compared to 0.25 M NaOH, as well as its greater drug release rate.
Prematurity and the Risk of Development of Childhood Obesity: Piecing Together the Pathophysiological Puzzle. A Literature Review

One of the most devastating public health challenges of the twenty-first century is childhood obesity, and its prevalence is growing at a frightening rate. Premature infants have a greater likelihood of childhood obesity at age six to 16 compared to term infants. This study aims to explore the underlying mechanisms of childhood obesity in this high-risk group. There are most likely multiple interconnected and mutually reinforcing mechanisms that put this vulnerable population at risk of childhood obesity. Inflammation is a possible root cause. Prenatal causes include epigenetic changes as well as placental inflammation. Disturbances in hormonal pathways and elevated levels of serum bilirubin are possible explanations. Furthermore, preventable factors in the postnatal period were identified, such as weight gain and exclusive breastfeeding. The prevalence of childhood obesity in preterm infants is high; thus, it is essential to understand the pathophysiology and address any preventable factors to decrease this disease burden.

Introduction And Background

Annually, 15 million babies are born prematurely, defined by the World Health Organization (WHO) as born before 37 completed weeks of gestation. This number equates to more than one in 10 babies worldwide. Alarmingly, this number is rising. According to the WHO, the rate of preterm deliveries ranges from 5% to 18%. The prevalence is approximately 12% in lower-income countries and 9% in higher-income countries [1]. The multiple causes of premature birth can be found in Table 1. Despite the advances in medical technology, which have increased the survival of preterm infants, many health problems are associated with this group of newborns. These problems include metabolic disease and obesity that continue well into adulthood [2].
The morbidity of obesity-related conditions and their medical treatment in later life contribute to high health care costs [5]. There is a higher prevalence of childhood obesity in preterm infants [2], emphasizing the importance of understanding the preventable mechanisms that can be addressed to limit the spread of this rising public health crisis. There are many factors attributable to the development of childhood obesity; however, this article will explore the link between premature birth and the increased prevalence of childhood obesity in this population.

Previous studies have found that obesity is a state of chronic low-grade inflammation [6]. A large study including 882 premature infants concluded that neonatal inflammation preceded the onset of obesity, suggesting that inflammation can contribute to the development of obesity [7]. The initiation of metabolic inflammation is thought to occur as early as the prenatal period, which can be influenced by multiple factors from either the maternal or paternal environment [8]. McEwen and Seeman emphasized that stress can alter the hypothalamic-pituitary-adrenal (HPA) axis (Figure 1). It is a primary mechanism in adverse health consequences such as obesity in later life, due to increased circulating stress hormones such as cortisol [9].

FIGURE 1: Hypothalamic-pituitary-adrenal axis dysfunction promoting obesity. Any additional stress impacts the HPA axis and causes a rise in cortisol, which promotes inflammation [9]. CRH: corticotropin-releasing hormone; ACTH: adrenocorticotropic hormone; HPA: hypothalamic-pituitary-adrenal

A recent study done in 2019 explored the idea of elevated neonatal bilirubin levels as a proposed mechanism. Neonatal bilirubin levels showed a positive trend of association with childhood obesity in preterm infants. However, the data were collected over 50 years ago and may not reflect the current trend of obesity, which is the main limitation of that study [5].
Parental feeding patterns and introducing solid foods after six months of corrected gestational age are reported in some studies to potentially lower the risk of unfavourable weight trajectories into childhood [10]. This article will further explore feeding habits in relation to accelerated weight gain or catch-up growth. The review aims to highlight preventable mechanisms of childhood obesity in premature infants. Addressing these factors effectively will help decrease the burden of childhood obesity and its devastating complications. Childhood obesity is a worldwide burden, and this article can benefit both developed and developing countries.

Review

Despite the rapid increase in prevalence of, and interest in, childhood obesity over the last few years, the exact mechanism remains uncertain in the population born prematurely. The causes of childhood obesity can be examined through a developmental framework considering the role of metabolic inflammation. A host's inflammatory response can be acute, removing the negative stimulus and returning the body to its pre-injury state. Chronic inflammation is seen when rapid clearance mechanisms fail or are incomplete, or, in the case of obesity, when gradual or repeated alterations occur to normal physiology [8]. Many initiating events can cause the expansion of adipose tissue and trigger chronic systemic inflammation. These are shown in Table 2 and can be divided into prenatal, perinatal, and childhood factors. Prenatal factors are a programmed inheritance, including placental inflammation and epigenetic changes in paternal germ cells. Environmental factors include endotoxemia or microbiome alterations throughout the life course. Intrinsic growth rates are also crucial in the perinatal period, as either intrauterine growth restriction or rapid growth. Physical activity and dietary habits are essential factors in adipose tissue metabolism during adolescence [8].
TABLE 2: Initiating factors for childhood obesity
Prenatal factors: placental inflammation
Perinatal factors: intra-uterine growth restriction
Childhood factors: inactivity/poor diet

A meta-analysis done in 2020 examining prematurity and the risk of childhood obesity showed that premature newborns had a higher likelihood of childhood obesity at the age of six to 16 compared to term infants [11]. No difference in childhood obesity was seen between preterm infants described as small for gestational age (SGA) and those appropriate for gestational age (AGA). Furthermore, prematurity increases the risk of a higher body mass index by a factor of 1.2 [11]. These findings were consistent with many previous studies [12,13].

Prenatal factors

The Barker hypothesis and the thrifty gene hypothesis claim that exposures in the prenatal and perinatal period increase the likelihood of preterm infants storing extra fat and energy and can increase the prevalence of chronic diseases later in life [11]. This suggests that inflammation can contribute to the development of obesity. The prenatal period can be influenced by both the maternal and paternal environment [8]. Animal studies revealed that paternal obesity also leads to epigenetic changes in sperm and triggers hypothalamic inflammation in the offspring [14]. Studies in mice show that maternal obesity increases adipose tissue inflammation in the offspring. Placental inflammatory macrophages are elevated in obese mothers and release pro-inflammatory cytokines [15]. Maternal obesity is an additional risk factor for increased serum inflammatory markers in preterm infants but not in term newborns [16]. Researchers have also proposed that leptin plays a role in weight regulation in infancy, reporting an association between higher leptin at age three and more significant weight gain and adiposity in later childhood [17].
The hypothalamic-pituitary-adrenal axis and its dysregulation in premature infants leading to obesity

The HPA axis is a primary mechanism thought to be closely linked to stress. Exposure to stress in utero and soon after delivery is high in premature infants. The initial maternal separation, the neonatal intensive care experience, and subsequent infections may pose disastrous health and developmental challenges that affect the body's stress response. This can also be observed as a dysregulation of the HPA axis [9]. Higher levels of HPA reactivity to stress early in life, especially if experienced chronically throughout early childhood and adolescence, are associated with adverse health consequences. This process results in higher levels of stress hormones such as cortisol, leading to cardiovascular and metabolic disease such as obesity, manifesting even in later childhood years or early adolescence [18].

Neonatal serum bilirubin in premature infants leading to obesity

Neonatal jaundice is common, and preterm infants are more susceptible to higher serum bilirubin levels than term newborns. As a result of the decreased survival of circulating fetal red blood cells, there is an increase in the bilirubin load on the hepatocytes, which results in increased enterohepatic circulation of bilirubin. Additionally, there is decreased hepatic uptake of bilirubin in conjunction with defective bilirubin conjugation. As a result of increased neonatal red cell production and destruction, together with hepatic and gastrointestinal immaturity, hyperbilirubinemia in preterm infants is more prevalent and severe, and its duration is often more protracted, than in term neonates [19]. A large prospective birth cohort was followed over seven years, and the observed trend was that the higher the level of neonatal serum bilirubin, the higher the risk of childhood obesity [5].
The suggested underlying mechanism is that exposure to extreme levels of bilirubin results in neurotoxicity leading to more neurodevelopmental disabilities. Children with these disabilities often engage in less physical activity and have behavioral problems resulting in dysregulation of food intake and poor sleep. This results in a higher risk of developing childhood obesity [20]. However, this study has some limitations. The main one is that the data were collected more than 50 years ago: birth weight, socioeconomic status, body composition, and maternal pregnancy factors differed from those of preterm infants now. Additionally, genetic disorders that could contribute to jaundice and obesity were not differentiated. Caloric intake and exercise levels during childhood were not evaluated, which could be confounding factors. Multiple randomized trials need to be conducted to confirm whether the relationship between bilirubin levels and childhood obesity is still relevant.

Accelerated weight gain in premature infants leading to obesity

Aggressive nutritional intervention and catch-up growth in preterm infants were previously justified from a neurodevelopmental perspective. However, early weight gain may cause childhood obesity without advancing cognitive intelligence [11]. Weight is closely monitored in preterm infants, and there are usually strict discharge weight criteria. Thus, nutritional management for vulnerable preterm infants needs to be overseen by a specialized team. Early nutritional consultation and healthy catch-up growth that maintains a normal growth trajectory are key steps in preventing obesity [21]. A meta-analysis that compared 19 studies found that accelerated weight gain was a serious risk factor for developing childhood obesity, increasing the risk of obesity 2.69 times in children between eight and 11 years old [11].
This is a consistent finding, as other studies have shown that rapid postnatal growth in the first six to 12 months of life is a strong risk factor for metabolic disease [22]. Postnatal overfeeding has also been studied in rodents, where it causes hypothalamic inflammation as early as 14 days after birth, leading to dysregulation of this axis even into adulthood. This suggests that the inflammatory set point of the brain is permanently changed by accelerated early-life growth [23,24].

Feeding patterns in premature infants leading to obesity

Infancy (the first year of life) can be targeted as a critical period to prevent childhood obesity. Breastfeeding has been shown to be a protective factor [25]. A meta-analysis showed that breastfeeding reduced the odds of being overweight or obese by 13% [26]. Another study found that each additional month of breastfeeding further reduced the prevalence of being overweight by 4% [27]. A higher prevalence of obesity is seen among children who were never breastfed or who were breastfed for less than six months, compared to those breastfed for more than six months, suggesting that breastfeeding is a preventive factor in developing childhood obesity [25]. Exclusive breastfeeding also prevents the early introduction of complementary foods that could lead to excessive weight gain. Studies have also shown that protein and total energy intakes are lower in breastfed infants than in formula-fed ones. Human milk is rich in Bifidobacteria, which are found to a lesser extent in the guts of obese children [28]. Furthermore, breast milk regulates food intake and energy balance through the hormonal and biological factors it contains; this may help shape long-term physiological processes responsible for maintaining weight and preventing obesity. By promoting healthy weight gain in infancy, breastfeeding can potentially program an individual for a lower risk of obesity later in life [28].
Additionally, evidence shows that formula-fed infants have higher plasma insulin levels than breastfed infants, which stimulates fat deposition and early development of adipocytes [29]. Formula-fed neonates also tend to have higher body weight, which may suggest that both the higher protein intake and the weight gain in early life contribute to developing childhood obesity [30]. A large cohort study done in 2020 suggested that the most important predictor of childhood obesity is the trajectory of the body mass index z-score (adjusted for the child's age and sex). Introducing solid foods after six months of corrected gestational age could lower the risk of an unfavorable course toward childhood obesity [10]. The postnatal period is a sensitive window for patterning both nutritional and inflammatory responses, which can correlate with long-lasting and devastating metabolic complications, such as obesity.

Limitations and recommendations

The main limitation of this study is that risk factors such as the socio-economic status of the families, complications at birth, and genetic disorders were not factored into all the selected studies, which could skew the results. Additionally, some studies used data collected 50 years ago, which may no longer be relevant to today's population. Future recommendations include forming large prospective birth cohort studies with long follow-up periods to accurately understand the underlying mechanisms that cause childhood obesity in the preterm population. Studies should also use standardized measurements detectable early in life, which can be repeated and measured later in childhood. By understanding the critical factors leading to childhood obesity, targeted management can help decrease the burden of obesity and metabolic comorbidities.

Conclusions

Premature infants are at increased risk of developing disastrous metabolic complications such as childhood obesity.
The complex underlying mechanisms that can cause childhood obesity in infants born prematurely were explored. There are most likely multiple interconnected and supporting mechanisms that put this vulnerable population at risk of childhood obesity. Inflammation is explored as a possible root cause, along with prenatal factors, HPA axis dysregulation, and levels of neonatal serum bilirubin. Furthermore, preventable factors in the postnatal period were identified, such as weight gain and exclusive breastfeeding. Childhood obesity is increasing globally at an alarming rate, and this paper adds to the understanding of the critical factors that play a role in its mechanism in children born preterm. Future research needs to focus on creating long-term prospective cohort studies to find a temporal relationship between preterm birth and the causal link to childhood obesity.

Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Wave-Processing of Long-Scale Information by Neuronal Chains

Investigation of the mechanisms of information handling in neural assemblies involved in computational and cognitive tasks is a challenging problem. Synergetic cooperation of neurons in the time domain, through synchronization of the firing of multiple spatially distant neurons, has been widely adopted as the main paradigm. Complementarily, the brain may also employ information coding and processing in the spatial dimension. Then the result of a computation depends also on the spatial distribution of long-scale information. The latter bi-dimensional alternative is notably less explored in the literature. Here, we propose and theoretically illustrate a concept of spatiotemporal representation and processing of long-scale information in laminar neural structures. We argue that relevant information may be hidden in self-sustained traveling waves of neuronal activity, whose nonlinear interaction then yields efficient wave-processing of spatiotemporal information. Using a chain of FitzHugh-Nagumo neurons as a testbed, we show that wave-processing can be achieved by incorporating an additional voltage-gated membrane current into the single-neuron dynamics. This local mechanism provides a chain of such neurons with new emergent network properties. In particular, nonlinear waves as carriers of long-scale information exhibit a variety of functionally different regimes of interaction: from complete or asymmetric annihilation to transparent crossing. Thus neuronal chains can work as computational units performing different operations on spatiotemporal information. Exploiting complexity resonance, these composite units can discard stimuli of too high or too low frequency, while selectively compressing those in the natural frequency range. We also show how neuronal chains can contextually interpret raw wave information.
The same stimulus can be processed differently or identically according to the context set by a periodic wave train injected at the opposite end of the chain.

Introduction

Distributed spatiotemporal processing of neural information is widely recognized as the basis for binding and the generation of ultimate cognitive abilities in the brain [1,2]. Gamma waves have been postulated as a carrier of such high-order functions [3,4]. Recently the propagation of solitary waves in two-dimensional neuronal structures has been proposed as a means for the generation of compact internal representations of external dynamic situations [5,6]. Thus growing evidence suggests that neurons can participate in a collective processing of long-scale information, a relevant part of which is shared over all neurons rather than concentrated at the single-neuron level. In this context we define wave-processing of information as a computation (in terms of the modification of global information contained in a neuronal structure) mediated by the nontrivial interaction of waves propagating over neuronal tissue. Thus the brain may actively work not only in the time domain but also effectively use the spatial dimension for information processing. Despite wide consensus on the significant relevance of long-scale waves for information processing, the neurophysiological and biophysical bases of their origin and interaction are largely unknown. Indeed, in the vast majority of experimental and theoretical models, waves traveling over dissipative excitable media (including neuronal structures) vanish at collision (see e.g. [7][8][9]). For example, the refractory period behind traveling waves of spreading depression forces their annihilation after collision [10,11]. Obviously, the complete destruction of neuronal excitation caused by the interaction of waves cannot contribute to effective and versatile processing of information.
A remarkable exception is the backpropagation of action potentials in dendrites, involved in plasticity mechanisms and stimulus selection [12]. Recent experimental and modeling results show that the annihilation of colliding dendritic spikes, far from being a residual phenomenon, could be crucial for information processing in active dendrites [13,14]. At the mesoscopic level, recent studies of local field potentials created by synaptic currents in dendrites revealed nontrivial interaction of the confluent inputs to populations of target cells [15][16][17]. In particular, it has been found that the Schaffer input to the CA1 region of the hippocampus is composed of wave trains in the gamma band. Then the coordinated activity of CA3 pyramidal neurons increases the information flux in this pathway. Another obstacle to the spread of the concept of wave-processing is its scant experimental support, due to significant difficulties in the detection of macroscopic waves in multi-electrode data and their functional interpretation [18]. Most of the waves described in the literature have a pathologic nature and hardly participate in information processing. Examples are large-range epileptic waves, Leão's spreading depression, spiral waves in heart tissue, etc. [11,[19][20][21]. Nevertheless, the importance of self-sustained waves propagating and interacting throughout the intricate neuron morphology has recently been put in evidence [4,[22][23][24]. For example, it has been found that sniffing an odor induces three waves at different locations of the turtle olfactory bulb [22]. These waves then interact in a complex way. When consecutive odor stimulations are presented, one of the waves is enhanced if the odorants are the same but suppressed if they are different. This finding suggests that waves may carry information about previous olfactory experience and process it appropriately.
Thus the investigation of mechanisms allowing neuronal structures to represent and process information in a significantly spatiotemporal way is a challenging theoretical and experimental problem with vital impact in different fields of Neuroscience, Medicine, and Nonlinear Dynamics. One of the most successful approaches for dealing with the processing of long-scale information uses the FitzHugh-Nagumo (FN) paradigm, which under simple mathematical assumptions captures essential functional features exhibited by neurons. The FN-model has been widely used to describe biological neural networks, the interaction and propagation of waves, and the processing of information (see e.g. [25][26][27] and references therein). Nonetheless, these works assume that neurons locally create information, which is then transmitted, shared, and processed at the network level. We, however, shall demonstrate that the nonlinear interaction of self-sustained waves, as carriers of information, can be implemented in classical chains of coupled FN-like neurons. Then such chains, modeling laminar neuronal structures, acquire the ability of wave-processing of long-scale information. Head-on collision of self-sustained waves in classical FN-chains leads to their complete annihilation. Such monostable interaction offers little, if any, computational capacity, whereas versatile wave-processing of information requires bistable interaction of waves. Thus, simultaneously with wave annihilation, the network dynamics has to admit at least one more significantly different response to the input stimuli, i.e. traveling waves should be able to cross each other. Transparent crossing of self-sustained waves has been known for a long time. In the last decades it has been shown that such behavior is not an exclusive attribute of solitons, but a generic property observed experimentally [28,29] and numerically [25,[30][31][32][33][34].
The mechanism of crossing of self-sustained waves has been attributed to different nonlocal properties of the medium, e.g. cross-diffusion [34]. In this work we show that versatile wave-processing of long-scale information in laminar neural structures, described within the FN paradigm, can be achieved by introducing into the single-neuron dynamics an additional voltage-gated membrane current. This local mechanism, ubiquitous in real neurons [35], provides a chain of such neurons with new emergent network properties. In particular, nonlinear waves as a carrier of long-scale information exhibit a variety of functionally different regimes of interaction, from complete or partial annihilation to transparent crossing. Thus neuronal chains can work as computational units performing different operations over spatiotemporal information. To further illustrate the great potential of the concept we show that neuronal chains can "discard" stimuli of too high or too low frequencies, while selectively compressing those in the "natural" frequency range, i.e. we observe the phenomenon of complexity resonance. We also show how raw wave information can be contextually "interpreted" by a neuronal chain, i.e. the chain can process the same stimulus differently or identically according to the context set by a periodic wave train injected at the opposite end.

Interaction of Waves in Chains of Coupled Neurons

We shall illustrate the concept of information wave-processing using a one-dimensional chain of FN-like neurons:

    ε du_j/dt = f(u_j, v_j) + d (u_{j+1} − 2u_j + u_{j−1}),
    dv_j/dt = u_j − a v_j + b,                                  (1)

where u_j and v_j are the so-called membrane potential and recovery variable of the jth neuron, respectively; 0 < ε ≪ 1 is the smallness parameter; and f(u, v) accounts for the nonlinear kinetics of the transmembrane currents. Finally, a > 0, b > 0, and the parameter d ≥ 0 accounts for the strength of the couplings between neighboring neurons.
The chain (1) is considered with Dirichlet boundary conditions: u_0 = u_{N+1} = u*, where N is the total number of neurons in the chain and u* is the resting potential.

FitzHugh-Nagumo Dynamics

In the original FN-neuron the membrane kinetics is given by:

    f(u, v) = u − u³/3 − v.                                     (2)

Setting in (1) a = 1.3 and b = 0.273 (ε = 0.09, d = 0) we ensure that a single FN-neuron has a unique attractor, a stable steady state (u*, v*) with v* = (u* + b)/a, where u* ≈ −1.12 a.u. defines the resting potential. Any perturbation of the neuronal state decays to the steady state; however, a small but finite excitation can lead to a large excursion in the phase plane, i.e. to a spike (Fig. 1A).

Voltage-gated Depolarizing High-threshold Current

Let us now introduce into the neuron's kinetics an additional voltage-gated high-threshold current, e.g. due to Ca²⁺ conductance:

    f(u, v) = u − u³/3 − v + c H(u − u_th),                     (3)

where H(·) denotes a Heaviside-like step function (we assume H ∈ C(ℝ, [0, 1])), u_th is the voltage threshold (we set u_th = 1.7 in numerical simulations), and c describes the magnitude of the additional current. We note that the extended neuron model with the kinetics (3) reduces to the classical FN-neuron at c = 0. For u_th big enough (u_th > 2 for ε → 0) the neuron conserves the FN-intrinsic excitable property and can generate spikes similarly to the FN-neuron (Figs. 1A and 1B, blue curves). By raising c above the critical value

    c* = (u_th + b)/a + u_th³/3 − u_th,

a pair of additional steady states appears on the phase plane of the single neuron through a fold bifurcation. Thus the neuron becomes bistable and can stay at rest either in the "down" or the "up" state, whereas a saddle point separates their basins of attraction. Strong enough perturbations can switch the neuron between the down and up states, whereas at the down state it can also generate spikes (Fig. 1B). The bistable property of the neuron, together with excitability, makes the collective dynamics of a chain of such neurons (e.g. the interaction of waves) nontrivial.
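To make the single-neuron model concrete, here is a minimal Python sketch. Since the displayed equations are not fully legible in this copy, the kinetics are an inference: the standard FN form f(u, v) = u − u³/3 − v + cH(u − u_th) with dv/dt = u − av + b, which reproduces the quoted resting potential u* ≈ −1.12 a.u. and the stated expression for c*; a sharp step stands in for the smooth Heaviside-like H.

```python
import numpy as np

# Parameter values quoted in the text; kinetics (2)-(3) as reconstructed above.
A, B, EPS, U_TH = 1.3, 0.273, 0.09, 1.7

def f(u, v, c=0.0):
    """Membrane kinetics: classical FN part plus the high-threshold current."""
    return u - u**3 / 3 - v + c * np.heaviside(u - U_TH, 0.5)

def simulate(u0, v0, c=0.0, T=20.0, dt=1e-3):
    """Explicit-Euler integration of a single (uncoupled, d = 0) neuron."""
    u, v = u0, v0
    trace = []
    for _ in range(int(T / dt)):
        u, v = u + dt * f(u, v, c) / EPS, v + dt * (u - A * v + B)
        trace.append(u)
    return np.array(trace)

# Resting potential u*: the single real root of u - u^3/3 - (u + b)/a = 0.
roots = np.roots([1.0, 0.0, 3 * (1 / A - 1), 3 * B / A])
u_star = roots[np.abs(roots.imag) < 1e-9].real.min()
v_star = (u_star + B) / A

# Fold threshold for the extra current (formula from the text).
c_star = (U_TH + B) / A + U_TH**3 / 3 - U_TH

trace_rest = simulate(u_star, v_star)            # stays at the resting state
trace_spike = simulate(u_star + 0.8, v_star)     # suprathreshold kick -> spike
trace_up = simulate(1.9, (1.9 + B) / A, c=2.0)   # c > c*: "up" state persists
```

With these numbers the script yields u* ≈ −1.12 a.u. and c* ≈ 1.46, a subthreshold-perturbed neuron that stays at rest, a suprathreshold kick that fires a spike, and (for c > c*) a persistent up-state, in line with Fig. 1.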
Role of Depolarizing Current in Head-on Collision of Waves in Neuronal Chains

The classical excitable FN-chain (1), (2), for strong enough coupling d, admits self-sustained pulse-like running waves. Figure 2A illustrates the head-on collision of such waves, which leads to their annihilation. As mentioned above, such behavior is typical for waves with a refractory period (see e.g. [8] for a general discussion and [10,19] for electrophysiological and theoretical examples). Thus only trivial wave-processing of information, i.e. its annihilation, can be achieved in this chain. To cope with this restriction, above we extended the FN-model (1), (3). Figure 2B shows the wave behavior in the chain of bistable-excitable neurons. At the beginning the wave dynamics repeats that of the classical FN-chain (snapshot t_1). Indeed, under standard conditions of wave propagation the membrane potential u_j(t) does not reach the threshold u_th and the extra membrane current in (3) is negligible. Hence no difference exists between the wave behavior of the classical chain and the chain of bistable-excitable neurons. However, when the waves collide, the membrane potential in the collision region overcomes u_th and the emerging extra membrane current changes their dynamics (Fig. 2B, snapshot t_2). The balance between the depolarizing membrane current and the axial (along the chain) diffusive current creates a new quasi-stable structure, a wave generator (Fig. 2B, snapshot t_2). The drive exerted by the wave generator transiently prevents the chain excitation from collapsing and emits two new waves propagating in opposite directions (Fig. 2B, snapshots t_3, t_4). Finally, when the newly created waves run away, the balance between the excitatory and dissipative currents breaks and the wave generator collapses (Fig. 2B, snapshot t_5).
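The classical head-on collision (c = 0) can be reproduced with a few lines of explicit-Euler integration. The sketch below again assumes the reconstructed form of chain (1) (an inference, since the displayed equation is garbled in this copy), with Dirichlet ends pinned at u*. Both ends receive a suprathreshold kick; the two launched waves meet near the middle and, lacking the extra current, annihilate, returning the whole chain to rest.

```python
import numpy as np

A, B, EPS, U_TH = 1.3, 0.273, 0.09, 1.7

def f(u, v, c=0.0):
    """Kinetics (3); reduces to the classical FN kinetics (2) at c = 0."""
    return u - u**3 / 3 - v + c * np.heaviside(u - U_TH, 0.5)

def run_chain(N=100, c=0.0, d=1.0, T=60.0, dt=2e-3):
    """Euler integration of the reconstructed chain (1), Dirichlet ends at u*."""
    roots = np.roots([1.0, 0.0, 3 * (1 / A - 1), 3 * B / A])
    u_star = roots[np.abs(roots.imag) < 1e-9].real.min()
    u = np.full(N, u_star)
    v = (u + B) / A                      # resting recovery variable
    u[:5] = 0.5                          # suprathreshold kick at the left end...
    u[-5:] = 0.5                         # ...and at the right end
    mid = []                             # membrane potential of the middle neuron
    for _ in range(int(T / dt)):
        lap = np.empty(N)
        lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
        lap[0] = u[1] - 2 * u[0] + u_star      # Dirichlet: u_0 = u*
        lap[-1] = u_star - 2 * u[-1] + u[-2]   # Dirichlet: u_{N+1} = u*
        u, v = u + dt * (f(u, v, c) + d * lap) / EPS, v + dt * (u - A * v + B)
        mid.append(u[N // 2])
    return u_star, u, np.array(mid)

u_star, u_final, mid = run_chain()  # classical regime: c = 0
```

The middle neuron is transiently excited when the two counter-propagating pulses meet, and the final state returns to rest everywhere; rerunning with c > 0 (run_chain(c=...)) activates the extra current in the collision region, which is the mechanism behind the wave generator described in the text.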
Thus the relation between the magnitude of the voltage-gated excitatory current, controlled by c, and the axial (coupling) current, controlled by d, defines the functional regime of the wave collisions. As we shall see below, the chain (1), (3) can exhibit a rich repertoire of behaviors and unexpected computational capabilities, which stem from the possibility of waves crossing each other. It is also worth noting that for small enough interneuronal coupling d the chain possesses several stationary or quasi-stationary behaviors, including variants of spatial chaos (see for details e.g. [36,37]). We, however, concentrate here on the wave behavior and hence below restrict ourselves to the case d ≥ 1.

Bases of Information Wave-processing

As we shall see further, the computational abilities of neuronal chains are based on the coexistence of significantly different scenarios of wave collisions. In other words, for effective information processing the chain must admit at least two collision scenarios for the same parameter values. Above (Fig. 2B) we observed one scenario, the wave-crossing, which (to some extent) conserves the information in the chain. Let us now show that the dynamics of the chain of bistable-excitable neurons can be even more complex.

Collision scenarios. First, we assume that the colliding pulses are stationary waves, i.e. all transient processes of the wave formation have vanished and the waves are given by u_j(t) = ũ(j ± ct) < u_th, where ũ(·) is a pulse-like function and c is the wave velocity. Figures 3A–3D show the spatiotemporal evolution of two symmetric colliding waves for different values of the magnitude of the additional excitatory membrane current (controlled by c). For small enough c two colliding waves annihilate, as typically happens in the FN-chain in particular and in reaction-diffusion systems in general (Fig. 3A). For moderate values of c the waves cross each other, enabling transparent transmission of wave-information (Fig. 3B).
We notice a positive phase-shift at the collision, i.e. a delay in the wave reemission. For even higher c the neurons involved in the collision are switched to the up-state and form a pacemaker that emits a periodic sequence of waves (Fig. 3C), i.e. a new source of wave-information emerges in the chain at the place of spatial coincidence of the waves. Finally, for high enough c the up-state becomes dominant and two phase waves emerging at the collision switch the chain from the down- to the up-state (Fig. 3D). Such behavior is similar to waves of spreading depression in the hippocampus [19]. We note that the phase transition is "supersonic", i.e. it propagates faster than subthreshold "sound" waves. Second, we consider asymmetric collisions of a stationary traveling wave with a wave newly excited by a stimulus applied near the place of the future collision. In general, asymmetric collisions lead to asymmetry in the wave creation. For moderate c we observe selective annihilation of a part of the information (Fig. 3E vs 3B). Such behavior is atypical for solitons and for traveling waves in most reaction-diffusion systems (including the classical FN-chain). We also note that the behaviors shown in Figures 3B and 3E correspond to the same parameter values, i.e. the chain exhibits bistable interaction of waves, a condition required for effective wave-processing of information. For slightly higher c the waves cross each other as in Figure 3B, but now the newly created waves are desynchronized, i.e. they receive different phase shifts (Fig. 3F). For the value of c corresponding to the formation of a pacemaker the released waves again have different phase shifts (Fig. 3C vs 3G). Similarly, in the phase wave regime the wave emitted to the right has a lower phase shift (Fig. 3H vs 3D).

Bifurcation analysis of wave-processing. The numerically found collision scenarios (Fig. 3) correspond to functionally different states of the information processing in the chain.
In order to gain insight into the dynamics of the wave interaction we studied the bifurcations occurring in the system. The stationary solutions of Eqs. (1), (3) are given by the 2D map

    u_{j+1} = 2u_j − u_{j−1} − f(u_j, v_j)/d,    v_j = (u_j + b)/a.

The map admits three constant solutions (fixed points), which correspond to the steady states of a single neuron (Fig. 1B); for example, s_1 = u* ≈ −1.12 a.u. is the down-state. The fixed point p_1 = (s_1, s_1) is of a saddle type. There exists a variety of orbits homoclinic to p_1. Figure 4A shows the stable, W^s(p_1), and unstable, W^u(p_1), manifolds; their intersections define homoclinic orbits. Several spatial profiles of the homoclinics are shown in Fig. 4B. They differ by the width of the stationary solution, and one of them (green in Fig. 4B) corresponds to the width of the wave generator transiently formed during the wave collision (Fig. 2B, t = t_2). Following Ref. [25] we call such an orbit (spatial profile) a nucleating solution (NS). To describe the bifurcations of the homoclinics we introduce an integral characteristic of the orbit. Then, using one of the orbits provided by the intersection of the manifolds W^u(p_1) and W^s(p_1) as an initial point, we continued the homoclinics over the control parameter c (Fig. 4C). This analysis shows that there is a critical value of c below which there is no nucleation and hence colliding waves annihilate (Fig. 3A). For nontrivial collisions (Figs. 3B–3H) the existence of an NS is a prerequisite. Under collision the trajectory in the phase space of the chain (1), (3) passes near the steady state corresponding to the NS, which guides the further scenarios of the wave behavior. We then linearized the system (1), (3) in a vicinity of this steady state, which turned out to be a saddle. Indeed, its spectrum has one zero eigenvalue, corresponding to the translation symmetry in the chain, and two pairs of complex eigenvalues with positive real parts (Fig. 4D). Figure 4E shows the corresponding eigenvectors that describe the scenarios of the development of the instability.
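The stationary map and the saddle character of p_1 can be checked directly. The sketch below assumes the reconstructed equations (with a sharp step in place of H, and an illustrative value c = 1.6): it verifies that (s_1, s_1) is a fixed point, computes the two multipliers of the linearized map (their product is 1, so a saddle has one inside and one outside the unit circle), and shows that a tiny perturbation escapes along the unstable direction.

```python
import numpy as np

A, B, D, C, U_TH = 1.3, 0.273, 1.0, 1.6, 1.7   # c = 1.6 is an illustrative value

def F(u, c=C):
    """f(u, v) with v eliminated on the nullcline v = (u + b)/a (sharp step H)."""
    return u - u**3 / 3 - (u + B) / A + c * np.heaviside(u - U_TH, 0.5)

def step(x, y, d=D, c=C):
    """One iteration of the stationary map: (u_{j-1}, u_j) -> (u_j, u_{j+1})."""
    return y, 2 * y - x - F(y, c) / d

# Down-state s1: the single real root of F(u) = 0 below u_th (H inactive there).
roots = np.roots([1.0, 0.0, 3 * (1 / A - 1), 3 * B / A])
s1 = roots[np.abs(roots.imag) < 1e-9].real.min()

# Multipliers of the map linearized at p1 = (s1, s1):
#   lambda^2 - (2 - F'(s1)/d) * lambda + 1 = 0   (area-preserving: product = 1).
Fp = 1 - s1**2 - 1 / A
lam = np.roots([1.0, -(2 - Fp / D), 1.0])

# A tiny perturbation of p1 escapes along the unstable direction of the saddle.
x, y = s1 + 1e-4, s1
escaped = False
for _ in range(40):
    x, y = step(x, y)
    if abs(y - s1) > 1.0:
        escaped = True
        break
```

For these parameters the multipliers come out real, roughly 2.6 and 0.38, confirming the saddle type of p_1 claimed in the text.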
Both unstable directions have the same exponent, and hence the winner is determined by how the trajectory enters the saddle region, i.e. by the initial perturbation created at the wave collision. At symmetric collisions (Figs. 3B–3D) the perturbation is also symmetric, going along the symmetric eigenvector e_2(j) (Fig. 4E). This leads to the generation of a pair of symmetric pulses at the tails of the NS. Asymmetric collisions break the symmetry and the NS is asymmetrically perturbed, i.e. the initial conditions are shifted toward the asymmetric eigenvector e_1(j). Then we have opposite drives in the tails of the NS, which is the origin of the asymmetry in the forming structure. After the first local separation along the unstable manifold, the subsequent behavior of the chain is nonlocal and depends on the control parameters. Figure 5 shows the complete bifurcation diagram of the neuronal chain (for d ≥ 1). It has four domains with qualitatively different behaviors. In the region of wave annihilation the NS does not exist and, independently of the collision symmetry, the initial perturbations go straight to the down-state, which corresponds to scenario A in Figure 3. In the remaining domains the NS separates trajectory flows, which gives rise to the symmetric and asymmetric scenarios. In the wave crossing domain the unstable manifold of the NS pushes the trajectory out on a big excursion, which results in the reemission of two symmetric waves, one single wave, or two asymmetric waves (scenarios B, E, and F in Fig. 3, respectively). In the pacemaker domain a limit cycle is born from a saddle-node type bifurcation, which results in the emission of periodic waves of finite amplitude (scenarios C and G). Finally, in the phase wave domain the trajectories are redirected to the up-state, and hence the chain is switched dynamically to the up-state (scenarios D and H).
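The linearization step is easy to reproduce for the simplest steady state, the uniform down-state, which (unlike the NS) must be stable. The sketch below builds the 2N×2N Jacobian of the reconstructed chain (1) at u_j = u*, v_j = v* (the step current is inactive there and drops out of the linearization) and confirms that the whole spectrum lies in the left half-plane; the NS, by contrast, carries the unstable pairs discussed above.

```python
import numpy as np

A, B, D, EPS = 1.3, 0.273, 1.0, 0.09
N = 40

# Resting potential of the reconstructed model.
roots = np.roots([1.0, 0.0, 3 * (1 / A - 1), 3 * B / A])
u_star = roots[np.abs(roots.imag) < 1e-9].real.min()

# Dirichlet Laplacian of the chain (tridiagonal).
L = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
I = np.eye(N)

# Block Jacobian of (1) at the uniform down-state:
#   eps * d(du)/dt = (1 - u*^2) du - dv + d * L du
#         d(dv)/dt = du - a dv
J = np.block([
    [((1 - u_star**2) * I + D * L) / EPS, -I / EPS],
    [I, -A * I],
])
ev = np.linalg.eigvals(J)
```

For every Laplacian mode the 2×2 block has negative trace and positive determinant, so all 2N eigenvalues have negative real parts, i.e. the rest state of the chain is linearly stable, as it must be for an excitable medium.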
Finally, we note that one of the most interesting regions, the wave-crossing, extends over quite a large area in the parameter space (Fig. 5). Thus the observed phenomena of wave-crossing (Figs. 3B and 3E) are robust to variations of e.g. the wave velocity (controlled by d) and amplitude.

Wave-processing of Long-scale Information

As mentioned above, different functional regimes in neuronal chains can be achieved by proper adjustment of the coupling strength between neurons and the membrane voltage-gated current (Fig. 5). One of the most interesting regimes, the wave crossing, occurs for intermediate values of both parameters. In this section we study what computational abilities such a functional state may offer.

Concurrence of periodic wave trains: Four types of wave-processing. The real potential of the wave-processing of neural information arises in realistic biological contexts. For example, the interaction of coordinated inputs from the lateral and medial entorhinal cortex to the laminar structure of the hippocampus participates in the consolidation of memory [38]. Let us now simulate the concurrence of two coordinated inputs to a spatially extended laminar neuronal structure. We shall model the information content by two periodic wave trains injected into a chain of bistable-excitable neurons from opposite ends (Fig. 6A). After the nonlinear interaction, in general, the wave trains change their internal structure and we get two emergent output trains carrying the processed information. Figure 6B.1 shows the spatiotemporal evolution of two colliding identical periodic trains. Since the chain is in the wave crossing regime (Fig. 5), two collision scenarios are possible: transparent wave crossing with a phase shift (Fig. 3B) and annihilation of one of the waves (Fig. 3E). Which of the scenarios is realized in each collision depends on a number of factors, e.g. on the time elapsed since the previous collision.
Indeed, when the spatial period between waves is small enough the newly created waves have no room to stabilize and one of them dies. In contrast, sparse waves (i.e. a long time between interactions) cross each other transparently. Thus the proper combination of symmetric and asymmetric crossings is behind the generation of new aperiodic wave patterns at the output. In Figure 6B.1 every odd wave propagates to the output. Thus we can speak about a kind of decimating processing. However, different waves receive different phase shifts in collisions, and consequently the structure of the output trains is more complex (aperiodic). To get deeper insight into the wave-processing we injected into the chain two periodic wave trains as above, but with different inter-wave periods. The trains' asymmetry leads to different dynamic processing of each train and the generation of new trains with complex inter-wave structures. Figure 6B.2 shows a representative example of such experiments. Both trains initially had 10 periodic waves spaced by 65 (train #1) and 30 (train #2) neurons. Four waves from train #1 and three from train #2 survived at the output. These were waves number 1, 5, 7, and 9, and 1, 7, and 10, for trains #1 and #2, respectively. We also notice the significantly different phase shifts obtained by each wave, which finally codify the number of collisions and their frequencies. Thus the neuronal chain can perform nontrivial information processing beyond decimation. It can dynamically select and precisely position in time only "desired" waves from a raw message, which finally convey mutual information in "compressed" form. In order to quantify the outcome of the wave-processing we introduce an entropic measure. Wave trains at the input and output were converted into binary vectors with ones corresponding to wave crests separated by blocks of zeros (silences). The bin size was equal to the spatial refractory period (20 neurons).
Then we evaluated the block entropy [39] over the set of words obtained by sliding a window of 10 symbols over the input and output vectors:

    S = −Σ_i p_i log₂ p_i,                                      (6)

where p_i is the relative frequency of the ith word. Although this measure may underestimate the real train entropy for finite trains, it suits well our purpose of quantifying the observed information compression. Finally, we evaluated the relative variation of the information content before and after the wave-processing as:

    δ = (S_out − S_in)/S_in.                                    (7)

As we expected, during the wave-processing the information contained in the wave trains grew significantly (Fig. 6C). The mean growth was about 75% in experiments with identical trains with the spatial period varying from 30 to 100 neurons. The high variability of the information increment (std ≈ 55) indicates a strong dependence of the wave-processing on the inter-wave period. Collision of wave trains with different periods (Fig. 6B.2) leads to different entropy increments. Train #1, the spatial period of which was kept constant, got a 100% mean increment (std ≈ 20), while train #2, the period of which was changed in the range [30, 100] neurons, received a 75% increase with std ≈ 58. Thus the overall characteristics of the wave-processing of the 2nd train were similar to the case of identical trains (Fig. 6C). The surprisingly low variability of train #1 (std ≈ 20) suggests that the informational outcome of the wave-processing of a train depends strongly on its own period but only slightly on the period of the other colliding train. Thus the chain can process information in different spatiotemporal domains, effectively reducing the number of waves. For colliding trains with large periods there is room for symmetric wave crossing and no annihilation occurs. Then the output trains are identical to the input ones, i.e. the trains transparently cross each other, receiving a global phase shift (Fig. 6D, red area: transparent propagation).
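The entropic measures (6) and (7) are straightforward to implement. The sketch below follows the binarization described in the text (ones at wave crests, 10-symbol sliding words); the base-2 logarithm is an assumption, as the base is not stated. It contrasts a strictly periodic train, whose 10-bit windows take only four distinct values (S ≈ 2 bits), with an aperiodic one.

```python
import numpy as np
from collections import Counter

def block_entropy(bits, word=10):
    """Block entropy (6): S = -sum_i p_i log2 p_i over sliding 10-symbol words."""
    words = [tuple(bits[i:i + word]) for i in range(len(bits) - word + 1)]
    p = np.array(list(Counter(words).values()), dtype=float) / len(words)
    return float(-(p * np.log2(p)).sum())

def relative_variation(s_in, s_out):
    """Relative change of information content (7): delta = (S_out - S_in)/S_in."""
    return (s_out - s_in) / s_in

# A strictly periodic train (one crest every 4 bins) versus an aperiodic one.
periodic = [1, 0, 0, 0] * 25
aperiodic = np.random.default_rng(0).integers(0, 2, 100).tolist()

s_p = block_entropy(periodic)    # only 4 distinct words -> close to 2 bits
s_a = block_entropy(aperiodic)   # almost all words distinct -> much larger
delta = relative_variation(s_p, s_a)
```

Decimation and the desynchronizing phase shifts acquired in collisions increase the number of distinct words in the output vector, which is exactly the growth of δ reported in Fig. 6C.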
For shorter spatial periods some asymmetric wave crossings appear, which decreases the number of waves propagating to the output (Fig. 6D, yellow area: soft processing). In soft processing at least one train conserves most of the input waves. For intermediate periods of both input trains the wave-processing, termed hard processing (Fig. 6D, green area), leads to the annihilation of the majority of the input waves. Finally, for really short periods (Fig. 6D, blue area: dark collision) annihilation dominates the wave-processing and only a few (usually only the first) waves propagate to the output. Transparent propagation does not alter the complexity measure (6), and hence δ = 0. The soft and hard processing regimes significantly increase the informational content at the output, i.e. δ is high, whereas dark collision leads again to δ ≈ 0. Thus we have a kind of band-pass filtering of periodic waves, but instead of a simple reduction of the train period we have changes in the train complexity. For intermediate spatial periods the information is maximal, and it then decreases for long and short periods. Such complexity resonance is reminiscent of the rate-temporal coding problem (see e.g. [40]). Indeed, our neuronal structure can "ignore" stimuli of too low frequency and "annihilate" those of too high frequency, while selectively processing stimuli in the "natural" frequency range. The processed stimuli are compressed and acquire a higher train complexity at the output.

Context dependent information processing. A remarkable quality of evolved living beings is their ability to interpret information according to circumstances. The response of an organism to the same stimulus can depend on, for example, its internal state or the external situation. Then the context acts like a framework for such high-level functions as learning, memory, understanding, etc. [6,41]. The proposed concept of wave-processing of information also includes contextualization as one of its central features.
To illustrate how the contextualization of raw long-scale information can be implemented in a neuronal structure, we used again the two-inputs paradigm (Fig. 7A). The left end of the neuronal chain has been designated as the informative input, i.e. it receives the information or stimulus to be processed by the chain. The purpose of the right end is dual. It is used: i) as an input for contextual trains and ii) for readout of the computation results. While the informative train can have a rather complex aperiodic structure and consequently high entropy, the contextual train may be quite simple. In all experiments we employed the same informative train shown in Figure 7A (raw information), whereas for setting different contexts we used periodic wave trains with different numbers of waves and inter-wave periods (Fig. 7B, left trains). In general, the interaction of the informative train with different contextual trains leads to different output trains (Fig. 7B, red trains). The output trains convey the information coded in the raw stimulus but modulated by the context. Thus the output message is a contextualized variant of the input information. Although different contexts usually yield different outputs, we notice that the same output may also occur (Fig. 7B, black trains). Such simultaneous divergence/convergence of contextual information processing is also known in nature. Indeed, organisms may act differently or identically to the same stimulus in different circumstances. In order to illustrate the great potential of the contextual wave-processing of information we performed the following experiments. Using the same input stimulus with high entropy (Fig. 7A, raw information), we tested contextual trains of different spatial lengths with three different spatial periods: 30, 65, and 100 neurons. To quantify the changes in the output wave train we employed two measures: i) the spatial length (related to compression) and ii) the relative entropy (related to complexity).
Figure 7C summarizes our results. We found that the length of the output train changes practically linearly with the length of the contextual train. The slope of the least-squares linear regression strongly depends on the period of the contextual train. Contextual trains with the shortest spatial period of 30 neurons (Fig. 7C, red triangles) exert the strongest impact on the length of the output (processed) train, whereas trains with the longest spatial period of 100 neurons have little effect on the output train (Fig. 7C, black squares). To confirm this observation we also evaluated the relative entropy (7). Since the input stimulus (raw information) has high entropy, in this case the wave-processing led to an entropy decrease (Fig. 7C, inset), i.e. the wave-processing selects only a part of the input information. In agreement with the previous results, we observed that the variability of the output information is maximal for contextual trains with a short period (30 neurons) and minimal for trains with a long period (100 neurons). Thus the neuronal chain offers an effective mechanism for the contextualization of the input information. We can easily control the characteristics of the processing by changing the length of the contextual wave-train, and tune the sensitivity to the context by changing its period.

Discussion

The questions "How is information represented in the brain?" and "What are the principles of its processing?" are among the most challenging in contemporary Neuroscience. It is now well accepted that different brain nuclei use different strategies for information handling. At the initial processing levels, primary brain nuclei codify sensory information in the form of spike trains. At this stage variants of the rate and time coding schemes are largely employed (see e.g. [40,42,43] and references therein). However, at upper levels the situation becomes much more complicated. Highly evolved nuclei involve distributed parallel processing of multimodal and multiscale information.
Complex networks made up of proximal and distant heterogeneous couplings coordinate neural activity at different sites [44]. Then the synchronization concept, based on the correlated firing of multiple spatially distant neurons (see e.g. [3]), has become widespread as a paradigm for computational and cognitive tasks. Although this hypothesis has received strong experimental and theoretical support, not all experimental facts can be easily fitted into the paradigm. It seems that besides the synergetic cooperation of neurons in the time domain, e.g. through the synchronization of spikes in different time windows, the brain may also employ information coding and processing in the spatial dimension. For example, waves of neural activity, functionally related to behaviors and global dynamics, have been found in the visual, sensory-motor, auditory, and olfactory cortices (see [24] for a review). In this work we proposed and theoretically illustrated a novel concept of significantly spatiotemporal representation and processing of long-scale information in laminar neuronal structures. We argued that relevant long-scale information may be hidden in spatiotemporal waves, abundant in different brain structures, and then the nonlinear interaction of such waves yields efficient information processing, which we called wave-processing. We note that the discussed wave-processing cannot be reduced to the synchronization paradigm, since it occurs in two dimensions, space and time, i.e. the result of the computation depends significantly on the spatial distribution of information. To implement wave-processing in a mathematical model we proposed a mechanism that relies on local single-neuron dynamics. We incorporated into the classical FitzHugh-Nagumo neuron an additional membrane current accounting for the dynamics of voltage-gated high-threshold ionic channels. Then a chain of such neurons acquires new emergent properties.
Namely, we have shown that nonlinear self-sustained waves can exhibit a variety of functionally different regimes of interaction, from complete or partial annihilation to transparent crossing. We provided a rigorous description of the bifurcations in the phase space of the corresponding dynamical system that lead to the different collision scenarios. It is worth noting that the model incorporates two types of multistability: of a single neuron (Fig. 1B) and of the wave collision (Figs. 3B, 3E). The existence of the former is not essential, i.e. the main results can be reproduced with a monostable single-neuron model (without an up state). However, the additional high-threshold conductance is a must for the multistable wave interaction. We have shown that the latter multistability, as a basic computational requisite at the network level, is governed by a special nucleating solution of saddle type with two generic routes leading to different scenarios of wave interaction. Thus, besides symmetric transparent wave crossing, the neuronal chain simultaneously admits asymmetric wave interaction, an asset for wave-processing. This regime of wave interaction occurs for intermediate (biologically plausible) values of the coupling strength between neurons and of the magnitude of the additional membrane current. We have shown that neuronal chains can exhibit nontrivial computational abilities mimicking different physiological processes in the brain. In particular, we described the phenomenon of complexity resonance and classified four available types of processing of wave information: Transparent propagation, Soft and Hard processing, and Dark collision. Using these ''computational tools'' a laminar neuronal structure can ''ignore'' stimuli of too high or too low frequencies (or spatial scales), while selectively processing those in the ''natural'' frequency range. Input stimuli are compressed and acquire higher complexity at the output, thus effectively codifying the raw information.
We have also shown that the concept of wave-processing naturally offers an effective mechanism for contextual computations, i.e. for the interpretation of raw information according to circumstances or context, which acts as a framework for high-level functions. We illustrated the contextualization of raw long-scale information using a complex stimulus as input information and periodic wave trains modeling different contexts. We have shown that the content of the output wave train depends linearly on the length of the contextual train, while the sensitivity to the context is controlled by the context frequency. As happens in nature, the contextualization of information obeys divergence/convergence properties. The neuronal chain can process a stimulus differently or identically in different circumstances. Thus neuronal chains can work as computational units performing different operations over spatiotemporal information. Both the biophysical basis of the model and its revealed computational features make it suitable for a functional description of global and sparse information processing in real neural networks. We expect that the concept of wave-processing could be involved in such high-level brain functions as path-planning and decision making. Indeed, to behave efficiently and actively in complex environments, evolved organisms create in the brain a model of the external world. This model is then used to perform mental ''computations'' and to test different decision alternatives in parallel (see e.g. [6] and references therein). To perform this task the brain should be able to map the 4-dimensional space-time structure of the external world into the internal neuronal space. It then seems reasonable to hypothesize that laminar brain structures (like e.g. the cerebral cortex) may naturally serve as a container for the information mapping, while neural waves may perform parallel computations over such space-time information.

Wave-Processing of Long-Scale Information PLOS ONE | www.plosone.org
The Metallicity Evolution of Star-Forming Galaxies from Redshift 0 to 3: Combining Magnitude Limited Survey with Gravitational Lensing

We present a comprehensive observational study of the gas-phase metallicity of star-forming galaxies from z ∼ 0 to 3. We combine our new sample of gravitationally lensed galaxies with existing lensed and non-lensed samples to conduct a large investigation into the mass-metallicity (MZ) relation at z > 1. We apply a self-consistent metallicity calibration scheme to investigate the metallicity evolution of star-forming galaxies as a function of redshift. The lensing magnification ensures that our sample spans an unprecedented range of stellar mass (3 × 10^7 − 6 × 10^10 M_sun). We find that at the median redshift of z = 2.07, the median metallicity of the lensed sample is 0.35 dex lower than the local SDSS star-forming galaxies and 0.18 dex lower than the z ∼ 0.8 DEEP2 galaxies. We also present the z ∼ 2 MZ relation using 19 lensed galaxies. A more rapid evolution is seen between z ∼ 1 and 3 than between z ∼ 0 and 1 for the high-mass galaxies (10^{9.5-11} M_sun), with almost twice as much enrichment between z ∼ 1 and 3 as between z ∼ 1 and 0. We compare this evolution with the most recent cosmological hydrodynamic simulations with momentum-driven winds. We find that the model metallicity is consistent with the observed metallicity within the observational error for the low-mass bins. However, for higher masses, the model over-predicts the metallicity at all redshifts. The over-prediction is most significant in the highest mass bin of 10^{10-11} M_sun.

INTRODUCTION

Soon after the pristine clouds of primordial gas collapsed to assemble a protogalaxy, star formation ensued, leading to the production of heavy elements (metals). Metals were synthesized exclusively in stars, and were ejected into the interstellar medium (ISM) through stellar winds or supernova explosions.
Tracing the heavy element abundance (metallicity) in star-forming galaxies provides a "fossil record" of galaxy formation and evolution. When a galaxy is considered as a closed system, its metal content is directly related to the yield and gas fraction (Searle & Sargent 1972; Pagel & Patchett 1975; Pagel & Edmunds 1981; Edmunds 1990). In reality, a galaxy interacts with its surrounding intergalactic medium (IGM), hence both the overall and the local metallicity distribution of a galaxy are modified by feedback processes such as galactic winds, inflows, and gas accretion (e.g., Lacey & Fall 1985; Edmunds & Greenhow 1995; Köppen & Edmunds 1999; Dalcanton 2007). Therefore, observations of the chemical abundances in galaxies offer crucial constraints on the star formation history and the various mechanisms responsible for galactic inflows and outflows. The well-known correlation between galaxy mass (luminosity) and metallicity was first proposed by Lequeux et al. (1979). Subsequent studies confirmed the existence of the luminosity-metallicity (LZ) relation (e.g., Rubin et al. 1984; Skillman et al. 1989; Zaritsky et al. 1994; Garnett 2002). Luminosity was used as a proxy for stellar mass in these studies because luminosity is a direct observable. Aided by new sophisticated stellar population models, stellar mass can now be robustly calculated, and a tighter correlation is found in the mass-metallicity (MZ) relation. Tremonti et al. (2004) established the MZ relation for local star-forming galaxies based on ∼ 5 × 10^5 Sloan Digital Sky Survey (SDSS) galaxies. At intermediate redshifts (0.4 < z < 1), the MZ relation has also been observed for large numbers of galaxies (> 100) (e.g., Savaglio et al. 2005; Cowie & Barger 2008; Lamareille et al. 2009). Zahid et al. (2011) derived the MZ relation for ∼ 10^3 galaxies from the Deep Extragalactic Evolutionary Probe 2 (DEEP2) survey, validating the MZ relation on a statistically significant level at z ∼ 0.8.
Current cosmological hydrodynamic simulations and semi-analytical models can predict the metallicity history of galaxies on a cosmic timescale (Nagamine et al. 2001; De Lucia et al. 2004; Bertone et al. 2007; Brooks et al. 2007; Davé & Oppenheimer 2007; Davé et al. 2011a,b). These models show that the shape of the MZ relation is particularly sensitive to the adopted feedback mechanisms. Cosmological hydrodynamic simulations with momentum-driven wind models provide a better match to observations than energy-driven wind models (Oppenheimer & Davé 2008; Finlator & Davé 2008; Davé et al. 2011a). However, these models have not been tested thoroughly against observations, especially at high redshifts (z > 1), where the MZ relation is still largely uncertain. As we move to higher redshifts, selection effects and small-number statistics haunt observational metallicity history studies. The difficulty becomes more severe in the so-called "redshift desert" (1 ≲ z ≲ 3), where the metallicity-sensitive optical emission lines have shifted into the sky-background-dominated near-infrared (NIR). Ironically, this redshift range harbors the richest information about galaxy evolution. It is during this redshift period (∼ 2−6 Gyr after the Big Bang) that the first massive structures condensed; the star formation rate (SFR), major merger activity, and black hole accretion rate peaked; and much of today's stellar mass was assembled and heavy elements were produced (Fan et al. 2001; Dickinson et al. 2003; Chapman et al. 2005; Hopkins & Beacom 2006; Grazian et al. 2007; Conselice et al. 2007; Reddy et al. 2008). It is therefore of crucial importance to explore NIR spectra for galaxies in this redshift range. Many spectroscopic redshift surveys have been carried out in recent years to study star-forming galaxies at z > 1 (e.g., Steidel et al. 2004; Law et al. 2009).
However, due to the low efficiency in the NIR, those spectroscopic surveys almost inevitably have to rely on color-selection criteria, and the biases in UV-selected galaxies tend to select the most massive and less dusty systems (e.g., Capak et al. 2004; Steidel et al. 2004; Reddy et al. 2006). Space telescopes can observe much deeper in the NIR and are able to probe a wider mass range. For example, the narrow-band Hα surveys based on the new WFC3 camera aboard the Hubble Space Telescope (HST) have located hundreds of Hα emitters up to z = 2.23, finding much fainter systems than observed from the ground (Sobral et al. 2009). However, the low-resolution spectra from the narrow-band filters preclude the derivation of physical properties such as metallicities, which can currently only be acquired from ground-based spectral analysis. Thanks to the advent of long-slit/multi-slit NIR spectrographs on 8−10 meter class telescopes, enormous progress has been made in the last decade in capturing galaxies in the redshift desert. For chemical abundance studies, full coverage of the rest-frame optical spectrum (4000−9000 Å) is usually mandatory for the most robust diagnostic analysis. For 1.5 ≲ z ≲ 3, the rest-frame optical spectra have shifted into the J, H, and K bands. It remains challenging and observationally expensive to obtain high signal-to-noise (S/N) NIR spectra from the ground, especially for "typical" targets at high z that are less massive than conventional color-selected galaxies. Therefore, previous investigations into the metallicity properties at 1 ≲ z ≲ 3 focused on stacked spectra, samples of massive luminous individual galaxies, or very small numbers of lower-mass galaxies (e.g., Erb et al. 2006; Förster Schreiber et al. 2006; Law et al. 2009; Erb et al. 2010; Yabe et al. 2012). The first mass-metallicity (MZ) relation for galaxies at z ∼ 2 was found by Erb et al. (2006) using the stacked spectra of 87 UV-selected galaxies divided into 6 mass bins.
Subsequently, mass and metallicity measurements have been reported for numerous individual galaxies at 1.5 < z < 3 (Förster Schreiber et al. 2006; Genzel et al. 2008; Hayashi et al. 2009; Law et al. 2009; Erb et al. 2010). These galaxies are selected using broadband colors in the UV (the Lyman break technique; Steidel et al. 1996, 2003) or using B-, z-, and K-band colors (BzK selection; Daddi et al. 2004). The Lyman break and BzK selection techniques favor galaxies that are luminous in the UV or blue, and may therefore be biased against low-luminosity (low-metallicity) galaxies and dusty (potentially metal-rich) galaxies. Because of these biases, galaxies selected in this way may not sample the full range in metallicity at redshift z > 1. A powerful alternative method to avoid these selection effects is to use strongly gravitationally lensed galaxies. In the case of galaxy cluster lensing, the total luminosity and area of the background sources can easily be boosted by ∼ 10−50 times, providing invaluable opportunities to obtain high-S/N spectra and probe intrinsically fainter systems within a reasonable amount of telescope time. In some cases, sufficient S/N can even be obtained in spatially resolved pixels to study the resolved metallicity of high-z galaxies (Swinbank et al. 2009; Jones et al. 2010; Yuan et al. 2011; Jones et al. 2012). Before 2011, metallicities had been reported for only a handful of individually lensed galaxies using optical emission lines at 1.5 < z < 3 (Pettini et al. 2001; Lemoine-Busserolle et al. 2003; Stark et al. 2008; Quider et al. 2009; Yuan & Kewley 2009; Jones et al. 2010). Fortunately, lensed galaxy samples with metallicity measurements have since increased significantly thanks to reliable lensing mass modeling and larger dedicated spectroscopic surveys of lensed galaxies on 8−10 meter telescopes (Wuyts et al. 2012; Christensen et al. 2012).
In 2008, we began a spectroscopic observational survey designed specifically to capture metallicity-sensitive lines for lensed galaxies. Taking advantage of the multi-object cryogenic NIR spectrograph (MOIRCS) on Subaru, we targeted well-known strong-lensing galaxy clusters to obtain metallicities for galaxies at 0.8 < z < 3. In this paper, we present the first metallicity measurement results from our survey. Combining our new data with existing data from the literature, we present a coherent observational picture of the metallicity history and mass-metallicity evolution of star-forming galaxies from z ∼ 0 to z ∼ 3. Kewley & Ellison (2008) have shown that the metallicity offsets among diagnostic methods can easily exceed the intrinsic trends. It is of paramount importance to make sure that relative metallicities are compared on the same metallicity calibration scale. In MZ relation studies, the methods used to derive the stellar mass can also cause systematic offsets (Zahid et al. 2011). Different SED fitting codes can yield a non-negligible mass offset, hence mimicking or hiding evolution in the MZ relation. In this paper, we derive the mass and metallicity of all samples using the same methods, ensuring that the observational data are compared in a self-consistent way. We compare our observed metallicity history with the latest predictions from cosmological hydrodynamical simulations. The paper is organized as follows: Section 2 describes our lensed sample survey and observations. Data reduction and analysis are summarized in Section 3. Section 4 presents an overview of all the samples we use in this study. Section 5 describes the methodology for the derived quantities. The metallicity evolution of star-forming galaxies with redshift is presented in Section 6. Section 7 presents the mass-metallicity relation for our lensed galaxies. Section 8 compares our results with previous work in the literature. Section 9 summarizes our results.
In the Appendix, we show the morphology, slit layout, and reduced 1D spectra for the lensed galaxies reported in our survey.

2. THE LEGMS SURVEY AND OBSERVATIONS

2.1. The Lensed Emission-Line Galaxy Metallicity Survey (LEGMS)

Our survey (LEGMS) aims to obtain oxygen abundances of lensed galaxies at 0.8 < z < 3. LEGMS has taken enormous advantage of the state-of-the-art instruments on Mauna Kea; four instruments have been utilized so far, including the Multi-Object InfraRed Camera and Spectrograph (MOIRCS). Observations for other clusters are ongoing and will be presented in future papers. The first step in constructing a lensed sample for slit spectroscopy is to find lensed candidates (arcs) that have spectroscopic redshifts from optical surveys. The number of known spectroscopically identified lensed galaxies at z > 1 is still of the order of a few tens. The limited number of lensed candidates makes it impractical to build a sample that is complete and well defined in mass. A mass-complete sample is the future goal of this project; our strategy for now is to observe as many arcs with known redshifts as possible. If we assume the AGN fraction is similar to that of local star-forming galaxies, then we expect ∼ 10% of our targets to be AGN dominated. Naturally, a lensed sample is biased towards highly magnified sources. However, because the largest magnifications are not biased towards intrinsically bright targets, lensed samples are less biased towards the intrinsically most luminous galaxies. Abell 1689 was chosen as the primary target for the MOIRCS observations because it has the largest number (∼ 100 arcs, or ∼ 30 source galaxies) of spectroscopically identified lensed arcs (Broadhurst et al. 2005; Frye et al. 2007; Limousin et al. 2007). Multi-slit spectroscopy in NIR lensing surveys greatly enhances the efficiency of spectroscopy of lensed galaxies in clusters.
Theoretically, ∼ 40 slits can be observed simultaneously on the two chips of MOIRCS, with a total field of view (FOV) of 4′ × 7′. In practice, the number of lensed targets on the slits is restricted by the strong-lensing area, slit orientations, and spectral coverage. For A1689, the lensed candidates cover an area of ∼ 2′ × 2′, well within the FOV of one chip. We design slit masks for chip 2, which has better sensitivity and fewer bad pixels than chip 1. There are ∼ 40 lensed images (∼ 25 individual galaxies) in the range 1.5 ≲ z ≲ 3 in our slit masks. We use the MOIRCS low-resolution (R ∼ 500) grisms, which have a spectral coverage of 0.9−1.78 µm in ZJ and 1.3−2.5 µm in HK. To maximize the detection efficiency, we give priority to targets in the specific redshift ranges for which all the strong emission lines from [O II] λ3727 to [N II] λ6584 can be captured in one grism configuration. For instance, the redshift range 2.5 ≲ z ≲ 3 is optimized for the HK500 grism, and 1.5 ≲ z ≲ 1.7 is optimized for the ZJ500 grism. From UT March 2008 to UT April 2010, we used 8 MOIRCS nights (6 usable nights) with 4 position angles (PAs) and 6 masks to observe 25 galaxies. Metallicity-quality spectra were obtained for 12 of the 25 targets. We also include one z > 1.5 galaxy from our observations of Abell 68. The PA is chosen to optimize the slit orientation along the targeted arcs' elongated directions. For arcs that are not oriented to match the PA, the slits are configured to be centered on the brightest knots of the arcs. We use slit widths of 0.8′′ and 1.0′′, with a variety of slit lengths for each lensed arc. For each mask, a bright galaxy/star is placed on one of the slits to trace the slit curvature and determine the offsets among individual exposures. Typical integrations for individual frames are 400 s, 600 s, and 900 s, depending on the level of skyline saturation. We use an ABBA dithering sequence along the slit direction, with a dithering length of 2.5′′.
The observational logs are summarized in Table 1.

3. DATA REDUCTION AND ANALYSIS

3.1. Reduction of 1D Spectra

The data reduction procedures from the raw mask data to the final wavelength- and flux-calibrated 1D spectra were realized by a set of IDL codes called MOIRCSMOSRED. The codes were originally scripted by Youichi Ohyama; T.-T. Yuan extended the code to incorporate new skyline subtraction (see, e.g., Henry et al. 2010, for a description of utilizing MOIRCSMOSRED). We use the newest version (April 2011) of MOIRCSMOSRED to reduce the data in this work. The sky subtraction is optimized as follows. For each A_i frame, we subtract a sky frame denoted α(B_{i−1} + B_{i+1})/2, where B_{i−1} and B_{i+1} are the science frames taken before and after the A_i exposure. The scale parameter α is obtained by searching through the range 0.5−2.0 with an increment of 0.0001. The best α is the one for which the root mean square (RMS) of the residual R = A_i − α(B_{i−1} + B_{i+1})/2 is minimal within a user-defined wavelength region [λ_1, λ_2]. We find that this sky subtraction method yields smaller sky OH line residuals (by ∼ 20%) than conventional A−B methods. We also compared it with other skyline subtraction methods in the literature (Kelson 2003; Davies 2007). We find the sky residuals from our method are comparable to those from the Kelson (2003) and Davies (2007) methods to within 5% in general cases. However, in cases where the emission line falls on top of a strong skyline, our method is more stable and improves the skyline residual by ∼ 10% relative to the other two methods. Wavelength calibration is carried out by identifying skylines for the ZJ grism. For the HK grism, we use argon lines to calibrate the wavelength since only a few skylines are available in the HK band. The argon-line-calibrated wavelength is then re-calibrated with the available skylines in HK to determine the instrumental shifts between lamp and science exposures.
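The optimized sky-subtraction step described above reduces to a one-parameter grid search. A minimal sketch (the toy frames, the excluded wavelength window, and the noise model are placeholders, not survey data):

```python
import numpy as np

def best_sky_scale(A, B_prev, B_next, mask,
                   alphas=np.arange(0.5, 2.0001, 0.0001)):
    """Find the scale factor alpha minimizing the RMS of the residual
    R = A - alpha*(B_prev + B_next)/2 inside a wavelength mask,
    following the sky-subtraction scheme described in the text."""
    sky = 0.5 * (B_prev + B_next)
    a, s = A[mask], sky[mask]
    rms = [np.sqrt(np.mean((a - al * s) ** 2)) for al in alphas]
    return alphas[int(np.argmin(rms))]

# Toy frames: a sky spectrum, with a faint object emission line in A.
rng = np.random.default_rng(0)
sky_true = 100 + 50 * rng.random(2048)
A = 1.3 * sky_true                       # sky level varied by 30%
A[1000:1005] += 20.0                     # object emission line
B_prev, B_next = sky_true.copy(), sky_true.copy()
mask = np.ones(2048, bool)
mask[990:1015] = False                   # exclude the line region
alpha = best_sky_scale(A, B_prev, B_next, mask)
print(alpha)                             # close to 1.3
```

Since the RMS of R is quadratic in α, the grid minimum coincides with the closed-form least-squares solution; the fine 0.0001 increment simply samples that quadratic densely.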
Note that the RMS of the wavelength calibration using a 3rd-order polynomial fit is ∼ 10−20 Å, corresponding to a systematic redshift uncertainty of 0.006. A sample of A0 stars selected from the UKIRT photometric standards was observed at similar airmass to the targets. These stars were used for both telluric absorption corrections and flux calibrations. We use the prescriptions of Erb et al. (2003) for flux calibration. As noted in Erb et al. (2003), absolute flux calibration in the NIR is difficult, with typical uncertainties of ∼ 20%. We note that this uncertainty is even larger for lensed samples observed in multi-slit mode because of the complicated aperture effects. The uncertainties in the flux calibration are not a concern for our metallicity analysis, where only line ratios are involved. However, these errors are a major concern for calculating SFRs. The uncertainties from the multi-slit aperture effects can cause the SFRs to change by a factor of 2−3. For this reason, we refrain from any quantitative analysis of SFRs in this work.

3.2. Line Fitting

The emission lines are fitted with Gaussian profiles. For the spatially unresolved spectra, the aperture used to extract the spectrum is determined by measuring the Gaussian profile of the wavelength-collapsed spectrum. Some of the lensed targets (∼ 10%) are elongated and spatially resolved in the slit spectra; however, because of the low surface brightness and thus very low S/N per pixel, we are unable to obtain usable spatially resolved spectra. For those targets, we make an initial guess for the width of the spatial profile and force a Gaussian fit, then extract the integrated spectrum using the aperture determined from the FWHM of the Gaussian profile. For widely separated lines such as [O II] λ3727 and Hβ λ4861, single Gaussian functions are fitted with 4 free parameters: the centroid (or the redshift), the line width, the line flux, and the continuum.
The [O III] λλ4959,5007 doublet is initially fitted as a double Gaussian function with 6 free parameters: the two centroids, the two line widths, the two fluxes, and the continuum. In cases where the [O III] λ4959 line is too weak, its centroid and line velocity width are fixed to be the same as for [O III] λ5007, and its flux is fixed to 1/3 of the [O III] λ5007 line (Osterbrock 1989). A triple Gaussian function is fitted simultaneously to the three adjacent emission lines [N II] λ6548, λ6583 and Hα. The centroids and velocity widths of the [N II] λ6548, λ6583 lines are constrained by the velocity width of Hα λ6563, and the ratio of [N II] λ6548 to [N II] λ6583 is constrained to the theoretical value of 1/3 given in Osterbrock (1989). The line profile fitting is conducted using a χ² minimization procedure which uses the inverse of the sky OH emission as the weighting function. The S/N per pixel is calculated from the χ² of the fit. The measured emission line fluxes and line ratios are listed in Table 4. The final reduced 1D spectra are shown in the Appendix.

3.3. Lensing Magnification

Because the lensing magnification (µ) is not a direct function of wavelength, line ratio measurements do not require prior knowledge of the lensing magnification. However, µ is needed for inferring other physical properties such as the intrinsic fluxes, masses, and source morphologies. Parametric models of the mass distribution in the clusters Abell 68 and Abell 1689 were constructed using the Lenstool software (Kneib et al. 1993; Jullo et al. 2007). The best-fit models have been previously published in Richard et al. (2007) and Limousin et al. (2007). As detailed in Limousin et al. (2007), Lenstool uses Bayesian optimization with a Monte Carlo Markov Chain (MCMC) sampler, which provides a family of best models sampling the posterior probability distribution of each parameter.
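The tied-parameter fit described in the Line Fitting subsection can be sketched with scipy. Here a triple Gaussian for [N II]+Hα hard-wires the shared redshift and width and the 1/3 [N II] flux ratio; the synthetic spectrum, noise level, and line fluxes are invented for illustration (wavelengths in Å):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, mu, sigma, flux):
    """Gaussian with unit-normalized area times the line flux."""
    return flux / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def nii_halpha(x, z, sigma, f_ha, f_nii, cont):
    """Triple Gaussian: Halpha + [N II] 6548,6583 with shared redshift
    and width; the [N II]6548 flux is tied to 1/3 of [N II]6583."""
    ha = gauss(x, 6562.8 * (1 + z), sigma, f_ha)
    n2b = gauss(x, 6583.4 * (1 + z), sigma, f_nii)
    n2a = gauss(x, 6548.0 * (1 + z), sigma, f_nii / 3.0)
    return cont + ha + n2a + n2b

# Synthetic spectrum at z = 2.0 with [N II]/Halpha = 0.15
x = np.linspace(19550, 19850, 600)
truth = (2.0, 8.0, 100.0, 15.0, 1.0)
rng = np.random.default_rng(1)
y = nii_halpha(x, *truth) + rng.normal(0, 0.05, x.size)

popt, pcov = curve_fit(nii_halpha, x, y, p0=(2.0001, 6.0, 80.0, 10.0, 0.0))
n2_index = np.log10(popt[3] / popt[2])   # N2 = log([N II]6583/Halpha)
print(popt[0], n2_index)
```

Tying the weak line to its stronger doublet partner, exactly as the text prescribes for [O III] λ4959 and [N II] λ6548, keeps the fit stable when the weak component is buried in noise.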
In particular, we use this family of best models to derive the magnification µ, and its relative error, associated with each lensed source. Typical errors on µ are ∼ 10% for Abell 1689 and Abell 68.

3.4. Photometry

We determine the photometry for the lensed galaxies in A1689 using 4-band HST imaging data, 1-band MOIRCS imaging data, and 2-channel Spitzer IRAC data at 3.6 and 4.5 µm. We obtained a 5,000 s image exposure of A1689 in the MOIRCS K_s filter, at a depth of 24 mag, with a scale of 0.117′′ per pixel. The image was reduced using MCSRED in IRAF, written by the MOIRCS supporting astronomer Ichi Tanaka. The photometry is calibrated using the 2MASS stars located in the field. The ACS F475W, F625W, F775W, and F850LP data are obtained from the HST archive. The HST photometry is determined using SExtractor (Bertin & Arnouts 1996) with parameters adjusted to detect the faint background sources. The F775W filter is used as the detection image, using a 1.0′′ aperture. The IRAC data are obtained from the Spitzer archive and are reduced and drizzled to a pixel scale of 0.6′′ pixel^−1. In order to include the IRAC photometry, we convolved the HST and MOIRCS images with the IRAC point spread functions (PSFs) derived from unsaturated stars. All photometric data are measured using a 3.0′′ radius aperture. Note that we only consider sources that are not contaminated by nearby bright galaxies: ∼ 70% of our sources have IRAC photometry (Table 5). Typical errors for the IRAC-band photometry are 0.3 mag, with uncertainties mainly from the aperture correction and contamination by neighboring galaxies. Typical errors for the ACS and MOIRCS bands are 0.15 mag, with uncertainties mainly from the Poisson noise and absolute zero-point uncertainties (Wuyts et al. 2012). We refer to Richard et al. (2012, in prep) for the full catalog of the lensing magnification and photometry of the lensed sources in Abell 1689.
4. SUPPLEMENTARY SAMPLES

In addition to our lensed targets observed in LEGMS, we also include literature data for complementary lensed and non-lensed samples at both low and high redshift. The observational data for individually measured metallicities at z > 1.5 are still scarce, and caution needs to be taken when using them for comparison. The different metallicity and mass derivation methods used for different samples can give large systematic discrepancies and provide misleading results. For this reason, we only include literature data that have robust measurements and sufficient information for consistently recalculating the stellar masses and metallicities using our own methods. Thus, in general, stacked data and objects with lower/upper limits in either line ratios or masses are not chosen. The one exception is the stacked data of Erb et al. (2006), as it is the most widely used comparison sample at z ∼ 2. The samples used in this work are: (1) The Sloan Digital Sky Survey (SDSS) sample (z ∼ 0.07). We use the SDSS sample (Abazajian et al. 2009, http://www.mpa-garching.mpg.de/SDSS/DR7/) defined by Zahid et al. (2011). The mass derivation method used in Zahid et al. (2011) is the same as we use in this work. All SDSS metallicities are recalculated using the PP04N2 method, which uses an empirical fit to the [N II] and Hα line ratios of H II regions (Pettini & Pagel 2004). (3) The UV-selected sample (z ∼ 2). We use the stacked data of Erb et al. (2006). The metallicity diagnostic used by Erb et al. (2006) is the PP04N2 method, so no recalculation is needed. We offset the stellar mass scale of Erb et al. (2006) by −0.3 dex to match the mass derivation method used in this work (Zahid et al. 2012). This offset accounts for the different initial mass function (IMF) and stellar evolution model parameters applied by Erb et al. (2006). (4) The lensed sample (1 < z < 3).
Besides the 11 lensed galaxies from our LEGMS survey in Abell 1689, we include 1 lensed source (z = 1.762) from our MOIRCS data on Abell 68 and 1 lensed spiral (z = 1.49) from Yuan et al. (2011). We also include 10 lensed galaxies from Wuyts et al. (2012) and 3 lensed galaxies from Richard et al. (2011), since these 13 galaxies have [N II] and Hα measurements, as well as photometric data for recalculating stellar masses. We require all emission lines from the literature to have S/N > 3 for quantifying the metallicity of 1 < z < 3 galaxies. Upper-limit metallicities are found for 6 of the lensed targets from our LEGMS survey. Altogether, the lensed sample is composed of 25 sources, 12 of which (6/12 upper limits) are new observations from this work. Upper-limit metallicities are not used in our quantitative analysis. The methods used to derive stellar mass and metallicity are discussed in detail in Section 5.

Optical Classification

We use the standard optical diagnostic (BPT) diagram to exclude targets that are dominated by AGN (Baldwin et al. 1981; Veilleux & Osterbrock 1987; Kewley et al. 2006). For all 26 lensed targets in our LEGMS sample, we find 1 target that could be contaminated by AGN (B8.2). The fraction of AGN in our sample is therefore ∼ 8%, which is similar to the fraction (∼ 7%) in the local SDSS sample (Kewley et al. 2006). We also find that the line ratios of the high-z lensed sample have a systematic offset on the BPT diagram, as found in Shapley et al. (2005) and Richard et al. (2011). The redshift evolution of the BPT diagram will be reported in Kewley et al. (2013, in preparation).

Stellar Masses

We use the software LE PHARE (Ilbert et al. 2009) to determine the stellar masses. LE PHARE is a photometric redshift and simulation package based on the population synthesis models of Bruzual & Charlot (2003).
If the redshift is known and held fixed, LE PHARE finds the best-fit SED through a χ² minimization process and returns physical parameters such as stellar mass, SFR, and extinction. We choose the initial mass function (IMF) of Chabrier (2003) and the Calzetti et al. (2000) attenuation law, with E(B − V) ranging from 0 to 2 and an exponentially decreasing SFR (SFR ∝ e^(−t/τ)) with τ varying between 0 and 13 Gyr. The errors caused by emission line contamination are taken into account by manually increasing the uncertainties in the photometric bands where emission lines are located. The uncertainties are scaled according to the emission line fluxes measured by MOIRCS. The stellar masses derived from the emission-line-corrected photometry are consistent with those without emission line correction, albeit with larger errors in a few cases (∼ 0.1 dex in log space). We use the emission-line-corrected photometric stellar masses in the following analysis.

Metallicity Diagnostics

The abundance of oxygen (12 + log(O/H)) is used as a proxy for the overall metallicity of H II regions in galaxies. The oxygen abundance can be inferred from the strong recombination lines of hydrogen atoms and collisionally excited metal lines (e.g., Kewley & Dopita 2002). Before doing any metallicity comparisons across different samples and redshifts, it is essential to convert all metallicities to the same base calibration. The discrepancy among different diagnostics can be as large as 0.7 dex for a given mass, large enough to mimic or hide any intrinsic observational trends. Kewley & Ellison (2008) (KE08) have shown that both the shape and the amplitude of the MZ relation change substantially with different diagnostics. For this work, we convert all metallicities to the PP04N2 method using the prescriptions from KE08. For our lensed targets with only [N II] and Hα, we use the N2 = log([N II] λ6583/Hα) index, as calibrated by Pettini & Pagel (2004). For the remaining targets, we use the R23 index calibrated by Kobulnicky & Kewley (2004) (KK04); where the available line information is not sufficient to break the R23 degeneracy, we calculate both the upper and lower branch metallicities and assign the statistical errors of the metallicities as the range of the upper and lower branches. The KK04 R23 metallicity is then converted to the PP04N2 method using the KE08 prescriptions. The line fluxes and metallicities are listed in Table 4. For the literature data, we have recalculated the metallicities in the PP04N2 scheme.

FIG. 2.-The Zz plot: metallicity history of star-forming galaxies from redshift 0 to 3. The SDSS and DEEP2 samples (black dots) are taken from Zahid et al. (2011). The SDSS data are plotted in bins to reduce visual crowding. The lensed galaxies are plotted in blue (upper-limit objects in green arrows), with different lensed samples shown in different symbols (see Figure 6 for the legends of the different lensed samples). The purple "bowties" show the bootstrapping mean (filled symbol) and median (empty symbol) metallicities and the 1σ standard deviation of the mean and median, whereas the orange dashed error bars show the 1σ scatter of the data. For the SDSS and DEEP2 samples, the 1σ errors of the median metallicities are 0.001 and 0.006 (indiscernible from the figure), whereas for the lensed sample the 1σ scatter of the median metallicity is 0.067. Upper limits are excluded from the median and error calculations. For comparison, we also show the mean metallicity of the UV-selected galaxies from Erb et al. (2006) (symbol: the black bowtie). The 6 panels show samples in different mass ranges. The red dotted and dashed lines are the model-predicted median and 1σ scatter (defined as including 68% of the data) of the SFR-weighted gas metallicity in simulated galaxies (Davé et al. 2011b).
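The PP04N2 conversion used here, together with the error floor described in the next paragraph, can be sketched in a few lines. The linear N2 calibration of Pettini & Pagel (2004), 12 + log(O/H) = 8.90 + 0.57 × N2, is a published relation; the function names and the example fluxes are illustrative:

```python
import numpy as np

def pp04_n2_metallicity(f_nii, f_halpha):
    """PP04N2: 12 + log(O/H) = 8.90 + 0.57 * N2, with
    N2 = log10([N II] lambda6583 / Halpha) (Pettini & Pagel 2004)."""
    n2 = np.log10(f_nii / f_halpha)
    return 8.90 + 0.57 * n2

def metallicity_error(f_nii, e_nii, f_halpha, e_halpha, floor=0.18):
    """Propagate the [N II] and Halpha flux errors through N2 and
    apply the 0.18 dex calibration-dispersion floor of the PP04N2
    method (see text)."""
    # d(log10 x) = dx / (x ln 10); independent errors added in quadrature
    sigma_n2 = np.hypot(e_nii / f_nii, e_halpha / f_halpha) / np.log(10)
    return max(0.57 * sigma_n2, floor)

# [N II]/Halpha = 0.1 -> N2 = -1.0 -> 12 + log(O/H) = 8.33
print(pp04_n2_metallicity(1.0, 10.0))
# 10% flux errors give a statistical error well below the 0.18 dex floor
print(metallicity_error(1.0, 0.1, 10.0, 1.0))
```

Note that the calibration is only valid over roughly −2.5 < N2 < −0.3; outside that range the linear relation saturates and should not be extrapolated.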
The statistical metallicity uncertainties are calculated by propagating the flux errors of the [N II] and Hα lines. The metallicity calibration of the PP04N2 method itself has a 1σ dispersion of 0.18 dex (Pettini & Pagel 2004; Erb et al. 2006). Therefore, for individual galaxies that have statistical metallicity uncertainties of less than 0.18 dex, we assign errors of 0.18 dex. Note that we are not comparing absolute metallicities between galaxies, as these depend on the accuracy of the calibration methods. However, by recalculating all metallicities with the same calibration diagnostic, relative metallicities can be compared reliably. The systematic error of relative metallicities is < 0.07 dex for strong-line methods (Kewley & Ellison 2008).

THE COSMIC EVOLUTION OF METALLICITY FOR STAR-FORMING GALAXIES

6.1. The Zz Relation

In this section, we present the observational investigation into the cosmic evolution of metallicity for star-forming galaxies from redshift 0 to 3. The metallicity in the local universe is represented by the SDSS sample (20577 objects, ⟨z⟩ = 0.072 ± 0.016). The metallicity in the intermediate-redshift universe is represented by the DEEP2 sample (1635 objects, ⟨z⟩ = 0.78 ± 0.02). For redshift 1 ≲ z ≲ 3, we use 19 lensed galaxies (plus 6 upper-limit measurements; ⟨z⟩ = 1.91 ± 0.61) to infer the metallicity range. The redshift distributions for the SDSS and DEEP2 samples are very narrow (∆z ∼ 0.02), and their mean and median redshifts are identical to within 0.001. For the lensed sample, the median redshift is 2.07, which is 0.16 higher than the mean redshift. There are two z ∼ 0.9 objects in the lensed sample; if these two objects are excluded, the mean and median redshifts for the lensed sample are ⟨z⟩ = 2.03 ± 0.54 and z_median = 2.09 (see Table 2). The overall metallicity distributions of the SDSS, DEEP2, and lensed samples are shown in Figure 1.
Since the z > 1 sample size is 2-3 orders of magnitude smaller than that of the z < 1 samples, we use a bootstrapping process to derive the mean and median metallicities of each sample. Assuming the measured metallicity distribution of each sample is representative of its parent population, we draw from the initial sample a random subset and repeat the process 50,000 times. We use the 50,000 replicated samples to measure the mean, median, and standard deviations of the initial sample. This method prevents artifacts from small-number statistics and provides a robust estimation of the median, mean, and errors, especially for the high-z lensed sample. The fraction of low-mass (M⋆ < 10^9 M⊙) galaxies is largest (31%) in the lensed sample, compared to 9% and 5% in the SDSS and DEEP2 samples, respectively. Excluding the low-mass galaxies does not notably change the median metallicity of the SDSS and DEEP2 samples (∼ 0.01 dex), while it increases the median metallicity of the lensed sample by ∼ 0.05 dex. To investigate whether the metallicity evolution is different for various stellar mass ranges, we separate the samples into different mass ranges and derive the mean and median metallicities (Table 2). The mass bins of 10^9 M⊙ < M⋆ < 10^9.5 M⊙ and 10^9.5 M⊙ < M⋆ < 10^11 M⊙ are chosen such that there are similar numbers of lensed galaxies in each bin. Alternatively, the mass bins of 10^9 M⊙ < M⋆ < 10^10 M⊙ and 10^10 M⊙ < M⋆ < 10^11 M⊙ are chosen to span equal mass scales. We plot the metallicity (Z) of all samples as a function of redshift z in Figure 2 (dubbed the Zz plot hereafter). The first panel shows the complete observational data used in this study. The following three panels show the data and model predictions in different mass ranges. The samples at local and intermediate redshifts are large enough that the 1σ errors of the mean and median metallicity are smaller than the symbol sizes on the Zz plot (0.001-0.006 dex).
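The bootstrapping procedure described above can be sketched as follows; the metallicity values are illustrative placeholders, not the measured sample:

```python
import numpy as np

def bootstrap_stats(metallicities, n_boot=50000, seed=42):
    """Bootstrap the mean and median of a small metallicity sample:
    draw n_boot resamples with replacement, then report the average of
    each statistic and its standard deviation across the resamples."""
    rng = np.random.default_rng(seed)
    z = np.asarray(metallicities, dtype=float)
    samples = rng.choice(z, size=(n_boot, z.size), replace=True)
    means = samples.mean(axis=1)
    medians = np.median(samples, axis=1)
    return (means.mean(), means.std()), (medians.mean(), medians.std())

z_lensed = [8.1, 8.3, 8.2, 8.5, 8.0, 8.4]   # illustrative values only
(mean, e_mean), (median, e_median) = bootstrap_stats(z_lensed)
print(mean, e_mean, median, e_median)
```

For a sample this small the bootstrap standard deviation of the mean tracks the usual standard error, but the same machinery also yields an error on the median, which has no simple closed form.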
Although the z > 1 samples are still composed of a relatively small number of objects, we suggest that the lensed galaxies and their bootstrapped mean and median values more closely represent the average metallicities of star-forming galaxies at z > 1 than Lyman-break or B-band magnitude-limited samples, because the lensed galaxies are selected based on magnification rather than colors. We do note, however, that there is still a magnitude limit and flux limit for each lensed galaxy. We derive the metallicity evolution in units of "dex per redshift" and "dex per Gyr" using both the mean and median values. The metallicity evolution can be characterized by the slope (dZ/dz) of the Zz plot. We compute dZ/dz for two redshift ranges: z ∼ 0 → 0.8 (SDSS to DEEP2) and z ∼ 0.8 → ∼2.5 (DEEP2 to Lensed galaxies). As a comparison, we also derive dZ/dz from z ∼ 0.8 to 2.5 using the DEEP2 and Erb06 samples (yellow circles/lines in Figure 3). We derive separate evolutions for different mass bins. We show our results in Figure 3. A positive metallicity evolution, i.e., metals enriching galaxies from high-z to the local universe, is robustly found in all mass bins from z ∼ 0.8 → 0. This positive evolution is indicated by the negative values of dZ/dz (or dZ/dt, in dex Gyr−1) in Figure 3. The negative signs (both mean and median) of dZ/dz are significant at > 5σ of the measurement errors from z ∼ 0.8 → 0. From z ∼ 2.5 to 0.8, however, dZ/dz is only marginally smaller than zero, at the ∼1σ level, from the Lensed → DEEP2 samples. If the Erb06 → DEEP2 samples are used instead, the metallicity evolution (dZ/dz) from z ∼ 2.5 to 0.8 is consistent with zero within ∼1σ of the measurement errors. The absence of metallicity evolution from the z ∼ 2 Erb06 sample to the z ∼ 0.8 DEEP2 sample may be due to the UV-selected sample of Erb06 being biased towards more metal-rich galaxies. The right column of Figure 3 is used to interpret the deceleration/acceleration in metal enrichment.
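The slope computation above, together with the Mann-Whitney comparison of enrichment rates discussed in the next paragraph, can be sketched as follows. Every number here is an illustrative placeholder (redshifts, metallicities, and the synthetic per-bin rate samples), not a value from the paper:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def slope(z1, met1, z2, met2):
    """dZ/dz between two sample medians: (met2 - met1) / (z2 - z1)."""
    return (met2 - met1) / (z2 - z1)

# Illustrative median redshifts and metallicities for the three samples
z = {"SDSS": 0.07, "DEEP2": 0.78, "Lensed": 2.07}
met = {"SDSS": 8.63, "DEEP2": 8.52, "Lensed": 8.31}

dzdz_low = slope(z["SDSS"], met["SDSS"], z["DEEP2"], met["DEEP2"])
dzdz_high = slope(z["DEEP2"], met["DEEP2"], z["Lensed"], met["Lensed"])
print(dzdz_low, dzdz_high)   # both negative: enrichment toward z = 0

# Mann-Whitney check that per-bin enrichment rates (dex/Gyr) differ
# between the two intervals; the rate samples are synthetic stand-ins.
rng = np.random.default_rng(0)
rates_hi = rng.normal(0.055, 0.014, 20)   # z ~ 2.5 -> 0.8
rates_lo = rng.normal(0.022, 0.010, 20)   # z ~ 0.8 -> 0
stat, p = mannwhitneyu(rates_hi, rates_lo, alternative="greater")
print("one-sided p =", p)
```

The Mann-Whitney test is rank-based, so it makes no normality assumption about the rate distributions, which is appropriate for the small per-bin samples involved here.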
Deceleration means that the metal enrichment rate (dZ/dt, in dex Gyr−1) is dropping from high-z to low-z. Using our lensed galaxies, the mean rise in metallicity is 0.055 ± 0.014 dex Gyr−1 for z ∼ 2.5 → 0.8, and 0.022 ± 0.001 dex Gyr−1 for z ∼ 0.8 → 0. The Mann-Whitney test shows that the mean rise in metallicity is larger for z ∼ 2.5 → 0.8 than for z ∼ 0.8 → 0 at a significance level of 95% for the high mass bins (10^9.5 M⊙ < M⋆ < 10^11 M⊙). For the lower mass bins, the hypothesis that the metal enrichment rates are the same for z ∼ 2.5 → 0.8 and z ∼ 0.8 → 0 cannot be rejected at the 95% confidence level, i.e., there is no detected difference in the metal enrichment rates for the lower mass bin. Interestingly, if the Erb06 sample is used instead of the lensed sample, the hypothesis that the metal enrichment rates are the same for z ∼ 2.5 → 0.8 and z ∼ 0.8 → 0 cannot be rejected at the 95% confidence level for any mass bin. This means that, statistically, the metal enrichment rates are the same for z ∼ 2.5 → 0.8 and z ∼ 0.8 → 0 for all mass bins from the Erb06 → DEEP2 → SDSS samples. The clear trend of the average/median metallicity in galaxies rising from high redshift to the local universe is not surprising. Observations based on absorption lines have shown a continuing fall in metallicity using the damped Lyα absorption (DLA) galaxies at higher redshifts (z ∼ 2 − 5) (e.g., Songaila & Cowie 2002; Rafelski et al. 2012). There are several physical reasons to expect that high-z galaxies are less metal-enriched: (1) high-z galaxies are younger, have higher gas fractions, and have gone through fewer generations of star formation than local galaxies; (2) high-z galaxies may still be accreting a large amount of metal-poor pristine gas from the environment, and hence have lower average metallicities; (3) high-z galaxies may have more powerful outflows that drive the metals out of the galaxy.
It is likely that all of these mechanisms have played a role in diluting the metal content at high redshifts.

Comparison between the Zz Relation and Theory

We compare our observations with model predictions from the cosmological hydrodynamic simulations of Davé et al. (2011a,b). These models are built within a canonical hierarchical structure formation context. The models take into account the important feedback from outflows by implementing an observationally motivated momentum-driven wind model (Oppenheimer & Davé 2008). The effects of inflows and mergers are included in the hierarchical structure formation of the simulations. Galactic outflows are dealt with specifically in the momentum-driven wind models. Davé & Oppenheimer (2007) found that outflows are key to regulating metallicity, while inflows play a second-order regulating role. The model of Davé et al. (2011a) focuses on the metal content of star-forming galaxies. Compared with the previous work of Davé & Oppenheimer (2007), the new simulations employ the most up-to-date treatment of supernova and AGB star enrichment, and include an improved version of the momentum-driven wind models (the vzw model) in which the wind properties are derived from the host galaxy masses (Oppenheimer & Davé 2008). The model metallicity in Davé et al. (2011a) is defined as the SFR-weighted metallicity of all gas particles in the identified simulated galaxies. This model metallicity can be compared directly with the metallicity we observe in star-forming galaxies after a constant-offset normalization to account for the uncertainty in the absolute metallicity scale (Kewley & Ellison 2008). The offset is obtained by matching the model metallicity with the SDSS metallicity. Note that the model has a galaxy mass resolution limit of M⋆ ∼ 10^9 M⊙. For the Zz plot, we normalize the model metallicity with the median SDSS metallicity computed from all SDSS galaxies > 10^9 M⊙.
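The constant-offset normalization is a one-line operation: shift the model metallicities so their median matches the observed SDSS median. The arrays below are placeholders, not model outputs:

```python
import numpy as np

def normalize_model(model_met, obs_met_median):
    """Apply a constant offset so the model's median metallicity
    matches the observed (SDSS) median, absorbing the uncertainty
    in the absolute metallicity scale."""
    model_met = np.asarray(model_met, dtype=float)
    offset = obs_met_median - np.median(model_met)
    return model_met + offset

model = np.array([8.9, 9.0, 9.1, 9.2])      # placeholder model metallicities
normalized = normalize_model(model, 8.63)    # 8.63: placeholder SDSS median
print(np.median(normalized))
```

Because only a constant is added, the shape and scatter of the model relation are untouched; only the absolute scale is anchored to the observations.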
For the MZ relation in Section 7, we normalize the model metallicity with the SDSS metallicity at a stellar mass of 10^10 M⊙. We compute the median metallicities of the Davé et al. (2011a) model outputs in redshift bins from z = 0 to z = 3 with an increment of 0.1. The median metallicities and 1σ spread (defined as including 68% of the data) of the model at each redshift are overlaid on the observational data in the Zz plot. We compare our observations with the model predictions in 3 ways:

(1) We compare the observed median metallicity with the model median metallicity. We see that for the lower mass bins (10^{9−9.5}, 10^{9−10} M⊙), the median of the model metallicity is consistent with the median of the observed metallicity within the observational errors. However, for the higher mass bins, the model over-predicts the metallicity at all redshifts. The over-prediction is most significant in the highest mass bin of 10^{10−11} M⊙, where the Student's t-statistic shows that the model distributions have significantly different means from the observational data at all redshifts, with probabilities of the difference arising by chance of < 10^−8, < 10^−8, 1.7%, 5.7% for the SDSS, DEEP2, Lensed, and Erb06 samples, respectively. For the alternative high-mass bin of 10^{9.5−11} M⊙, the model also over-predicts the observed metallicity except for the Erb06 sample, with a chance difference between the model and observations of < 10^−8, < 10^−8, 1.7%, 8.9%, 93% for the SDSS, DEEP2, the Lensed, and the Erb06 samples respectively.

FIG. 3.-The left column shows dZ/dz in units of ∆dex per redshift, whereas the right column is in units of ∆dex per Gyr. We derive dZ/dz for the SDSS to DEEP2 (z ∼ 0 to 0.8), and the DEEP2 to Lensed (z ∼ 0.8 to 2.0) samples, respectively (black squares/lines). As a comparison, we also derive dZ/dz from z ∼ 0.8 to 2.0 using the DEEP2 and Erb06 samples (yellow circles/lines). Filled and empty squares are results from the mean and median quantities, respectively. The model prediction (using medians) from the cosmological hydrodynamical simulation of Davé et al. (2011a) is shown as red stars. The second to fifth rows show dZ/dz in different mass ranges. The first row illustrates the interpretation of dZ/dz in the redshift and cosmic-time frames. A negative value of dZ/dz means a positive metal enrichment from high redshift to the local universe. The negative slope of dZ/dz versus cosmic time (right column) indicates a deceleration in metal enrichment from high-z to low-z.

(2) We compare the scatter of the observed metallicity (orange error bars on the Zz plot) with the scatter of the models (red dashed lines). For the full samples, the 1σ scatter of the data from SDSS (z ∼ 0), DEEP2 (z ∼ 0.8), and the Lensed sample (z ∼ 2) is 0.13, 0.15, and 0.15 dex, whereas the 1σ model scatter is 0.23, 0.19, and 0.14 dex. We find that the observed metallicity scatter increases systematically as a function of redshift for the high mass bins, whereas the model does not predict such a trend: from SDSS → DEEP2 → the Lensed sample, the observed scatter is 0.10, 0.14, 0.17 dex (c.f. model 0.17, 0.15, 0.12 dex) for 10^{9.5−11} M⊙, and 0.07, 0.12, 0.18 dex (c.f. model 0.12, 0.11, 0.10 dex) for 10^{10−11} M⊙. Our observed scatter is in line with the work of Nagamine et al. (2001), in which the predicted stellar metallicity scatter increases with redshift. Note that our lensed samples are still small and have large measurement errors in metallicity (∼ 0.2 dex). The discrepancy between the observed scatter and the models needs to be confirmed with a larger sample.

(3) We compare the observed slope (dZ/dz) of the Zz plot with the model predictions (Figure 3). We find that the observed dZ/dz is consistent with the model prediction within the observational errors for the undivided sample of all masses > 10^9.0 M⊙.
However, when divided into mass bins, the model predicts a slower enrichment than observed from z ∼ 0 → 0.8 for the lower mass bin of 10^{9.0−9.5} M⊙, and from z ∼ 0.8 → 2.5 for the higher mass bin of 10^{9.5−11} M⊙, at a 95% significance level. Davé et al. (2011) showed that their models over-predict the metallicities of the highest-mass galaxies in the SDSS. They suggested that either (1) an additional feedback mechanism might be needed to suppress star formation in the most massive galaxies; or (2) wind recycling may be bringing in highly enriched material that elevates the galaxy metallicities. It is unclear from our data which (if any) of these interpretations is correct. Additional theoretical investigations specifically focusing on metallicities in the most massive active galaxies are needed to determine the true nature of this discrepancy.

The Observational Limit of the Mass-Metallicity Relation

For the N2-based metallicity, there is a limiting metallicity below which the [N II] line is too weak to be detected. Since [N II] is the weakest of the Hα + [N II] lines, it is the flux of [N II] that drives the metallicity detection limit. Thus, for a given instrument sensitivity, there is a region of the mass-metallicity relation that is observationally unobtainable. Based on a few simple assumptions, we can derive the boundary of this region as follows. Observations have shown that there is a positive correlation between the stellar mass M⋆ and SFR (Noeske et al. 2007b; Elbaz et al. 2011; Wuyts et al. 2011). One explanation for the M⋆ vs. SFR relation is that more massive galaxies have an earlier onset of initial star formation, with shorter timescales of exponential decay (Noeske et al. 2007a; Zahid et al. 2012). The shape and amplitude of the SFR vs. M⋆ relation at a given redshift z can be characterized by two parameters, δ(z) and γ(z), where δ(z) is the logarithm of the SFR at 10^10 M⋆ and γ(z) is the power-law index (Zahid et al. 2012).
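The detection-boundary construction described here and in the following paragraphs can be sketched end-to-end under explicit assumptions: the δ(z)/γ(z) parameterization of the SFR-M⋆ relation, a Kennicutt (1998) Hα-SFR conversion, an assumed luminosity distance, and the PP04N2 calibration. None of the numerical values below (δ, γ, the flux limit, or the distance) are the paper's fitted parameters; they are placeholders chosen to make the sketch runnable:

```python
import numpy as np

def mz_detection_limit(log_mstar, delta, gamma, f_nii_limit, d_l_cm, mu=1.0):
    """Sketch of the MZ-relation detection boundary.  For a given
    stellar mass, the SFR-M* relation sets the expected Halpha flux;
    the faintest detectable [N II] flux (f_nii_limit / mu for lensing
    magnification mu) then sets the minimum measurable N2 and hence
    the limiting metallicity."""
    log_sfr = delta + gamma * (log_mstar - 10.0)        # Zahid et al. (2012) form
    # Kennicutt (1998): SFR [Msun/yr] = 7.9e-42 * L(Halpha) [erg/s]
    l_halpha = 10.0 ** log_sfr / 7.9e-42
    f_halpha = l_halpha / (4.0 * np.pi * d_l_cm ** 2)   # observed Halpha flux
    n2_limit = np.log10((f_nii_limit / mu) / f_halpha)  # weakest measurable N2
    return 8.90 + 0.57 * n2_limit                       # PP04N2 calibration

# Assumed placeholders: d_L(z ~ 2) ~ 4.9e28 cm, delta = 1.4, gamma = 0.7,
# and a [N II] 3-sigma limit of 1.2e-17 erg/s/cm^2.
zlim_unlensed = mz_detection_limit(9.5, 1.4, 0.7, 1.2e-17, 4.9e28, mu=1.0)
zlim_lensed = mz_detection_limit(9.5, 1.4, 0.7, 1.2e-17, 4.9e28, mu=10.0)
print(zlim_unlensed, zlim_lensed)
```

Whatever the placeholder values, a magnification µ lowers the limiting metallicity by 0.57 × log10(µ) dex, which is why lensing (or equivalently stacking) pushes the boundary toward lower masses and metallicities.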
FIG. 4.-The SFR vs. M⋆ relation at three redshifts (z ∼ 0, 0.8, 2; best-fit parameters listed in Table 3). Black dots are the lensed sample used in this work. The SFR for the lensed sample is derived from the Hα flux, with dust extinction corrected from the SED fitting. The errors on the SFR of the lensed sample are the statistical errors of the Hα fluxes. Systematic errors on the SFR can be large (a factor of 2-3) for our lensed galaxies due to complicated aperture effects (Section 3.1).

The relationship between the SFR and M⋆ then becomes

log SFR = δ(z) + γ(z) (log M⋆ − 10).

As an example, we show in Figure 4 the SFR vs. M⋆ relation at three redshifts (z ∼ 0, 0.8, 2). The best-fit values of δ(z) and γ(z) are listed in Table 3. The slope of the mass-metallicity detection limit is related to the slope of the SFR-mass relation, whereas the y-intercept depends on the instrument flux limit (and flux magnification for gravitational lensing), the redshift, and the y-intercept of the SFR-mass relation. Note that the exact location of the boundary depends on the input parameters of Equation 8. As an example, we use the δ(z) and γ(z) values of the Erb06 and Lensed samples, respectively (Figure 4; Table 3).

FIG. 5.-The parameters adopted for the instrument flux limits are given in Section 7.1. The lensing magnification (µ) is fixed at 1.0 (i.e., the non-lensing case) for Subaru/MOIRCS (blue lines) and JWST/NIRSpec (light blue). The red lines show the detection limits for KECK/NIRSPEC with different magnifications. Black filled triangles show the Erb et al. (2006) sample. We show that stacking and/or lensing magnification can help to push the observational boundary of the MZ relation to lower mass and metallicity regions. For example, Erb et al. (2006) used stacked NIRSPEC spectra with N ∼ 15 spectra in each mass bin. The effect of stacking (N ∼ 15 per bin) is similar to observing with a lensing magnification of µ ∼ 4.

NOTE.-The SFR vs. stellar mass relations at different redshifts can be characterized by two parameters, δ(z) and γ(z), where δ(z) is the logarithm of the SFR at 10^10 M⋆ and γ(z) is the power-law index. The best fits for the non-lensed samples are adopted from Zahid et al. (2012). The best fits for the lensed sample are calculated for the Wuyts et al. (2012) sample and the whole lensed sample separately.

The instrument detection limit is based on a background-limited estimation in 10^5 seconds (fluxes in units of 10^−18 ergs s^−1 cm^−2 below). For Subaru/MOIRCS (low-resolution mode, HK500), we adopt f_inst = 23.0, based on the 1σ uncertainty of our MOIRCS spectrum (flux = 4.6 in 10 hours), scaled to 3σ in 10^5 seconds. For KECK/NIRSPEC, we use f_inst = 12.0, based on the 1σ uncertainty of Erb et al. (2006) (flux = 3.0 in 15 hours), scaled to 3σ in 10^5 seconds. For JWST/NIRSpec, we use f_inst = 0.17, scaled to 3σ in 10^5 seconds (see http://www.stsci.edu/jwst/instruments/nirspec/sensitivity/). Since lensing flux magnification is equivalent to lowering the instrument flux detection limit, we see that with a lensing magnification of ∼55 we reach the sensitivity of JWST using KECK/NIRSPEC. Stacking can also push the observations below the instrument flux limit. For instance, the z ∼ 2 Erb et al. (2006) sample was obtained by stacking the NIRSPEC spectra of 87 galaxies, with ∼15 spectra in each mass bin; thus the Erb06 sample has been able to probe ∼4 times deeper than the nominal detection boundary of NIRSPEC. The observational detection limit on the MZ relation is important for understanding the incompleteness and biases of samples due to observational constraints. However, we caution that the relation between Z_met and M⋆ in Equation 4 will have significant intrinsic dispersion due to variations in the observed properties of individual galaxies.
This includes scatter in the M⋆-SFR relation, the N2 metallicity calibration, the amount of dust extinction, and variable slit losses in the spectroscopic observations. For example, a scatter of 0.8 dex in δ for the lensed sample (Table 3) implies a scatter of approximately 0.5 dex in Z_met. In addition, Equations 2 and 4 include implicit assumptions of zero dust extinction and no slit loss, such that the derived line flux is overestimated (and Z_met is underestimated). Because of the above uncertainties and biases in our assumptions, Equation 4 should be used with due caution.

The Evolution of the MZ Relation

Figure 6 shows the mass and metallicity measured from the SDSS, DEEP2, and our lensed samples. The Erb et al. (2006) (Erb06) stacked data are also included for comparison. We highlight a few interesting features in Figure 7:

(1) To first order, the MZ relation still exists at z ∼ 2, i.e., more massive systems are more metal-rich. The Pearson correlation coefficient is r = 0.33, with a probability of being a chance correlation of P = 17%. A simple linear fit to the lensed sample yields a slope of 0.164 ± 0.033, with a y-intercept of 6.8 ± 0.3.

(2) All z > 1 samples show evidence of evolution to lower metallicities at fixed stellar mass. At high stellar mass (M⋆ > 10^10 M⊙), the lensed sample has a mean metallicity and standard deviation of the mean of 8.41 ± 0.05, whereas the mean and standard deviation of the mean for the Erb06 sample are 8.52 ± 0.03. The lensed sample is offset to lower metallicity by 0.11 ± 0.06 dex compared to the Erb06 sample. This slight offset may reflect the selection difference between the UV-selected sample (potentially more dusty and metal-rich) and the lensed sample (less biased towards UV-bright systems).

(3) At lower masses (M⋆ < 10^9.4 M⊙), our lensed sample provides 12 individual metallicity measurements at z > 1.
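The mean-offset comparison in point (2) reduces to means, standard errors, and quadrature addition; the metallicity arrays below are placeholders, not the measured samples:

```python
import numpy as np

def mean_sem(met):
    """Mean metallicity and standard error of the mean."""
    met = np.asarray(met, dtype=float)
    return met.mean(), met.std(ddof=1) / np.sqrt(met.size)

# Placeholder high-mass metallicities for the two samples
lensed = [8.35, 8.40, 8.45, 8.50, 8.38, 8.44]
erb06 = [8.50, 8.55, 8.48, 8.56]

m1, e1 = mean_sem(lensed)
m2, e2 = mean_sem(erb06)
offset = m2 - m1
e_offset = np.hypot(e1, e2)   # independent errors added in quadrature
print(f"offset = {offset:.2f} +/- {e_offset:.2f} dex")
```

The offset error combines the two standard errors in quadrature, which is how a quoted "0.11 ± 0.06 dex" difference between two sample means is typically assembled.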
The mean metallicity of the galaxies with M⋆ < 10^9.4 M⊙ is 8.25 ± 0.05, roughly consistent with the < 8.20 upper limit on the stacked metallicity of the lowest mass bin (M⋆ ∼ 10^9.1 M⊙) of the Erb06 galaxies.

(4) Compared with the Erb06 galaxies, there is a lack of the highest-mass galaxies in our lensed sample. We note that there is only 1 object with M⋆ > 10^10.4 M⊙ among all three lensed samples combined. The lensed sample is less affected by color selection and may be more representative of the mass distribution of high-z galaxies. In the hierarchical galaxy formation paradigm, galaxies grow their masses with time. The number density of massive galaxies at high redshift is smaller than at z ∼ 0, and thus the number of massive lensed galaxies is small. Selection criteria such as the UV-color selection of the Erb06 and SINS galaxies can be applied to target the high-mass galaxies on the MZ relation at high redshift.

Comparison with Theoretical MZ Relations

Understanding the origins of the MZ relation has been the driver of copious theoretical work. Based on the idea that metallicities are mainly driven by an equilibrium among stellar enrichment, infall, and outflow, Finlator & Davé (2008) developed smoothed particle hydrodynamic simulations. They found that the inclusion of a momentum-driven wind model (vzw) fits the z ∼ 2 MZ relation best, compared to other outflow/wind models. The updated version of their vzw model is described in detail in Davé et al. (2011a). We overlay the Davé et al. (2011a) vzw model outputs on the MZ relation in Figure 7. We find that the model does not reproduce the MZ redshift evolution seen in our observations. We provide possible explanations as follows. Kewley & Ellison (2008) found that both the shape and scatter of the MZ relation vary significantly among different metallicity diagnostics. This poses a tricky normalization problem when comparing models to observations.
For example, a model output may fit the MZ relation slope from one strong-line diagnostic but fail to fit the MZ relation from another diagnostic, which may have a very different slope. This is exactly what we see in the left panel of Figure 7. Davé et al. (2011a) applied a constant offset to the model metallicities by matching the amplitude of the model MZ relation at z ∼ 0 with the observed local MZ relation of Tremonti et al. (2004, T04) at a stellar mass of 10^10 M⊙. Davé et al. (2011a) found that the characteristic shape and scatter of the MZ relation from the vzw model match the T04 MZ relation between 10^9.0 M⊙ < M⋆ < 10^11.0 M⊙ within the 1σ model and observational scatter. However, since both the slope and amplitude of the T04 SDSS MZ relation are significantly larger than those of the SDSS MZ relation derived using the PP04N2 method (Kewley & Ellison 2008), the PP04N2-normalized MZ relation from the model does not recover the local MZ relation within 1σ. In addition, stellar mass measurements from different methods may cause systematic offsets in the x-direction of the MZ relation (Zahid et al. 2011). As a result, even though the shape, scatter, and evolution with redshift are independent predictions of the model, systematic uncertainties in metallicity diagnostics and stellar mass estimates do not allow the shape to be constrained separately. In the right panel of Figure 7, we allow the model slope (α), metallicity amplitude (Z), and stellar mass (M∗) to change slightly so that the model fits the local SDSS MZ relation. Assuming that this change in slope (∆α) and in the x and y amplitudes (∆Z, ∆M∗) is caused by systematic offsets in the observations, the same ∆α, ∆Z, and ∆M∗ can be applied to the model MZ relations at other redshifts. Although normalizing the model MZ relation in this way means the model loses predictive power for the shape of the MZ relation, it at least leaves the redshift evolution of the MZ relation as a testable model output.
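This renormalization can be sketched as a least-squares adjustment of the model's slope and amplitude onto the local relation. The arrays below are synthetic stand-ins for the model and SDSS (PP04N2) MZ relations, and the mass shift is kept at zero for simplicity since, for straight-line fits, it is degenerate with the metallicity offset:

```python
import numpy as np

def fit_shift(model_mass, model_met, obs_mass, obs_met, pivot=10.0):
    """Fit the slope change and metallicity offset that map a model
    MZ relation onto the observed local (SDSS) MZ relation.  Both
    relations are reduced to straight-line fits; returns
    (d_alpha, d_z, d_m): slope difference, metallicity offset at the
    pivot mass, and a mass shift held at zero (see lead-in)."""
    a_mod, b_mod = np.polyfit(model_mass, model_met, 1)
    a_obs, b_obs = np.polyfit(obs_mass, obs_met, 1)
    d_alpha = a_obs - a_mod
    d_z = (a_obs * pivot + b_obs) - (a_mod * pivot + b_mod)
    return d_alpha, d_z, 0.0

mass = np.linspace(9.0, 11.0, 50)
model_met = 0.30 * (mass - 10.0) + 9.05   # placeholder model MZ fit
obs_met = 0.20 * (mass - 10.0) + 8.63     # placeholder SDSS (PP04N2) MZ fit
d_alpha, d_z, d_m = fit_shift(mass, model_met, mass, obs_met)
print(d_alpha, d_z)   # the same shifts are then applied at other redshifts
```

Applying the same (∆α, ∆Z, ∆M∗) at every redshift fixes the z ∼ 0 shape by construction, which is why only the redshift evolution remains a genuine model prediction.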
Despite the normalization correction, we see from Figure 7 that the models predict less evolution from z ∼ 2 to z ∼ 0 than the observed MZ relation. To quantify this, we divide the model data into two mass bins and derive the mean and 1σ scatter in each mass bin as a function of redshift. We define the "mean evolved metallicity" on the MZ relation as the difference between the mean metallicity at z ∼ 0 and the mean metallicity at redshift z at a fixed stellar mass (log(O/H)[z∼0] − log(O/H)[z]). The "mean evolved metallicity" errors are calculated based on the standard errors of the mean. In Figure 8 we plot the "mean evolved metallicity" as a function of redshift for two mass bins: 10^9.0 M⊙ < M⋆ < 10^9.5 M⊙ and 10^9.5 M⊙ < M⋆ < 10^11 M⊙. We calculate the observed "mean evolved metallicity" for DEEP2 and our lensed sample in the same mass bins. We see that the observed mean evolution of the lensed sample is largely uncertain, and no conclusion comparing the model and the observational data can be drawn from it. However, the DEEP2 data are well constrained and can be compared with the model. We find that at z ∼ 0.8, the mean evolved metallicity of the high-mass galaxies is consistent with the mean evolved metallicity of the models. The observed mean evolved metallicity of the low-mass-bin galaxies is ∼ 0.12 dex larger than the mean evolved metallicity of the models in the same mass bin.

COMPARE WITH PREVIOUS WORK IN LITERATURE

In this Section, we compare our findings with previous work on the evolution of the MZ relation. For low masses (∼10^9 M⊙), we find a larger enrichment (i.e., a smaller decrease in metallicity) between z ∼ 2 → 0 than either the non-lensed sample of Maiolino et al. (2008) or the lensed sample of Richard et al. (2011) (0.4 dex). These discrepancies may reflect differences in the metallicity calibrations applied. It is clear that a larger sample is required to characterize the true mean and spread in metallicities at intermediate redshift. Note that the lensed samples are still small and have large measurement errors in both stellar mass (0.1 to 0.5 dex) and metallicity (∼ 0.2 dex).

FIG. 7.-Symbols are as in Figure 6. The small green and light blue dots are the cosmological hydrodynamic simulations with momentum-conserving wind models from Davé et al. (2011a). The difference between the left and right panels is the different normalization methods used. The left panel normalizes the model metallicity to the observed SDSS values by applying a constant offset at M⋆ ∼ 10^10 M⊙, whereas the right panel normalizes the model metallicity to the observed SDSS metallicity by allowing a constant shift in the slope, amplitude, and stellar mass. Note that the model has a mass cutoff at 1.1 × 10^9 M⊙.

We find in Section 6.1 that the deceleration in metal enrichment is significant in the highest mass bin (10^9.5 M⊙ < M⋆ < 10^11 M⊙) of our samples. The deceleration in metal enrichment from z ∼ 2 → 0.8 to z ∼ 0.8 → 0 is consistent with the picture that star formation and mass assembly peak between redshift 1 and 3 (Hopkins & Beacom 2006). The deceleration is larger by 0.019 ± 0.013 dex Gyr−2 in the high mass bin, suggesting a possible mass dependence in chemical enrichment, similar to the "downsizing" mass-dependent growth of stellar mass (Cowie et al. 1996; Bundy et al. 2006). In the downsizing picture, more massive galaxies formed their stars earlier and on shorter timescales than less massive galaxies (Noeske et al. 2007a). Our observation of chemical downsizing is consistent with previous metallicity evolution work (Panter et al. 2008; Maiolino et al. 2008).

FIG. 8.-The "mean evolved metallicity" as a function of redshift for two mass bins (indicated by four colors). Dashed lines show the median and 1σ scatter of the model prediction from Davé et al. (2011a). The observed data from DEEP2 and our lensed sample are plotted as filled circles.

We find that for the higher mass bins, the model of Davé et al.
(2011a) over-predicts the metallicity at all redshifts. The over-prediction is most significant in the highest mass bin of 10^{10−11} M⊙. This conclusion is similar to the findings of Davé et al. (2011a,b). In addition, we point out that when comparing the model metallicity with the observed metallicity, there is a normalization problem stemming from the discrepancy among different metallicity calibrations (Section 7.3). We note that the evolution of the MZ relation is based on an ensemble average of the SFR-weighted metallicity of the star-forming galaxies at each epoch. The MZ relation does not reflect an evolutionary track of individual galaxies. We are probably seeing a different population of galaxies at each redshift (Brooks et al. 2007; Conroy et al. 2008). For example, a ∼10^10.5 M⊙ massive galaxy at z ∼ 2 will most likely evolve into an elliptical galaxy in the local universe and will not appear on the local MZ relation. On the other hand, to trace the progenitor of a ∼10^11 M⊙ massive galaxy today, we need to observe a ∼10^9.5 M⊙ galaxy at z ∼ 2 (Zahid et al. 2012). It is clear that gravitational lensing has the power to probe lower stellar masses than current color selection techniques. Larger lensed samples with high-quality observations are required to reduce the measurement errors.

SUMMARY

To study the evolution of the overall metallicity and the MZ relation as a function of redshift, it is critical to remove the systematics among different redshift samples. The major caveats in current MZ relation studies at z > 1 are: (1) metallicity is not based on the same diagnostic method; (2) stellar mass is not derived using the same method; (3) the samples are selected differently, and the selection effects on mass and metallicity are poorly understood. In this paper, we attempt to minimize these issues by recalculating the stellar masses and metallicities consistently, and by expanding the lens-selected sample at z > 1.
We aim to present a reliable observational picture of the metallicity evolution of star-forming galaxies as a function of stellar mass between 0 < z < 3. We find that:

• There is a clear evolution in the mean and median metallicities of star-forming galaxies as a function of redshift. The mean metallicity falls by ∼ 0.18 dex from redshift 0 to 1 and falls further by ∼ 0.16 dex from redshift 1 to 2.

• We compare the metallicity evolution of star-forming galaxies from z = 0 → 3 with the most recent cosmological hydrodynamic simulations. We see that the model metallicity is consistent with the observed metallicity within the observational error for the low mass bins. However, for higher mass bins, the model over-predicts the metallicity at all redshifts. The over-prediction is most significant in the highest mass bin of 10^10−11 M⊙. Further theoretical investigation into the metallicity of the highest mass galaxies is required to determine the cause of this discrepancy.

• The median metallicity of the lensed sample is 0.35 ± 0.06 dex lower than that of local SDSS galaxies and 0.28 ± 0.06 dex lower than that of the z ∼ 0.8 DEEP2 galaxies.

• The cosmological hydrodynamic simulation of Davé et al. (2011a) does not agree with the evolution of the observed MZ relation based on the PP04N2 diagnostic. Whether the model fits the slope of the MZ relation depends on the normalization method used.

This study is based on 6 clear nights of observations on an 8-meter telescope, highlighting the efficiency of using lens-selected targets. However, the lensed sample at z > 1 is still small. We aim to significantly increase the sample size over the coming years. We would like to thank the referee for an excellent report that has significantly improved this paper. T.-Y. wants to thank the MOIRCS supporting astronomers Ichi Tanaka and Kentaro Aoki for their enormous support of the MOIRCS observations. We thank Youichi Ohyama for scripting the original MOIRCS data reduction pipeline.
We are grateful to Romeel Davé for providing and explaining to us his most recent models. T.-Y. wants to thank Jabran Zahid for the SDSS and DEEP2 data and many insightful discussions. T.-Y. acknowledges a Soroptimist Founder Region Fellowship for Women. L.K. acknowledges an NSF Early CAREER Award AST 0748559 and an ARC Future Fellowship award FT110101052. JR is supported by the Marie Curie Career Integration Grant 294074. We wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community.

Facilities: Subaru (MOIRCS)

APPENDIX

SLIT LAYOUT, SPECTRA FOR THE LENSED SAMPLE

This section presents the slit layouts and the reduced and fitted spectra for the newly observed lensed objects in this work. The line-fitting procedure is described in Section 3.2. For each target, the top panel shows the HST ACS F475W broad-band image of the lensed target. The slit layouts with different position angles are drawn as white boxes. The bottom panel(s) show(s) the final reduced 1D spectrum(a) zoomed in on the emission-line vicinities. The black line is the observed spectrum for the target. The cyan line is the noise spectrum extracted from object-free pixels of the final 2D spectrum. Tilted grey mesh lines indicate spectral ranges where the sky absorption is severe. Emission lines falling in these spectral windows suffer from large uncertainties in the telluric absorption correction. The blue horizontal line is the continuum fit, using a first-order polynomial function after blanking out the severe sky absorption regions. The red lines overplotted on the emission lines are the overall Gaussian fit, with the blue lines showing the individual components of the multiple Gaussian functions. Vertical dashed lines show the center of the Gaussian profile for each emission line. The S/N of each line is marked under the emission-line labels.
Note that for lines with S/N < 3, the fit is rejected and a 3-σ upper limit is derived. Brief remarks on individual objects (see also Tables 2 and 3 for more information):

• Figures 9 and 10, B11 (888 351): this is a resolved galaxy with spiral-like structure at z = 2.540 ± 0.006. As reported in Broadhurst et al. (2005), it is likely to be the most distant known spiral galaxy so far. B11 has 3 multiple images. We have observed B11.1 and B11.2, with two slit orientations on each image. The different slit orientations yield very different line ratios, implying possible gradients. Our IFU follow-up observations are in progress to reveal the details of this 2.6-Gyr-old spiral.

• Figure 14, B29 (884 331): this is a lensed system with 5 multiple images. We observed B29.3, the brightest of the five images. The overall surface brightness of the B29.3 arc is very low. We have observed a 10-σ Hα line and an upper limit for [N II], placing it at z = 2.633 ± 0.010.

• Figure 15, G3: this lensed arc with a bright knot had no recorded redshift before this study. It was put on one of the extra slits during mask design. We have detected an 8-σ [O III] line and determined its redshift to be z = 2.540 ± 0.010.

• Figure 16

• Figures 17 and 18, B5 (892 339, 870 346): it has three multiple images, of which we observed B5.1 and B5.3. Two slit orientations were observed for B5.1; the final spectrum for B5.1 combines the two slit orientations weighted by the S/N of Hα. Strong Hα and an upper limit on [N II] were obtained in both images, yielding a redshift of z = 2.636 ± 0.004.

• Figure 23, B8: this arc has five multiple images in total, and we observed B8.

• B22.3: a three-image lensed system at z = 1.703 ± 0.004; this is the first object reported from our LEGMS program, see Yuan & Kewley (2009).
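The continuum step described in the appendix (mask the severe sky-absorption windows, then fit a first-order polynomial to the remaining pixels) can be sketched in plain Python. This is an illustrative toy, not the authors' pipeline; the wavelength values and mask window below are made up.

```python
# Illustrative sketch (not the authors' pipeline): fit a first-order
# polynomial continuum to a spectrum after masking wavelength windows
# with severe sky absorption, as described in the appendix.

def fit_linear_continuum(wave, flux, sky_windows):
    """Least-squares line a + b*wave using only unmasked pixels."""
    keep = [(w, f) for w, f in zip(wave, flux)
            if not any(lo <= w <= hi for lo, hi in sky_windows)]
    n = len(keep)
    sx = sum(w for w, _ in keep)
    sy = sum(f for _, f in keep)
    sxx = sum(w * w for w, _ in keep)
    sxy = sum(w * f for w, f in keep)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Toy spectrum: flat continuum of 2.0 with a spurious spike inside a
# sky-absorption window, so the mask excludes the spike from the fit.
wave = [1.60 + 0.01 * i for i in range(20)]
flux = [2.0] * 20
flux[10] = 50.0                      # bad pixel near 1.70 um
a, b = fit_linear_continuum(wave, flux, sky_windows=[(1.695, 1.705)])
print(round(a + b * 1.65, 6))        # continuum level at 1.65 um -> 2.0
```

Masking before fitting is what keeps a single strong sky residual from dragging the whole continuum level up, which is why blanking precedes the polynomial fit in the procedure above.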
SPATIOTEMPORAL CONVOLUTIONAL LSTM WITH ATTENTION MECHANISM FOR MONTHLY RAINFALL PREDICTION

INTRODUCTION

Rainfall forecast information is one of the crucial analyses needed to help regulate water resources, and it often involves several variables, since rainfall is part of a meteorological phenomenon. The prediction becomes more complicated when dealing with the emergence of climate change in tropical areas such as Indonesia, which lies on the equator with climatic influences from both north and south. Furthermore, climate change has affected rainfall patterns, causing several natural disasters such as heavy rains that result in flooding or prolonged absence of precipitation that results in droughts [1]. Drought Management Plans (DMPs) are regulatory instruments that establish priorities among different water uses and define more stringent constraints on access to publicly available water during droughts and reduced water supplies caused by climate-change vulnerability to drought events. To deal with this problem, rainfall prediction with an excellent and accurate method is needed [2]. Precise rainfall forecasts, both short and long term, have significant benefits in water resource management, flood control, disaster reduction, and agricultural management [3]. However, rainfall is a complicated nonlinear atmospheric system that depends on space and time; besides, many factors can influence rain in an area [4]. It is therefore never straightforward to handle the complexity and uncertainty of rainfall predictability and produce precise and accurate rainfall forecasts [5]. Forecasting rainfall, the beginning of the rainy season, the duration of the rain, and the end of the rainy season are determined over a monthly period, often using a three-month system known as the SPI (Standardized Precipitation Index) method [6].
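The three-month SPI idea mentioned above can be illustrated with a simplified sketch: accumulate rainfall over rolling 3-month windows and standardize the sums against their climatology. Note this is only an illustration of the mechanics; the operational SPI fits a probability distribution (commonly a gamma) to the accumulated totals rather than using plain z-scores, and the monthly values below are made up.

```python
# A simplified SPI-3-like index: 3-month rolling rainfall sums,
# standardized by the mean and standard deviation of those sums.
# (Real SPI fits a gamma distribution first; z-scores are a toy stand-in.)
import math

def spi3_like(monthly):
    sums = [sum(monthly[i:i + 3]) for i in range(len(monthly) - 2)]
    mean = sum(sums) / len(sums)
    var = sum((s - mean) ** 2 for s in sums) / len(sums)
    sd = math.sqrt(var)
    return [(s - mean) / sd for s in sums]

rain = [120, 80, 60, 40, 20, 10, 15, 50, 90, 130, 150, 140]  # mm/month (toy)
idx = spi3_like(rain)
print(min(idx) < -1 < 1 < max(idx))  # dry spells negative, wet positive -> True
```

Negative index values flag the dry stretch in the middle of the toy year, which is exactly the kind of signal a drought management plan would act on.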
In addition, monthly rainfall can provide a more accurate distribution of the mean intra-year rain when compared to seasonal rainfall [7]. Hence, it is vital to periodically estimate rainfall on a monthly time scale, for which rainfall predictions are usually made using physics-based models and deep learning methods [8]. Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) is a dataset with specifications such as environmental records, new quasi-global coverage (50°S–50°N), high resolution (0.05°), and daily, pentadal, and monthly rainfall products. These datasets cover the spatial surface of the earth and span 1981 to 2020, making it possible to visualize the rainfall condition in every place on land. Scientists from various countries developed CHIRPS to support the United States Agency for International Development Famine Early Warning Systems Network (FEWS NET) [9]. The approach is built using thermal infrared (TIR) precipitation estimates, which have been successful in trials such as the National Oceanic and Atmospheric Administration's (NOAA) Rainfall Estimate. CHIRPS uses the Tropical Rainfall Measuring Mission Multi-Satellite Precipitation Analysis version 7 to calibrate global Cold Cloud Duration (CCD) rainfall estimates. In addition, CHIRPS employs a state-of-the-art 'intelligent interpolation' approach that can work with anomalies in high-resolution climatology [10]. CHIRPS is a kind of spatiotemporal data that requires specific consideration when applying deep learning predictors such as LSTM and GRU. While deep learning approaches mostly use temporal data to build the model, spatiotemporal data require a different way of choosing suitable algorithms. It is necessary to understand the features, which are high-dimensional and temporally correlated, meaning the data are indexed by up to two dimensions in space and one in time [11].
In general, spatiotemporal data exhibit spatial correlation between nearby locations, like the pixels of a photo, and temporal correlation between adjacent timestamps [12]. To handle spatiotemporal data, Tao et al. [3] used attention mechanisms to enhance the prediction model; the original attention mechanism was invented by Bahdanau et al. [13] to improve the accuracy of machine translation algorithms. Another attention model comes from Vaswani et al. [14], namely Multi-Head Attention, which underlies the robust Transformer architecture. Deep learning methods have spread over several areas of prediction and classification. For sequential or time-series data, Recurrent Neural Networks (RNNs) and their derivatives maintain hidden-state vectors, with every neuron trained using backpropagation through time [15]. However, RNNs have trouble dealing with long data sequences, exhibiting vanishing-gradient problems when trained over long lags [16]. LSTM offers a solution, using a memory cell to improve the RNN and avoid vanishing gradients, and has become more advanced with modifications such as encoder-decoder structures, attention mechanisms, etc. [17]. In this study, the authors propose a Convolutional LSTM with an additional attention layer to enhance the accuracy of monthly rainfall prediction using CHIRPS data, as a solution for predicting rainfall on gridded data with more accurate forecasting. Hyperparameters are tuned manually across a large number of models to ensure a fair comparison. We compare and analyze each model's loss error and performance according to the evaluation metrics most used in hydrology and deep learning. The results indicate that the proposed Convolutional LSTM-AT model is the best so far. We also analyze the spatial and temporal behavior to interpret the physical causality of our model.
RELATED WORKS

In this section, the authors review the relevant research that inspired the construction of the Convolutional LSTM-AT model, including several fundamental studies in rainfall forecasting, sequential-data studies using LSTM-based models, and the attention-based models that originated in machine translation.

Rainfall Forecasting

Seasonal prediction models are commonly used for rainfall prediction, enabling government agencies to issue early warnings when hydrological extremes strike. Climate prediction models can be classified into three approaches: first, a physical or numerical approach; second, an empirical or statistical approach; and third, a mix of physical and empirical [18]. Still, rainfall depends on numerous land, large-ocean, and atmospheric processes in the loop. Physical models are generally developed based on interpretations of atmospheric processes, but they frequently show weak predictability in providing good information on annual climate variability [19]. In general, physical-empirical prediction models, which are the most used by climatologists, are developed utilizing the traditional statistical approach. For instance, Zhu and Li [20] (2017) applied the regression method to predict the wet season in East Asia. Li and Wang [21] (2018) studied the forecast capability of summertime highly extreme rainfall days in eastern China by utilizing a stepwise regression model. The ability of traditional regression-based methods is likewise inadequate for forecasting highly nonlinear and nonstationary behavior. Therefore, the connection of the local climate in a specific area with ocean-atmospheric variables such as SST or sea-level pressure cannot be described by employing traditional regression models [22].
LSTM-Based Methods

Long Short-Term Memory (LSTM) is a modified version of the recurrent neural network, designed by Hochreiter and Schmidhuber [23] to resolve the vanishing-gradient problem and capture long-distance (time) dependence in sequential data. Yuan et al. [24] proposed an LSTM network model for building occupancy by simulating energy, operation, and management. ElSaadani et al. [25] used the LSTM model to predict soil moisture and fill gaps between observations. Further, Zhou et al. [26] combined the LSTM model and a machine-translation-based attention mechanism to recognize skeleton-based abnormal behavior. The conclusions indicated that attention-based LSTM could recognize behavior better than an LSTM-only model.

Attention-based methods

In deep learning, one way to increase accuracy in the model learning process is through attention mechanisms, inspired by selective human vision choosing which information to pay special attention to and which to reject. In general, the attention mechanism has been applied in various areas of research and industry, such as machine translation, image captioning, and video motion recognition. Song et al. [27] proposed an end-to-end spatiotemporal attention model to perform recognition and prediction of human action in video frames. In addition, Chen et al. [28] proposed a model of spatial attention combined with channel attention and image labeling with an additional convolutional neural network, obtaining a good result on their data set. Ding et al. [29] proposed a spatiotemporal LSTM to predict floods in three basins in China. Tao et al. [3] also proposed an LSTM with an attention mechanism to improve monthly rainfall prediction, which performed well at most spatial points. Inspired by the above models, we propose a multi-head attention LSTM to optimize monthly rainfall prediction with spatiotemporal data.
STUDY AREA AND DATASET

In this study, Kalimantan Timur (East Kalimantan) was selected as the study area to evaluate and compare the performance of several LSTM models in forecasting monthly rainfall. East Kalimantan is located on the eastern side of the island of Borneo, Indonesia. Monthly rainfall data covering January 1980 – December 2020 come from the CHIRPS data accessible from https://data.chc.ucsb.edu/products/CHIRPS-2.0/. The data for the 40 years from January 1980 to December 2020 were used as the dataset of this model, as shown in Figure 1, which samples December 2020. The CHIRPS rainfall data come as worldwide raster data, whereas this research only focuses on the Kalimantan Timur region, so the data need to be clipped. First, a boundary map of the Kalimantan Timur area is required from https://tanahair.indonesia.go.id/; because the available boundaries are city- and district-level data, combining them using the ArcGIS application is necessary. Furthermore, after the boundary for the East Kalimantan region is obtained, the worldwide rainfall data are split using the SAGA application. It should be noted that the split process requires degrees of longitude and latitude and a grid size that must be adapted to the worldwide raster data, which is 0.05° × 0.05°; the result can be seen in Figure 2. As shown in Figure 2, the data visualization has the colors black and white, where black represents the sea surface and white the island surface. Raster data is one of the best formats for representing a surface area, since a raster can keep multiple bands of data to create complex spatial conditions. CHIRPS contains a single band to interpret monthly precipitation values without additional variables. It can be seen in Figure 2 that the data have three dimensions of perspective. As shown in Figure 3, this data comprises dimensions of 89 × 89 in space and 480 in time, in this case monthly data.
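The clipping done above with GIS tools boils down to simple index arithmetic on a regular 0.05° grid. The sketch below illustrates that arithmetic only; the grid origin (top-left corner at 50°N, 180°W) and the bounding box are assumptions for illustration, not values read from a CHIRPS file header.

```python
# Hypothetical sketch of the clipping arithmetic: convert a lat/lon
# bounding box into row/column indices on a global 0.05-degree grid.
# The grid origin below is an assumption, not taken from CHIRPS metadata.

CELL = 0.05                  # grid resolution in degrees
LAT0, LON0 = 50.0, -180.0    # assumed top-left corner of the raster

def bbox_to_indices(lat_min, lat_max, lon_min, lon_max):
    """Return (row_start, row_stop, col_start, col_stop), stop exclusive."""
    row_start = round((LAT0 - lat_max) / CELL)
    row_stop = round((LAT0 - lat_min) / CELL)
    col_start = round((lon_min - LON0) / CELL)
    col_stop = round((lon_max - LON0) / CELL)
    return row_start, row_stop, col_start, col_stop

# An 89 x 89 cell window (4.45 degrees on a side), the same size as the
# East Kalimantan cut-out used in this paper; the box itself is made up.
r0, r1, c0, c1 = bbox_to_indices(-2.2, 2.25, 114.0, 118.45)
print(r1 - r0, c1 - c0)   # -> 89 89
```

A dedicated GIS tool such as SAGA additionally handles projection metadata and no-data masks, but the row/column bookkeeping is the same.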
Having this three-dimensional condition makes the research more complex, since it must be handled with a specific method so that the spatial and temporal dimensions will not be biased or even removed.

PROPOSED METHOD

Rainfall is critical in supporting human life; besides, various policies often consider rainfall the main factor. Based on rainfall data, climate classification can be done according to the ratio between the average number of dry months and the average number of wet months. A dry month occurs when the monthly rainfall is less than 60 mm/month, while a wet month occurs when the monthly rainfall is above 100 mm/month. A humid month occurs between the dry and wet months, when the monthly rainfall is between 60–100 mm/month.

Overview

Data and models vary, with many perspectives for understanding the data and building the best alternative model. The authors have searched and digested the literature to learn the newest and best models and the strengths and weaknesses of those models. Still, rainfall forecasting models struggle to predict rainfall accurately and precisely. Hence, the authors built the proposed Convolutional LSTM-AT model as an alternative solution to optimize monthly rainfall prediction with the spatiotemporal dataset.

Data Preprocessing

The data preprocessing stage is the data selection stage, which aims to obtain relevant data for use. In raw data, one often finds missing values, values not stored (mis-recording), data sampling that is not good enough, and other problems. However, because this research does not use raw data but secondary data, preprocessing will be done to process the spatial and temporal data. In addition, preprocessing will focus only on the cells with values, so the cells with no data will not be used.

FIGURE 4.
Illustrated spatiotemporal data using the sliding window in a spatial perspective.

In this study, focal operation theory is implemented: a spatial function that calculates the output value of each cell using neighborhood values, like the K-nearest neighbors (K-NN) machine learning algorithm, as shown in Figure 4 [30]. In addition, this theory is also commonly used in the convolutions, kernels, and moving windows of deep learning algorithms such as CNNs or RNNs. A moving window can be imagined as an arrangement of square cells with a specific size, in this study 3 × 3, which shifts its position in certain steps. As the operation is applied to each cell of the moving window, the values in the raster tend to become smoother; it was adopted in this study to smooth the predicted values in the spatial dimension. Spatiotemporal data are generally placed in continuous space, while classical data sets such as images or video data are usually in a discrete area. Spatiotemporal data patterns usually present very complex spatial and temporal properties, and correlations between data are challenging to explain with traditional methods. Finally, one of the standard statistical assumptions is that samples are obtained independently. However, this does not apply in spatiotemporal analysis, because spatiotemporal data tend to be highly correlated, so it is impossible to carry out separate studies. As explained earlier, the data used in each time unit is 89 × 89, with a temporal length of 480, as shown in Figure 4. Hence, for modeling, the data are taken spatially with a size of 3 × 3 for 13 months (temporal), and a sliding window slices this data along the temporal axis. The window moves to the right side with a single step, so after the last window on the right, it continues in the next row, from left to right. This can be seen in the blue area in Fig.
4, until the end of the spatial data at the bottom-right side.

Data Clustering

One of the data mining techniques is clustering, used to find similar characteristics in grouped data; this technique belongs to traditional machine learning and is part of the unsupervised algorithms, which require only training data without target data [31]. In theory, cluster analysis is a tool to group data based on variables or features so as to maximize the resemblance of characteristics within a cluster and maximize the differences between the clusters themselves [32]. A popular algorithm is K-means clustering, which groups data based on the distance between the data and the cluster centroid obtained through an iterative process [33]. The analysis needs a predetermined number of clusters K as input to the algorithm. In the objective function of Eq. (1),

J = Σ_{j=1}^{K} Σ_{i=1}^{n_j} ‖x_i^{(j)} − c_j‖²   (1)

J is the objective function, K is the number of clusters, n_j is the number of cases in cluster j, x_i^{(j)} is a case in cluster j, and c_j is the centroid of cluster j. In K-means clustering, this distance can be measured using several measures: the Euclidean distance, the Manhattan distance, the squared Euclidean distance, and the cosine distance. The choice of distance measure affects how the algorithm calculates similarity and the cluster shape. Nevertheless, a problem arises when determining the number of clusters K, because no theory states how to choose it well, even though the number K is essential to finding the clusters. The researchers solve this problem using the Elbow method, which performs a visual assessment of a line graph in which the x-axis is the number of clusters K and the y-axis is the Within-Cluster Sum of Squares (WCSS) value.

Convolutional LSTM-AT

LSTM is derived from the RNN for sequential data, having three gate units: the input, output, and forget gates.
It allows the gates of the LSTM to store and access information or characteristics of the data over a period of time and, following Hochreiter and Schmidhuber [23], mitigates the vanishing-gradient problem. The model parameters include the input weights W and U and the bias terms b; i, o, f, and C̃ respectively represent the input gate, output gate, forget gate, and candidate memory; h denotes the hidden state and σ the sigmoid activation function, although the activation always depends on the data and can sometimes be changed to the hyperbolic tangent or ReLU [29] [34] [35]. The candidate memory and cell-state update are

C̃_t = tanh(W_C h_{t−1} + U_C x_t + b_C)
C_t = f_t ∗ C_{t−1} + i_t ∗ C̃_t   (4)

The attention mechanism is often used to optimize sequence-handling models. Attention comes in two flavors: hard attention refers to selecting a single input feature, which means the attention weight can only be 0 or 1; soft attention refers to a weight between 0 and 1, so the range of weight selection is more flexible [36]. Since the attention models invented by Bahdanau et al. [13] and the Multi-Head Attention of Vaswani et al. [14], additive attention has empirically improved model performance and made the weights of the attended units visible.

FIGURE 5. Spatiotemporal prediction using the Convolutional LSTM with an attention layer.

Modifying the original LSTM with an attention mechanism is necessary to fully utilize the spatiotemporal input information. The authors take rainfall as the input feature, and the output of our model is the next n-step rainfall prediction. Spatial and temporal attention weights affect the input and output of the LSTM cells [37]. With the help of the spatiotemporal attention module, the authors were able to dynamically adjust attention weights and improve the performance of the LSTM cells [38]. This model uses the Adam optimizer [39] to train the model, as shown in Figure 5.
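The gate equations and the soft-attention weighting described above can be sketched in scalar (1-D) form. The weights below are arbitrary toy numbers shared across gates purely for illustration, not the authors' trained parameters.

```python
# A scalar sketch of the LSTM update and soft attention: toy weights,
# three toy timesteps, softmax-weighted sum of the hidden states.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W=0.5, U=0.5, b=0.0):
    # Same toy weight for every gate, purely for illustration.
    i = sigmoid(W * x + U * h_prev + b)           # input gate
    f = sigmoid(W * x + U * h_prev + b)           # forget gate
    o = sigmoid(W * x + U * h_prev + b)           # output gate
    c_tilde = math.tanh(W * h_prev + U * x + b)   # candidate memory
    c = f * c_prev + i * c_tilde                  # Eq. (4): cell-state update
    h = o * math.tanh(c)                          # new hidden state
    return h, c

def soft_attention(hiddens, scores):
    # Softmax the scores, then return the weighted sum of hidden states.
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return sum(w * h for w, h in zip(weights, hiddens)), weights

h, c = 0.0, 0.0
hiddens = []
for x in [0.2, 0.5, 0.9]:                         # three toy timesteps
    h, c = lstm_step(x, h, c)
    hiddens.append(h)
context, weights = soft_attention(hiddens, scores=hiddens)
print(round(sum(weights), 6))                     # attention weights sum to 1
```

Because the weights are a softmax, they always sum to one; the model learns *where* to concentrate that unit of weight along the sequence, which is what makes the attended timesteps inspectable.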
Before training the model, the step that must be conducted is to determine the network architecture, such as deciding how many layers are used, the number of neurons in each layer, the activation functions used, and other parameter values; this can be seen in Table 1. For the input layers, based on the features that will be used, 9 spatial features serve as input neurons; the number 9 comes from the 3 × 3 spatial window. The temporal data then have 12 timesteps, with one further time step as the target.

Experimental Design

The full experiment uses five models against which the proposed model is compared; the whole architecture is explained in Table 1. Postprocessing aims to make better rainfall predictions than "raw" (unprocessed) hydrological simulations. For this aim, it is important to evaluate the models' performance and compare them with each other to conclude which model is the best. Several metrics are used to evaluate predictions for different lead times. Since accurate and reliable predictions are so crucial during rainfall events, the primary accuracy measure for a deterministic forecast is the root-mean-square error (RMSE) in equation (8):

RMSE = √( (1/N) Σ_{t=1}^{N} (ŷ_t − y_t)² )   (8)

where ŷ_t denotes the t-th prediction of monthly rainfall, y_t denotes the observed value, and N represents the total number of monthly rainfall predictions. Compared with mean absolute error (MAE) metrics, RMSE penalizes significant errors [40], which is desirable for high-rainfall forecasts. Unlike RMSE, which gives a relatively high weight to significant errors, the Mean Absolute Error (MAE), a linear statistical measure, is more applicable when the overall impact of errors is proportionate to the increase in error. MAE can be formulated as [40] in equation (9):

MAE = (1/N) Σ_{t=1}^{N} |ŷ_t − y_t|   (9)

RESULTS AND DISCUSSION

Seven models have been built to forecast rainfall over the Kalimantan Timur area, using a 12-month time window to predict one month ahead.
Those models are:

• RNN: a recurrent neural network that allows previous outputs to be used as inputs while maintaining hidden states.

• GRU: the gated recurrent unit, a gating mechanism implemented in recurrent neural networks.

• LSTM: the Long Short-Term Memory network, a famous variant of the RNN having three gates.

• Convolutional LSTM-AT: the combination of convolutions and LSTM with an attention layer, as shown in Figure 5.

Clustering Result

Every spatial point has a different statistical distribution, and different models should be trained for different clusters of spatial points with similar characteristics. Because of that, we use K-means clustering to cluster the spatial points, and we use the Elbow method to find the optimal number of clusters. Figure 6 shows that the best number of clusters is 4, chosen as the maximum number of clusters with a significant distance-reduction indicator: Cluster 0, Cluster 1, Cluster 2, and Cluster 3. This paper evaluates all clusters as input candidates for building the proposed model, since every cluster has its own characteristics that determine which spatial points belong to which cluster; this might be the first work to approach the problem through such a clustering. In the Elbow method, we vary the number of clusters (K) from 1 to 10. For each value of K, we calculate the WCSS (Within-Cluster Sum of Squares), the sum of squared distances between each point and the centroid of its cluster. When we plot the WCSS against the K value, the plot looks like an elbow. As the number of clusters increases, the WCSS value starts to decrease; the WCSS value is largest when K = 1. Analyzing the graph, we see that it changes rapidly at one point, creating the elbow shape; from this point, the graph starts to move almost parallel to the X-axis. The K value corresponding to this point is the optimal K value, i.e., the optimal number of clusters.
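The Elbow computation just described can be sketched with a tiny plain-Python K-means on toy 1-D data (fixed iterations, naive initialization; not the authors' implementation). With two well-separated groups, the WCSS drops sharply from K = 1 to K = 2 and then flattens, which is the elbow.

```python
# Minimal K-means with WCSS reporting, to illustrate the Elbow method.

def kmeans_wcss(points, k, iters=20):
    centroids = points[:k]                      # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                        # assign to nearest centroid
            j = min(range(k), key=lambda j: (p - centroids[j]) ** 2)
            clusters[j].append(p)
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    # WCSS: sum of squared distances of points to their nearest centroid
    return sum(min((p - m) ** 2 for m in centroids) for p in points)

# Two well-separated 1-D groups -> the "elbow" should appear at K = 2.
points = [1.0, 1.2, 0.8, 10.0, 10.2, 9.8]
wcss = {k: kmeans_wcss(points, k) for k in (1, 2, 3)}
print(wcss[2] < wcss[1], round(wcss[2], 2))   # -> True 0.16
```

On real 89 × 89 spatial data the same curve is computed per K and inspected visually, as described above.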
As shown in Table 2, the WCSS values match those in Figure 6. Moreover, Table 3 shows the resulting locations of the four clusters.

Convolutional LSTM-AT Result

The first step of this experiment is building the proposed method, an LSTM with a modification such as an attention-mechanism layer. The challenge of building the model is looking for the best hyperparameters. As shown in Table 1, we use constant hyperparameters and build all models with the same hyperparameters but different architectures. The results show that the proposed method still performs best in the average value over spatial points. The attention-based models are more accurate and robust than the original LSTM model, reducing the number of errors significantly. This proves that the proposed method remains the most stable at reaching the minimum error over spatial points, and the smaller errors suggest this model performs much better on the CHIRPS data. All the model performances were entirely satisfactory when we look at the average MAE, since the average tests all spatial data, which have different characteristics. The RMSE and MAE of the predictions from the models in the experiments are shown in Table 4. On the CHIRPS dataset, the proposed Convolutional LSTM-AT model has the lowest error even when using the maximum value over all spatial targets. However, we can infer from Table 5 that the dataset is already well split. For future work, we will look for other ways to reduce the error of the model. Directions include how to preprocess the data and train on 3-dimensional data without losing spatial information. Besides, we will investigate more architectures and develop further spatiotemporal approaches.
We will also consider further improving the performance of the model by utilizing graph information of the area that we predict. Moreover, it would be valuable to add flood data augmentation and a physical interpretation of the model to bring the predictions closer to the ground truth.
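The evaluation metrics of Eqs. (8) and (9) used throughout the results above can be written directly in code. The observed/predicted values below are toy numbers, not data from the paper.

```python
# Direct transcription (a sketch) of Eqs. (8) and (9): RMSE and MAE
# between observed and predicted monthly rainfall series.
import math

def rmse(obs, pred):
    n = len(obs)
    return math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / n)

def mae(obs, pred):
    n = len(obs)
    return sum(abs(p - o) for o, p in zip(obs, pred)) / n

obs = [100.0, 150.0, 80.0, 200.0]    # observed monthly rainfall (mm, toy)
pred = [110.0, 140.0, 95.0, 190.0]   # model predictions (mm, toy)
print(round(rmse(obs, pred), 3), round(mae(obs, pred), 3))  # -> 11.456 11.25
```

Note that RMSE ≥ MAE always, and the gap widens when a few errors dominate, which is why the paper treats RMSE as the stricter measure for heavy-rainfall events.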
The disabilities of the arm, shoulder and hand (DASH) outcome questionnaire: longitudinal construct validity and measuring self-rated health change after surgery

Background

The disabilities of the arm, shoulder and hand (DASH) questionnaire is a self-administered region-specific outcome instrument developed as a measure of self-rated upper-extremity disability and symptoms. The DASH consists mainly of a 30-item disability/symptom scale, scored 0 (no disability) to 100. The main purpose of this study was to assess the longitudinal construct validity of the DASH among patients undergoing surgery. The second purpose was to quantify self-rated treatment effectiveness after surgery.

Methods

The longitudinal construct validity of the DASH was evaluated in 109 patients having surgical treatment for a variety of upper-extremity conditions, by assessing preoperative-to-postoperative (6–21 months) change in DASH score and calculating the effect size and standardized response mean. The magnitude of score change was also analyzed in relation to patients' responses to an item regarding self-perceived change in the status of the arm after surgery. Performance of the DASH as a measure of treatment effectiveness was assessed after surgery for subacromial impingement and carpal tunnel syndrome by calculating the effect size and standardized response mean.

Results

Among the 109 patients, the mean (SD) DASH score preoperatively was 35 (22) and postoperatively 24 (23), and the mean score change was 15 (13). The effect size was 0.7 and the standardized response mean 1.2. The mean change (95% confidence interval) in DASH score for the patients reporting the status of the arm as "much better" or "much worse" after surgery was 19 (15–23) and for those reporting it as "somewhat better" or "somewhat worse" was 10 (7–14) (p = 0.01).
In measuring effectiveness of arthroscopic acromioplasty the effect size was 0.9 and the standardized response mean 0.5; for carpal tunnel surgery the effect size was 0.7 and the standardized response mean 1.0.

Conclusion

The DASH can detect and differentiate small and large changes of disability over time after surgery in patients with upper-extremity musculoskeletal disorders. A 10-point difference in mean DASH score may be considered as a minimal important change. The DASH can show treatment effectiveness after surgery for subacromial impingement and carpal tunnel syndrome. The effect size and standardized response mean may yield substantially differing results.

Background

The disabilities of the arm, shoulder and hand (DASH) questionnaire is an upper-extremity specific outcome measure that was introduced by the American Academy of Orthopedic Surgeons in collaboration with a number of other organizations [1]. The rationale behind the use of one outcome measure for different upper-extremity disorders is that the upper extremity is a functional unit [2]. In this respect, the DASH would be suitable because of its property of being mainly a measure of disability. In addition to decreasing the administrative burden associated with using different disease-specific measures, one of the main concepts behind developing the DASH was to facilitate comparisons among different upper-extremity conditions in terms of health burden [1]. The DASH is now available in several languages (http://www.dash.iwh.on.ca), and studies of reliability and validity have been published for the original version [3] as well as for the German [4], Italian [5], Spanish [6] and Swedish [7] versions. In addition, research studies regarding a French [8] and a Dutch [9] version of the DASH have been published. The DASH is being increasingly used in cross-sectional studies.
To enhance the use of the DASH in prospective studies (such as assessment of effectiveness of different treatment methods), further studies of the instrument's ability to detect change over time would be helpful, both for interpretation of score changes and for sample size calculations. Different aspects of an instrument's ability to measure change have been highlighted, including studying changes over time for groups or individuals and comparing groups at one occasion [10]. The analysis of score change is commonly referred to as responsiveness [11-15], but the term longitudinal construct validity has also been used [16], and it has been advocated that responsiveness is a part of the validity analysis [15]. There is no consensus on the nomenclature or the appropriate statistical analysis, and different suggestions have been made [12,17-19]. To facilitate prospective research, longitudinal studies of the instrument's ability to detect changes and to identify smaller and larger changes in health status as perceived by the patient are needed. We believe that the concept of detecting change over time is part of the validity assessment and may therefore be referred to as longitudinal construct validity. To date, we have found only one published study concerning the longitudinal construct validity of the DASH in a variety of orthopedic disorders of the upper extremity [3]. Considering the nature of the instrument, longitudinal construct validity can be assessed among a group of patients with different upper-extremity disorders. In contrast, when using the instrument in patients with a particular diagnosis, the effectiveness of a specific treatment can be assessed. To analyze treatment effectiveness the direction of change becomes important, as opposed to analyzing longitudinal construct validity, which concerns the ability to detect change irrespective of whether the change is improvement or worsening.
It would therefore be important to study the longitudinal construct validity of the DASH as well as its performance as a measure of treatment effectiveness. The main purpose of this study was to assess the longitudinal construct validity of the DASH among patients undergoing surgery for a variety of upper-extremity disorders. The second purpose was to quantify self-rated treatment effectiveness after surgery for subacromial impingement and carpal tunnel syndrome when using the DASH. To ensure reliability of the DASH in this study, we also aimed to determine the internal consistency of the scale in each patient population studied.

The DASH questionnaire

The main part of the DASH is a 30-item disability/symptom scale concerning the patient's health status during the preceding week [20]. The items ask about the degree of difficulty in performing different physical activities because of the arm, shoulder, or hand problem (21 items), the severity of each of the symptoms of pain, activity-related pain, tingling, weakness and stiffness (5 items), as well as the problem's impact on social activities, work, sleep, and self-image (4 items). Each item has five response options. The scores for all items are then used to calculate a scale score ranging from 0 (no disability) to 100 (most severe disability). The score for the disability/symptom scale is called the DASH score. In this study we used the Swedish version of the DASH [7].

Patients

Patients with upper-extremity musculoskeletal conditions planned for surgical treatment at an orthopedic department were considered for inclusion in this study. Exclusion criteria were age below 18 years, symptom duration of less than 2 months, or inability to complete questionnaires due to cognitive impairment or language difficulties. The DASH was completed preoperatively by 118 consecutive eligible patients [7].
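The scoring just described (30 items, five response options each, rescaled to 0-100) can be made concrete. Below is a minimal Python sketch assuming the commonly published DASH rule: items coded 1-5, score = (mean item response − 1) × 25, valid when at most 3 of the 30 items are missing. The exact missing-item handling used in this study is not stated, so treat this as an illustration rather than the authors' implementation.

```python
def dash_score(responses):
    """DASH disability/symptom score on a 0-100 scale.

    `responses` is a list of 30 item answers coded 1 (no difficulty)
    to 5 (unable); None marks a missing item. The commonly published
    rule allows up to 3 of the 30 items to be missing (assumption:
    this study used the same rule).
    """
    answered = [r for r in responses if r is not None]
    if len(responses) != 30 or len(answered) < 27:
        raise ValueError("need 30 items with at most 3 missing")
    mean_item = sum(answered) / len(answered)
    # Map the item-mean range 1..5 onto 0..100.
    return (mean_item - 1.0) * 25.0

print(dash_score([1] * 30))  # 0.0  (no disability)
print(dash_score([5] * 30))  # 100.0 (most severe disability)
```

The rescaling keeps the score interpretable regardless of how many items were answered, which is why a handful of missing items can be tolerated.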
Postoperatively, 9 (8%) of the patients did not respond and the remaining 109 patients completed the DASH after a minimum followup time of 6 months (Table 1). The 2 largest diagnostic groups comprised patients who had undergone arthroscopic acromioplasty because of subacromial impingement and open carpal tunnel release because of carpal tunnel syndrome. Complete followup could be obtained for all patients in these 2 subgroups (Table 2). The followup questionnaire also included an item regarding change in health status after surgery. It inquired about the status of the operated arm compared to its status preoperatively (5 response options: much better, somewhat better, unchanged, somewhat worse, much worse). This item was accidentally missing in the initially mailed questionnaires and was therefore only completed by the last 83 participants.

Analyses

To assess one aspect of the reliability of the DASH scale when used in this patient population, the internal consistency was calculated using Cronbach alpha [21] for the total population as well as for the subgroups with subacromial impingement and carpal tunnel syndrome. For each of these populations, preoperative, postoperative and change scores were computed for the DASH. These scores were subjected to the one-sample Kolmogorov-Smirnov test to assess normality of distribution. As a measure of longitudinal construct validity, the effect size and standardized response mean were calculated for the DASH disability/symptom scale. The effect size was calculated as the mean difference between the baseline scores and the followup scores (i.e., mean change scores) divided by the standard deviation of the baseline scores. The standardized response mean was calculated as the mean change scores divided by the standard deviation of the change scores.
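The two change indices defined above follow directly from their definitions: mean change divided by the SD of the baseline scores (effect size), and mean change divided by the SD of the change scores (standardized response mean). A minimal Python sketch; the function and variable names are ours, not the authors':

```python
from statistics import mean, stdev

def effect_size(baseline, followup):
    """Effect size: mean change divided by the SD of baseline scores."""
    change = [b - f for b, f in zip(baseline, followup)]
    return mean(change) / stdev(baseline)

def standardized_response_mean(baseline, followup):
    """SRM: mean change divided by the SD of the change scores."""
    change = [b - f for b, f in zip(baseline, followup)]
    return mean(change) / stdev(change)

# Toy data: DASH scores fall (improve) by 10 points on average.
pre = [30, 40, 50]
post = [22, 28, 40]
print(effect_size(pre, post))                  # 1.0
print(standardized_response_mean(pre, post))   # 5.0
```

The toy example also illustrates why the two indices diverge: the effect size is scaled by how heterogeneous the group was at baseline, the SRM by how heterogeneous the change itself was.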
The item asking how the patient rated the status of the operated arm compared with its status preoperatively was used as the external criterion for change in health status after surgery. Because detecting both improvement and worsening reflects longitudinal construct validity, the preoperative-to-postoperative score differences were considered to be in the same direction, and the mean change in DASH score and the 95% confidence interval (CI) were calculated for the patients with the responses "much better" or "much worse" and those with the responses "somewhat better" or "somewhat worse". The difference in the mean change scores between these two groups was assessed with the t-test. For patients who reported that no change had occurred, the mean change in DASH score and the 95% CI were calculated (scores used in their actual direction). The mean change in DASH score for the patients who did not and those who did receive the transition item regarding change in the status of the operated arm was compared with the t-test. To assess the size of health change after surgery for subacromial impingement and carpal tunnel syndrome (i.e., treatment effectiveness), the change scores were used in their actual direction and the effect size and standardized response mean were calculated. The relationship between the DASH change score and time since surgery (months) was analyzed with the Pearson correlation coefficient (r).

Reliability

The Cronbach alpha coefficient was above 0.9 for the DASH disability/symptom scale, indicating good internal consistency when used in this patient population (Table 3).

Longitudinal construct validity

Among the 109 participants the mean (SD) change in DASH score was 15 (13) when all changes in scores (improvement or worsening) were calculated as having the same direction. The effect size was 0.7 and the standardized response mean 1.2 (Table 4).
Comparison of measures of treatment effectiveness

For the group with subacromial impingement treated with arthroscopic acromioplasty, the effect size was 0.9 and the standardized response mean 0.5 (Table 4). For the group with carpal tunnel syndrome treated with open carpal tunnel release, the effect size was 0.7 and the standardized response mean 1.0.

Correlation between score change and time since surgery

Among all 109 patients, no correlation was found between the DASH change score and time since surgery (r = 0.06, p = 0.56). The correlation was weak-to-moderate but statistically non-significant among the patients treated with arthroscopic acromioplasty (r = 0.29, p = 0.15) and those treated with carpal tunnel release (r = 0.34, p = 0.16).

(Table footnotes: *Higher score (0-100) indicates greater disability. †All changes in scores (improvement or worsening) calculated as having the same direction, to assess longitudinal validity of the DASH as opposed to assessing treatment effectiveness.)

Discussion

The importance of monitoring the effectiveness of treatment is well recognized and, furthermore, is the foundation of evidence-based health care. For this purpose, instruments that have the ability to detect changes and can differentiate a small difference from a large difference are needed. In a previous study, the DASH score change was reported for 172 patients with different upper-extremity disorders (such as shoulder arthritis and carpal tunnel syndrome). The mean change between baseline and followup scores 12 weeks after treatment was 13 (SD 17), the effect size was 0.6 and the standardized response mean was 0.8 [3]. The changes were also shown for patients rating their problem as better (mean score change 17, effect size 0.75, standardized response mean 1.1) and patients rating their function as better (mean score change 20, effect size 0.8, standardized response mean 1.2).
Also, based on the results of the present study, it appears that the DASH has the ability to detect changes on a group level corresponding to the patients' perception after surgery in a variety of upper-extremity disorders. A significant difference in DASH scores between patients responding "much better/worse" and "somewhat better/worse" was found, showing the instrument's ability to discriminate between these degrees of change. A mean score change of 19 indicated a change in disability rated as "much better/worse" and a mean score change of 10 as "somewhat better/worse". It has been suggested that the score change rated as "somewhat changed" could be defined as the limit for minimal important change [18]. This information could then be used for power calculations when planning prospective studies. In a recent study a DASH score change of 15 has been suggested to discriminate between improved and unimproved patients [3]. This was based on the patients' responses to a question about "being able to cope with the problem and do what you would like to do", with a response change from "not being able to cope" before treatment to "being able to cope" at followup considered as the criterion for improvement [3]. However, we believe that a change in disability can be important even if the patients are not able to do everything they want to do or, at a particular time, are not able to cope with the problem. Future investigations are needed to determine whether the DASH is sensitive to milder degrees of impact other than that of surgery. The difference noted in the group stating no change (mean score change -0.3) can be seen as the difference that occurred by chance and was similar to the score change previously reported [3,7]. A difference of this size should not be considered as a real change of upper-extremity disability. In the analysis of health transition only the last 83 patients were included, because the item was accidentally missing in the initially mailed questionnaires.
The mean change in DASH score did not significantly differ between the patients who did not receive and those who responded to the transition item, suggesting that it is unlikely the missing item could have substantially influenced the results. We chose to use self-rated change of health status in the operated arm as the external criterion in order to ensure that it did not capture global health changes not related to the upper extremities. The minimum followup time in the present study was 6 months, and the latest response was received 21 months after surgery. The minimum followup time was chosen as it was expected to be sufficient to show improvement after surgery in most disorders. As shown in the correlation analysis, time since surgery had, within this followup period, only a weak-to-moderate and statistically non-significant association with the change in DASH scores after arthroscopic acromioplasty and carpal tunnel release. However, the difference in followup time is a limitation that can have implications, particularly when interpreting the size of change in DASH score for the assessment of treatment effectiveness. The possible implication of response shift also needs to be evaluated in future studies. In this study the DASH demonstrated high Cronbach alpha values, indicating an excellent internal consistency that is adequate for group as well as for individual comparisons [22]. These results support the use of the DASH to measure changes in upper-extremity function on an individual level as well. However, for individual patient assessment with the DASH, the magnitude of score change has to be studied on an individual level [17]. It is important to note that in the present study only longitudinal construct validity on a group level has been analyzed. The treatment effectiveness calculations showed that for arthroscopic acromioplasty the effect size was larger than the standardized response mean, while for carpal tunnel release the opposite was shown.
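For reference, the Cronbach alpha figures discussed above can be computed from item-level responses with the standard formula alpha = k/(k−1) × (1 − Σ item variances / variance of the total score). A small Python sketch, illustrative only (the statistical software the authors used is not specified):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach alpha for a k-item scale.

    `items` is a list of k lists, one per item, each holding the
    responses of the same n patients in the same order.
    """
    k = len(items)
    # Total score for each patient across all items.
    totals = [sum(patient) for patient in zip(*items)]
    sum_item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - sum_item_var / variance(totals))
```

Two items that move in perfect lockstep give alpha = 1.0, while items that agree only partly pull alpha down; values above 0.9, as reported here, indicate that the 30 DASH items behave as a highly consistent scale.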
This illustrates the difficulties with interpretation of such calculations when only one of the analyses is presented. Since the effect size depends on the homogeneity of the group preoperatively and the standardized response mean depends on the homogeneity of the change in disability, these calculations will by nature differ in almost any group. Both calculation methods are common; however, little has been discussed about the limitations associated with these analyses, although they have been highlighted [15,18]. The use of the DASH in other populations with similar diagnostic groups and interventions is needed to show the degree of consistency in the estimates of treatment effectiveness.

Conclusions

The DASH can detect and differentiate small and large changes in disability over time after surgery in patients with upper-extremity musculoskeletal disorders. A 10-point difference in mean DASH score might be considered as a minimal important change. The DASH can show self-rated treatment effectiveness after surgery for carpal tunnel syndrome and subacromial impingement. The effect size and standardized response mean (commonly used indices of the magnitude of health change measured by questionnaires) may yield substantially differing results.
Anaerobic threshold using sweat lactate sensor under hypoxia

We aimed to investigate the reliability and validity of sweat lactate threshold (sLT) measurement based on the real-time monitoring of the transition in sweat lactate levels (sLA) under hypoxic exercise. In this cross-sectional study, 20 healthy participants who underwent exercise tests using respiratory gas analysis under hypoxia (fraction of inspired oxygen [FiO2], 15.4 ± 0.8%) in addition to normoxia (FiO2, 20.9%) were included; we simultaneously monitored sLA transition using a wearable lactate sensor. The initial significant elevation in sLA over the baseline was defined as sLT. Under hypoxia, real-time dynamic changes in sLA were successfully visualized, including a rapid, continual rise until volitional exhaustion and a progressive reduction in the recovery phase. Repeat determinations of sLT showed high intra- and inter-evaluator reliability, with intraclass correlation coefficients [95% confidence interval] of 0.782 [0.607–0.898] and 0.933 [0.841–0.973], respectively. sLT correlated with the ventilatory threshold (VT) (r = 0.70, p < 0.01). A strong agreement was found in the Bland–Altman plot (mean difference/mean average time: −15.5/550.8 s) under hypoxia. Our wearable device enabled continuous and real-time lactate assessment in sweat under hypoxic conditions in healthy participants with high reliability and validity, providing additional information to detect anaerobic thresholds in hypoxic conditions.
It is presumed that hypoxic training helps improve endurance performance in athletes [1]. Traditional high-altitude training refers to a state where atmospheric and oxygen pressure decrease and athletes are exposed to chronic hypobaric hypoxia for many weeks [2]. Recent studies on normobaric hypoxic exercise have investigated the impact of the recently popular live low-train high altitude interventions on athletes' lifestyles [3,4], as prolonged exposure to low-pressure conditions is not always feasible (travel time, engagement, and expenses) and can lead to health problems [5,6]. Anaerobic threshold (AT) and peak oxygen uptake (peak VO2) should be routinely assessed in hypoxic conditions to practice efficient fitness training during hypoxia [7]. To date, the ventilatory threshold (VT), calculated as a noninvasive index of metabolic response to incremental exercise, has been used to determine AT [8,9]. The VT assessment method is beneficial; however, VT assessment requires an expensive analyzer and expertise due to the difficulty in confirming VT based on the oscillations in minute ventilation and inconsistencies among several factors [10]. The difference in expertise is reported to worsen the VT determination agreement [11]. Moreover, the respiratory gas analyzer is not readily available in sports settings. Therefore, there is an urgent need to apply an innovative and simple system to determine AT with high reliability for fitness training under hypoxia.
Flexible, wearable sensing devices can yield vital information about the underlying physiology of a human participant in a continuous, real-time, and noninvasive manner [12,13]. Sampling human sweat, rich in physiological information, can enable noninvasive monitoring [14]. We developed a sweat sensor to monitor sweat lactate levels (sLA) in real time during progressive exercise in the clinical setting, investigating its use in detecting AT in healthy individuals and patients with cardiovascular diseases [15]. sLA has been reported to not reflect blood lactate during exercise [16,17]; however, our research group has examined sLA transitions during incremental load exercise and reported that the sweat lactate threshold strongly approximated AT by focusing on the inflection point where the value increases rapidly during incremental exercise, not on the absolute value [15,18]. Our sLA sensor is portable and easy to carry, enabling convenient measurements in various environments, and the continuous collection of sLA values at 1 Hz promises a simpler determination of the inflection point. Moreover, the need for invasive collection methods, including blood collection, is undesirable considering the human resources for multi-measurements, the possibility of any person performing the evaluation, and acceptance by the evaluation target. Under hypoxia, some researchers previously reported AT evaluation results [7]. However, similar to normoxia, it is problematic that the VT evaluation method must be applied to broadly cover the sports setting. Therefore, we aimed to investigate the validity of AT estimation and the reliability of the sLA continuously obtained using our sLA sensor during exercise under hypoxia in healthy participants.
Results

The baseline characteristics of the healthy participants are summarized in Table 1. The participants were males (100%) with a median (IQR) age of 21 (20-21) years. The temperature and humidity were 28 ± 1 °C and 62 ± 6% under hypoxia, respectively.

Figures 1 and 2 show the sLA during exercise in hypoxia. During the exercise tests, dynamic changes in the sLA were continuously measured and projected onto the wearable device without delay, even under hypoxia. Because of the lack of sweat, the lactate biosensor measured a negligible current response at the commencement of cycling activity. During exercise, sLA increased drastically, and the sweat rate continuously increased as cycling continued until volitional exhaustion. This drastic sLA increase was not associated with the onset of sweating (Fig. 1). Contrary to sLA, the heart rate and VO2 gradually increased from incremental-load exercise initiation to its end (Fig. 2). At the end of the exercise period, the sLA continued to decrease relatively slowly, mirroring the decrease in heart rate. The results under normoxia are shown in Supplementary Figs. 1 and 2. We easily identified the conversion from steady low lactate values to a continuous increase under hypoxia. Repeated sLT and VT determinations by the same evaluator demonstrated high intra-evaluator reliability (Fig. 3, Table 2, Supplementary Fig. 3, and Supplementary Table 1). The relationships between sLT and VT are shown in Fig. 4A and Supplementary Fig. 4A, describing the strong relationships between each threshold (normo, r = 0.69; hypo, r = 0.70). The Bland-Altman plot revealed that the mean difference between each threshold was 4.9 s under normoxia and −15.5 s under hypoxia, and there was no bias between the mean values, displaying strong agreement between sLT and VT (Fig. 4B and Supplementary Fig. 4B).
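The Bland-Altman analysis above reduces to the mean of the paired differences (the bias), conventionally accompanied by 95% limits of agreement at bias ± 1.96 SD of the differences. A minimal Python sketch; the limits-of-agreement step follows the usual convention and is not explicitly described in the text:

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bias and conventional 95% limits of agreement between two
    paired measurement series (e.g., sLT and VT times in seconds)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Toy data: method a reads systematically 10 s lower than method b.
bias, (lower, upper) = bland_altman([100, 200, 300], [110, 210, 310])
print(bias)  # -10
```

A bias near zero with narrow limits, as reported for sLT versus VT, indicates that the two threshold determinations can be used interchangeably on average.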
Discussion

The noninvasive sLA sensor enabled continuous and real-time measurement of sLA during an exercise test under hypoxia. Furthermore, sLT determination had high intra- and inter-evaluator reliability, and sLT was strongly correlated with VT. Real-time sweat lactate monitoring could be applied to detect the aerobic threshold, even under hypoxia (Fig. 5).

Lactate levels are measured to track an individual's performance and exertion level [19,20]. Blood lactate levels are measured by athletes or their supporters [21,22], but these are not continuous, real-time measurements, limiting their utility to applications where stationary, infrequent tests are sufficient. In particular, applying bLT relies on the measurement's reliability; in this study, the intra- and inter-evaluator reliability for bLT was low. Conversely, even under hypoxia, our devices captured the sLA during fitness testing in a real-time, noninvasive, and continuous manner at 1 Hz, instead of the cumulative values of the conventional method, which detects the "timing of change" in a real-time and sensitive manner. Therefore, it is easy to identify the inflection point (sLT) from the plots of the sLA values. Using sLT demonstrated lower intra- and inter-observer bias and superior determination accuracy. Another possible explanation to support this positive result is that several operations, including the exchange of the sensor chip, cleaning the upper arm to which the sensor was fixed, and flushing out any residual sweat from the duct in the perspiration meter, could eliminate the bias due to contamination from previous experiments or original sweating. sLA has been reported to not reflect blood lactate during exercise [16,17]; however, our data showed that the AT point coincided with the inflection point in the sLA level during progressive exercise, consistent with the findings of previous reports [15,18]. This could be because an increase in lactate production from muscle cells, reflecting LT, may
induce a simultaneous rise in sLA levels through changes in autonomic nervous balance, hormones, acid-base equilibrium, and metabolic dynamics [23,24], similar to VT [25]. A previous study has demonstrated a rapid increase in blood catecholamine concentrations during incremental exercise loads [26]. Furthermore, it has also been indicated that sweat gland metabolism is activated by catecholamines [27]. Therefore, we are evaluating the timing of physiological responses to increasing exercise loads using completely different analytes, not estimating the bLA levels by observing the sweat lactate levels.

Measuring VT and peak VO2 with respiratory gas analysis helps in efficient training under hypoxia. However, it is often difficult to determine VT because of inconsistencies among the several factors required for detecting VT, such as the ventilation (VE)/oxygen uptake (VO2) or carbon dioxide production (VCO2)/VO2 slope and oscillations in minute ventilation [10]. Further, a respiratory gas analyzer is unavailable in a small hypoxic booth because of its size. Moreover, with a facemask on, respiratory gas cannot be collected during hypoxic exercise. In addition, in a respiratory infection epidemic such as COVID-19, using respiratory gas analyzers has become difficult due to the possibility of cross-infection. Determining sLT using only sweat-based monitoring could overcome these problems, and the newly developed device enables AT measurements in various hypoxic environments (a small private booth and facemask).
It has been reported that the sweat rate decreases in hypoxia [28]. As our sensor shows no response in the absence of sweating, evaluating the sweat rate is paramount to successfully determining sLT in hypoxia. This study quantified the amount of sweating per unit area near the sensor; the results showed no difference in the local sweat rate during exercise under hypoxia compared with that under normoxia. The relationship between the local sweat rate/response of the sLA sensor, humidity, and temperature during exercise warrants further investigation.

The device used in our study is suitable for use in remote monitoring or remote training settings during isolation measures, such as those taken during a respiratory infection epidemic. Furthermore, real-time assessments of sLA through a wireless data transfer system can offer a rigorous training menu under hypoxia based on the day-to-day physical conditions of trainees. In addition, exercise under hypoxia has been recognized as a new therapeutic modality for health promotion and disease prevention or treatment, such as for diabetes [29], cardiovascular diseases [30], hypertension [31], obesity [32], and age-related diseases [33]. Disease prevention and treatment can be provided more efficiently and safely by combining sLA sensors with exercise under hypoxia.
The study has some limitations. First, due to the observational study design, we could not exclude the influence of a selection bias. Second, our study had a relatively small number of cases. Third, the current study included healthy college-aged male individuals. The findings might be applicable to various age groups and sexes; however, further research including females and young athletes is required, considering the functional differences in sweating between the sexes. Fourth, the sLA sensor used in this study displayed the current value, not the sLA concentration. Conversion from the current value to a concentration is possible; however, displaying the current values is sufficient for determining the inflection point, which is based on the departure from a constant sLA value during exercise. The effect of sLA dilution by a high sweat rate on sLT determination is minimal due to the low sweat rate at AT and therefore does not negate our study's result. Finally, exercise training has been performed under various hypoxic conditions; however, only a hypoxia of 15.5% was verified. Further verification is required to overcome these limitations.

In conclusion, the noninvasive sweat lactate sensor enabled continuous and real-time measurement of sweat lactate during exercise under hypoxia. The sweat lactate threshold can also be reliably determined by nonexperts, even under hypoxia. Real-time sweat lactate monitoring could be used to detect the aerobic threshold in a noninvasive and feasible manner under hypoxia and normoxia. It is expected that these findings will enhance the effectiveness of exercise under hypoxia. This was the first study to show real-time monitoring of sLA during progressive exercise under hypoxia. Given the difficulty in determining VT, such as under hypoxia, sLA monitoring could be beneficial in improving threshold detection with high reliability.
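The sLT definition used throughout, the first sustained rise of the 1-Hz sweat-lactate current above its baseline, was determined visually by the evaluators in this study. An automated analogue might look like the sketch below; the "baseline mean + 3 SD" criterion and the 5-sample persistence requirement are our assumptions for illustration, not the authors' criteria.

```python
from statistics import mean, stdev

def sweat_lactate_threshold(times, currents, baseline_end, k=3.0, run=5):
    """First sustained rise of the sweat-lactate current above baseline.

    `times` are 1-Hz timestamps in seconds, `currents` the sensor
    readings (uA). The baseline mean and SD are taken from samples
    before `baseline_end`; sLT is the start of the first window of
    `run` consecutive samples exceeding mean + k*SD (assumed criteria).
    """
    base = [c for t, c in zip(times, currents) if t < baseline_end]
    limit = mean(base) + k * stdev(base)
    streak = 0
    for t, c in zip(times, currents):
        if t < baseline_end:
            continue
        streak = streak + 1 if c > limit else 0
        if streak == run:
            return t - run + 1  # start of the sustained rise (1-Hz data)
    return None  # no threshold reached before exhaustion

# Synthetic trace: 10 s of noisy baseline, then an abrupt rise.
trace = [0.1, 0.2] * 5 + [5.0] * 10
print(sweat_lactate_threshold(list(range(20)), trace, baseline_end=10))  # 10
```

The persistence requirement is what distinguishes a genuine inflection from a single noisy sample, mirroring the visual judgement the evaluators applied.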
Experimental approach to the problem

We conducted a cross-sectional study with 20 healthy participants who underwent exercise tests with respiratory gas analysis under hypoxia or normoxia and simultaneously monitored changes in sLA using a wearable lactate sensor, to investigate the capability of the sweat lactate sensor to monitor sLA under hypoxia and the relationship between sLT and VT. In addition, intraclass correlations were determined for the intra- and inter-evaluator reliability of each threshold in this study.

Subjects

Participants aged 20-80 years were recruited through a web system in June 2021. The exclusion criteria were receiving medication, having comorbidities such as hypertension, diabetes, and active lung diseases, and having a low local sweat rate of < 0.4 mg/cm2/min at the upper arm during maximal exercise. This sweat rate threshold was defined based on previous reports [15] and preliminary studies. Twenty healthy participants were enrolled, including athletes and those with a broad spectrum of aerobic capacities and fitness levels. Notably, all participants exercised regularly more than twice weekly.

The study protocol was approved by the Institutional Review Board (IRB) of Keio University School of Medicine (approval number 20190229), and the study was conducted following the principles of the Declaration of Helsinki. Verbal informed consent was obtained from all participants because the IRB approved the use of verbal consent following the Japanese guidelines for clinical research. Verbal consent was recorded as an experimental note.
Procedures

The two exercise tests, separated by a minimum of 2 days, were performed using an electromagnetically braked ergometer (POWER MAX V3 Pro, Konami Sports Co., Ltd., Tokyo, Japan) with respiratory gas analysis under hypoxia (hypo; a fraction of inspired oxygen [FiO2] of 15.4 ± 0.8%, equivalent to a simulated altitude of 2500 m) or normoxia (normo; FiO2, 20.9%). Hypoxic conditions were created in an exercise booth with an oxygen filtration hypoxic generator (Hypoxico Everest Summit II; WILL Co., Tokyo, Japan) by insufflating nitrogen to a target FiO2 of 15.5% [34]. During exercise, the sLA was monitored using an sLA sensor (Grace Imaging Inc., Tokyo, Japan) attached to the upper arm, and the local sweat rate was measured at a sampling rate of 1 Hz in the same area as the sLA sensor using a perspiration meter (SKN-2000M; SKINOS Co., Ltd., Nagano, Japan). The perspiration meter was confirmed to return to zero before each new experiment by flushing out any residual sweat from the duct. Heart rate was monitored using Duranta (Zaiken, Tokyo, Japan), and blood lactate levels were measured using a standard enzymatic method on a lactate analyzer (Lactate Pro2®, ARKRAY, Kyoto, Japan).

On the day of the exercise test, the participants avoided any prior heavy physical activity. The participants performed the test upright on an electronically braked ergometer. Following a 2-min rest to stabilize the heart rate and respiratory condition, the participants performed a 4-min warm-up pedaling at 20 W. Then, they exercised at increasing intensity until they could no longer maintain the pedaling rate (volitional exhaustion). The resistance was increased in 25-W increments from 50 W at 1-min intervals. Once the exercise tests were terminated, the participants were instructed to stop pedaling and remain on the ergometer for 3 min.
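The ramp protocol just described (2-min rest, 4-min warm-up at 20 W, then 50 W rising by 25 W every minute until exhaustion) can be expressed as a target-power schedule, which is convenient for checking an ergometer log against the intended workload. A sketch under the stated timings; the function name is ours:

```python
def target_power(t):
    """Intended ergometer workload (watts) at time t seconds into
    the test: 2-min rest, 4-min warm-up at 20 W, then 50 W rising
    by 25 W at each completed 1-min stage until exhaustion."""
    if t < 120:      # rest phase
        return 0
    if t < 360:      # warm-up phase
        return 20
    stage = (t - 360) // 60  # completed 1-min incremental stages
    return 50 + 25 * stage

# First incremental stage starts at t = 360 s at 50 W,
# second stage at t = 420 s at 75 W, and so on.
print(target_power(360), target_power(420))  # 50 75
```

Knowing the target power at every second makes it straightforward to align the 1-Hz sweat-lactate, heart-rate, and gas-exchange streams with exercise intensity.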
The expired gas flow collected through the mask was measured using a breath-by-breath automated system (Aeromonitor®, Minato Medical Science Co., Ltd., Osaka, Japan). This system was subjected to a three-way calibration process involving a flow volume sensor, gas analyzer, and delay time calibration. The gas analyzer was calibrated under hypoxia using 8% O2, assuming a minimum oxygen concentration of 8% in exhaled air during hypoxic exercise. Respiratory gas exchange, including VE, VO2, and VCO2, was continuously monitored and measured using a 10-s average. VT was determined using the ventilatory equivalent, excess carbon dioxide, and modified V-slope methods 10 through manual operating software. First, two of the three experienced researchers independently and randomly evaluated each participant's VT using the three methods. The researchers used all three methods to assess concurrent breakpoints and eliminate false breakpoints. Second, if the VO2 values determined by the independent researchers were within 3%, the VO2 values from the two investigators were averaged. Third, if the VO2 values determined by the independent evaluators were not within 3% of one another, a third researcher independently determined VO2. The third VO2 value was then compared with those obtained by the initial investigators. If the adjudicated VO2 value was within 3% of either of the initial investigators, the two VO2 values were averaged. Blood lactate values were obtained by auricular pricking and gentle squeezing of the ear lobe to obtain a capillary blood sample at rest, during warm-up, and every minute after the start of progressive intensity. The samples were immediately analyzed for whole-blood lactate concentrations (mmol/L). bLT was determined through graphical plots of the bLA value vs.
time 8. Visual interpretation was made independently for each participant by two experienced researchers to locate the first rise from baseline. If the independent determinations of the stage at LT differed between the two researchers, a third researcher adjudicated the difference by independently determining LT. The three researchers then jointly agreed on the LT point.

The sLA was measured using an sLA sensor, which quantifies lactate concentration as a current value because it reacts with sLA and generates an electric current 15. The sLA sensing system comprises a disposable sensor chip and a sensor device. The sensor chip generates a current proportional to the lactate concentration through an enzyme immobilized on its surface that oxidizes lactate, generating hydrogen peroxide. In addition, a protective film formed by exposure to a UV lamp allows immediate responsiveness (response delay < 1 s) and sustainability without the enzyme reacting all at once 15. The current value can be obtained as continuous data within 0.1-80 μA in 0.1-μA increments. The sLA sensor responded linearly to lactate concentrations, especially in the 0-5 mmol/L range, which is the most relevant range for determining the LT because the LT typically corresponds to lactate values of 2 to 4 mmol/L 15.
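The within-3% agreement rule used above for the VO2 at VT (average when two raters agree, otherwise fall back to a third rater) can be sketched as a small function. This is an illustrative sketch only: the study does not state which value serves as the denominator for the 3% tolerance, so `max(v1, v2)` is an assumption here.

```python
def adjudicate(v1, v2, v3=None, tol=0.03):
    """Average two raters' values if they agree within tol (here 3%);
    otherwise compare a third rater's value against each initial value."""
    if abs(v1 - v2) <= tol * max(v1, v2):
        return (v1 + v2) / 2
    if v3 is not None:
        # disagreement: accept the pair (adjudicator, initial rater) that agrees
        for v in (v1, v2):
            if abs(v3 - v) <= tol * max(v3, v):
                return (v3 + v) / 2
    return None  # unresolved without further adjudication
```

For example, two VO2 readings of 30.0 and 30.5 mL/kg/min agree within 3% and are averaged, whereas 30.0 vs. 35.0 would be referred to the third rater.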
Moreover, it has also been validated that the sLA values obtained from this sensor can show a sufficiently significant difference to determine the inflection point under various sweat environments 35. After calibration using saline for 2 or 3 min, the sensor chip connected to the sensor device was attached to the right upper arm of the participants, which had been cleaned with an alcohol-free cloth to eliminate the influence of existing sweat. In addition, the data were recorded at a sampling frequency of 1 Hz and transmitted to a mobile application via Bluetooth. The recorded data were converted to moving average values over 13-s intervals and underwent zero correction using the baseline value. sLT was defined as the first significant increase in sLA above baseline based on graphical plots 15. Three researchers, independent of those who analyzed respiratory gas exchange, agreed on the point of sLT.

Statistical analyses

The results are represented as mean ± standard deviation for continuous variables and percentages for categorical variables, as appropriate. ICC was determined for intra- and inter-evaluator reliability of each threshold 36. The intra-evaluator reliability was tested by one of the blinded reviewers. The inter-observer reliability was tested by estimating each threshold using three blinded reviewers. The relationship between exercise time at sLT and VT was investigated using Pearson's correlation coefficient test. In addition, the Bland-Altman technique was applied to verify the similarities among the different methods 37. The graphical representation of the difference between the methods and the average was compared. Statistical significance was set at two-tailed p-values < 0.05. All statistical analyses were performed using IBM SPSS Statistics for Windows, version 27.0 (IBM Corporation, Armonk, NY, USA).

Figure 2.
Measured parameters in hypoxia. The graph shows the measured parameters [(a) VO2/body weight, (b) heart rate, (c) sweat lactate, (d) sweat rate] at rest, warm-up, VT, and peak in hypoxia. Data are shown as mean (± standard deviation). VO2 oxygen uptake, VT ventilatory threshold, HR heart rate, sLA sweat lactate, SR sweat rate.

Figure 3. Reliability testing of the time at sLT determined by the same evaluator in hypoxia. (a) The graph shows the relationship between the repeatedly determined sweat lactate threshold (sLT) by the same evaluator. (b) The graph shows the Bland-Altman plots, which indicate the respective differences between the repeatedly determined sLT by the same evaluator (y-axis) for each individual against the mean of the time at the repeatedly determined sLT (x-axis) in hypoxia. R correlation coefficient, p p-value, VT ventilatory threshold, sLT sweat lactate threshold.

Figure 4. Validity testing of the time at VT and sLT in hypoxia. (a) The graph shows the relationship between the time from the start of the measurement (seconds) at VT and sLT. (b) The graph shows the Bland-Altman plots, which indicate the respective differences between the time from the start of measurement (s) at the VT and sLT (y-axis) for each individual against the mean of the time at the VT and sLT (x-axis) in hypoxia. R correlation coefficient, VT ventilatory threshold, sLT sweat lactate threshold.

Table 1. Baseline characteristics of participants. Data are presented as median (IQR). BMI body mass index.

Table 2. Intra-evaluator reliability of sweat lactate threshold determination in hypoxia. ICC intraclass correlation, sLT sweat lactate threshold, bLT blood lactate threshold, VT ventilatory threshold, SD standard deviation.
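The signal conditioning applied to the raw sensor current in the Procedures above (a 13-sample moving average at 1 Hz sampling, followed by zero correction against the baseline value) can be sketched as follows. Whether the window is trailing or centered, and exactly how the baseline is chosen, are assumptions in this sketch.

```python
def condition_signal(current_ua, window=13):
    """Trailing moving average over `window` samples (13 s at 1 Hz),
    then subtract the first smoothed value so the trace starts at zero."""
    smoothed = []
    for i in range(len(current_ua)):
        seg = current_ua[max(0, i - window + 1):i + 1]
        smoothed.append(sum(seg) / len(seg))
    baseline = smoothed[0]
    # zero correction: express the trace relative to the baseline
    return [round(v - baseline, 6) for v in smoothed]
```

A constant input thus maps to an all-zero trace, and only increases above the baseline (candidate sLT inflections) remain positive.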
Cloud-based storage and computing for remote sensing big data: a technical review

ABSTRACT
The rapid growth of remote sensing big data (RSBD) has attracted considerable attention from both academia and industry. Despite the progress of computer technologies, conventional computing implementations have become technically inefficient for processing RSBD. Cloud computing is effective in activating and mining large-scale heterogeneous data and has been widely applied to RSBD over the past years. This study performs a technical review of cloud-based RSBD storage and computing from an interdisciplinary viewpoint of remote sensing and computer science. First, we elaborate on four critical technical challenges resulting from the scale expansion of RSBD applications, i.e. raster storage, metadata management, data homogeneity, and computing paradigms. Second, we introduce state-of-the-art cloud-based data management technologies for RSBD storage. The unit for manipulating remote sensing data has evolved due to the scale expansion and use of novel technologies, which we name the RSBD data model. Four data models are suggested, i.e. scenes, ARD, data cubes, and composite layers. Third, we summarize recent research on the application of various cloud-based parallel computing technologies to RSBD computing implementations. Finally, we categorize the architectures of mainstream RSBD platforms. This research provides a comprehensive review of the fundamental issues of RSBD for computing experts and remote sensing researchers.

Introduction

The accumulation of historical archives and the advancement of sensors in recent years has led to an explosion of remote sensing datasets (Toth and Jóźków 2016;Zhu et al. 2019), which are often regarded as remote sensing big data (RSBD) (Ma et al. 2015) or big remotely sensed data (Casu et al. 2017).
With the launch of Landsat 9 on 27 September 2021, the Landsat series of satellites has been continuously observing Earth for nearly 50 years (Masek et al. 2020;Roy et al. 2014). The Sentinel satellites from the European Space Agency (ESA) had acquired 24.87 petabytes of remote sensing data by the end of 2020 (Drusch et al. 2012). Series of high-resolution remote sensing satellites such as SPOT (French 'Satellite pour l'Observation de la Terre'), Gaofen (Chinese high-resolution Earth imaging satellites), and IRS (Indian Remote Sensing Satellites) have been successively launched for various applications. Satellite data can be better leveraged and explored with the efforts of international organizations such as the Global Earth Observation System of Systems (Mhawish et al. 2021). RSBD has profoundly advanced remote sensing science, enabling a global perspective and a long-term historical view to re-conceptualize Earth. It not only expands the spatiotemporal scope of the study area but also stimulates a revolution in the remote sensing methodology. Over the past several decades, remote sensing research has gradually developed from qualitative remote sensing based on the statistical models of digital signal processing to quantitative remote sensing characterized by the consideration of physical models (Asrar et al. 1985;Liang 2003). Recently, remote sensing research has entered the data-driven era (Hey, Tansley, and Tolle 2009;Zhang et al. 2019), e.g. machine learning (Jordan and Mitchell 2015) and deep learning (Lecun, Bengio, and Hinton 2015). Most of all, with the progress of RSBD, an increasing number of researchers and engineers are working with RSBD, effectively contributing to research on global sustainable development, global climate, food security, natural disasters, agriculture, etc. (Allen et al. 2021;Gray et al. 2020;Moon, Kim, and Chan 2019;Neal et al. 2019).
Although RSBD has a promising future, its technical implementation remains difficult, and the identification of current technical challenges facing RSBD remains a broad issue. Yang et al. (2017) identified eleven main challenges for implementing RSBD, including data storage, transmission, analysis, architecture, and quality. In addition, Chi et al. (2016) found three common challenges, including proper data identification and big data computing and collaboration. These challenges have been mainly induced by the dramatic increase in data volume, which far exceeds the capacity of conventional computing technologies. For instance, Hansen et al. (2013) mapped global forest gains and losses from 2000 to 2012 at a 30 m spatial resolution, processing 20 terapixels of data. However, the manipulation of these massive datasets requires abundant human and material resources. As a result, only a few leading research institutions or companies have had access to RSBD, unfortunately leading to difficulty in leveraging RSBD and seriously restricting development. Cloud computing is a big data service delivered through the Internet (Yu et al. 2017;Armbrust et al. 2010). It originated from E-commerce and social networks (Sakr et al. 2011), and has been widely applied to RSBD over the past several years (Varghese and Buyya 2018). These applications include Google Earth Engine (GEE) (Tamiminia et al. 2020), Microsoft Planetary Computer ('Planetary Computer' 2022), and Earth on Amazon Web Services (AWS) ('Data and Information Access Services' 2021). Cloud computing is underpinned by big data technology and mainly consists of five service models, including Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), Data storage as a Service (DaaS), and Function as a Service (FaaS) (Dillon, Wu, and Chang 2010). These services differ from previous computing technologies (e.g. high-performance computing) and make RSBD more accessible to the public.
First, cloud computing typically relies on a set of commodity machines, and is usually priced as 'pay-per-use' and supports the elastic expansion of resources on-demand (Wang et al. 2010). As a result, the cost is much lower than sophisticated and expensive high-performance computers (Gupta et al. 2013). Second, cloud computing is delivered through the Internet, which helps the open sharing of remote sensing data and research, therefore promoting the progress of FAIR principles (findable, accessible, interoperable, and reusable) (Wilkinson et al. 2016). Third, a robust big data ecology has been formed around cloud computing after years of development, shielding users from the technicalities of massive computing. Thus, cloud computing helps RSBD researchers and engineers focus more on algorithms and analysis rather than being hindered by computer technology (Saxena et al. 2020). Fourth, cloud computing is suitable for data-intensive computing such as RSBD applications (Yang et al. 2019). The combination of cloud computing and remote sensing has proven to facilitate and promote RSBD. This has attracted a growing interest in remote sensing research as a potential solution for large-scale spatiotemporal analysis. Previous studies have evaluated the progress of cloud-based RSBD in terms of the acquisition, storage, computing, analysis, transmission, and visualization (Ma et al. 2015;Chi et al. 2016;Zhang, Zhou, and Luo 2021;Wang and Yan 2020) and in specific applications (Sarker et al. 2020;Balti et al. 2020;Qu et al. 2020). In this research, we performed a broad technical review of cloud-based RSBD storage and computing and summarized the key architectures of cloud-based RSBD platforms. Section 2 discusses four concerns posed by RSBD storage and computing, namely raster storage, metadata management, data homogeneity, and the computing paradigm. Sections 3 and 4 review the available cloud-based technologies and our current understanding of RSBD storage and computing.
Finally, Section 5 provides the conclusion of the review. Overall, four data models, two computing types, four processing models, and five types of RSBD platform architectures are identified and discussed in this research, which broadly assesses state-of-the-art RSBD technologies and helps readers explore advanced RSBD.

Raster storage

The volume of remote sensing data long ago stepped into the petabyte era and is moving towards the exabyte and zettabyte era. Conventionally, raster data is stored as arrays in multiple file formats, including the hierarchical data format (HDF) (MODIS), GeoTIFF (Landsat), and JP2000 (Sentinel-2). The size of a single dataset generally ranges from megabytes to terabytes. In addition, some cloud-based platforms store raster data as tiles in lightweight data formats (Yao et al. 2020) following grid discretization with Discrete Global Grid Systems (DGGS) for fast visualization and online computing. These include portable network graphics (PNG) and the Joint Photographic Experts Group (JPEG) image format. The unprecedented increase in remote sensing data poses severe challenges for RSBD raster data storage. First, the volume of RSBD far exceeds the capacity of standalone storage hardware, such as block storage or the redundant array of independent disks (RAID) (Gomes, Queiroz, and Ferreira 2020). Distributed storage systems (DFS, introduced in Section 3.1) can preserve petabytes of data (Lü et al. 2011). However, the storage cost is extremely high for both individuals and government departments. For instance, the United States Geological Survey (USGS) considered charging for access to widely used sources of remote sensing data (e.g. Landsat) in 2018 to recover costs from users (Popkin 2018). Second, the retrieval of array data generates costly operations due to the specificity of remote sensing data structures, leading to a decrease in I/O efficiency and an increase in access latency.
These types of operations differ from conventional big data I/O operations and are rarely optimized by existing big data technologies (Zhao et al. 2018). For example, in the case of time-series analysis, remote sensing data are often scattered in several individual files/objects, resulting in numerous random accesses and expensive data transformations (Extract-Transform-Load). Consequently, data storage schemes need to be further developed to adapt big data storage technologies to remote sensing data.

Metadata management

Remote sensing data comes with complex and vital metadata information. The full utilization of metadata is valuable for the reliability and quality of raster data. The management of remote sensing datasets must rely on metadata information, such as the band, resolution, capturing time, etc. In addition, the complete metadata describes essential information for tracing raster data quality, such as cloudiness and solar altitude, and ensuring the robustness and reliability of the subsequent analysis (Barsi et al. 2019). Several bottlenecks limit the use of complete metadata. First, there are substantial metadata entries for remote sensing datasets, and the quantity often exceeds the capacity of conventional metadata management systems. For example, the European Space Agency (ESA) Sentinel-2 product includes hundreds of metadata fields. Second, there are differences in metadata information formats. Generally, the metadata for remote sensing data is stored in the form of key-value pairs. However, some metadata, such as cloud masks and pixel quality assessments, are stored as vectors or rasters. Unfortunately, traditional metadata storage technologies do not support the management of unstructured metadata. Third, the metadata structures of the remote sensing data acquired from different sources are heterogeneous and lack a unified standard, leading to semantic ambiguities between datasets (Closa et al. 2019).
Therefore, standards and management systems, such as the National Aeronautics and Space Administration's (NASA) Unified Metadata Model ('NASA Unified Metadata Model' 2022), need to be formulated for metadata to ensure multi-source remote sensing data interoperability. These standards need to be developed and adapted for the novel RSBD applications emerging in the era of cloud computing (e.g. tile data retrieval, metadata queries). Therefore, it is crucial to implement the technology migration from big data to remote sensing metadata (Al-Yadumi et al. 2021).

Homogeneity

The homogeneity of input data is essential for data mining (Yu et al. 2017), while heterogeneity is a common feature of big data (Wu et al. 2014). In Section 2.2, we mentioned the heterogeneous characteristics of metadata. However, the heterogeneity of raster data is more complex and essential, especially for multisource remote sensing data (Zhan et al. 2018;Pastor-Guzman et al. 2020). Specifically, we assess the homogeneity of remote sensing raster data in terms of two aspects. Homogeneity refers to the identical physical characteristics and quality of remote sensing data, such as the spectral meaning (e.g. central wavelength), processing level, accuracy, resolution, and projection. These characteristics can affect the accuracy and robustness of any subsequent analysis. Typically, heterogeneous data can be fixed during pre-processing (Young et al. 2017). However, pre-processing large amounts of data is difficult because some pre-processing steps still require manual intervention and cannot be executed in a fast and parallelized fashion (Rittger et al. 2021;Wei, Chang, and Bai 2020). Moreover, homogeneity also concerns the integrity and continuity of remote sensing data in both the temporal and spatial dimensions. Spatiotemporal continuity is necessary for remote sensing analysis to ensure greater accuracy over a more extensive scale (Kuo et al. 2018).
A remote sensing analysis with discontinuous data can lead to inconsistent results. Nevertheless, continuity is commonly inaccessible over a large region of interest due to the specificity of remote sensing data acquisition modes and plans (Figure 1). It is a challenge to improve the types of homogeneity mentioned above, which relates to both remote sensing science and big data processing. Specific data pre-processing theories and algorithms are needed for technical support. Recent research has made great efforts toward improving the homogeneity of remote sensing data, e.g. spatial-temporal data fusion (Zhu et al. 2018) and multi-source remote sensing data harmonization (Claverie et al. 2018). Additionally, corresponding computational technologies are expected to implement rapid homogenization for large remote sensing datasets (Gao et al. 2022).

Parallel computing

Parallel computing simultaneously implements computation by dividing the main computation into smaller processes (Almasi and Gottlieb 1989). Parallel remote sensing computing lies at the core of RSBD, and enables the full exploitation of big data's scalable computing capability. Previous reviews have described the processing in remote sensing computing as (Chi et al. 2016):

Y = F(X), (1)

where Y denotes the computing results, X is the input remote sensing dataset, and F is the mapping function that transfers the input datasets into the result. There is no need to consider implementing parallelized computing for a single-threaded application scenario. Therefore, Eq. (1) can be simplified as:

Y = f(X), (2)

where f refers to a single-threaded remote sensing algorithm. Eq. (2) is the most basic case. However, in a big data scenario, the scale of X may be huge and beyond the capacity of a standalone computing node. In this case, the computation needs to be simultaneously processed with more computational resources to reduce the overhead time cost. However, the parallelization execution strategies are largely different among algorithms.
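As a toy illustration of how a computation over X can be scaled out when the algorithm happens to decompose over partitions, the following sketch splits X into tiles, maps a per-tile function f over them concurrently, and merges the partial results. This is a hedged sketch: f, F, and the list-based "tiles" are illustrative placeholders, and a real deployment would use a cluster backend rather than local threads.

```python
from concurrent.futures import ThreadPoolExecutor

def f(tile):
    # placeholder per-tile algorithm (e.g. a simple band-math operation)
    return [v * 2 for v in tile]

def F(partial_results):
    # placeholder combine step: stitch tile outputs back into one result Y
    out = []
    for part in partial_results:
        out.extend(part)
    return out

def parallel_apply(X, n_parts=4):
    """Split X into partitions, apply f to each concurrently, combine with F."""
    size = max(1, (len(X) + n_parts - 1) // n_parts)
    parts = [X[i:i + size] for i in range(0, len(X), size)]
    with ThreadPoolExecutor(max_workers=n_parts) as pool:
        return F(list(pool.map(f, parts)))
```

Because f sees only its own partition, the same pattern generalizes from threads to processes or to distributed workers without changing f or F.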
Therefore, it is not easy to find a generalized computational framework that can be adapted to all remote sensing analysis algorithms, such as Eq. (1). RSBD analysis can be grouped into two types, data-separable computing and data-inseparable computing, to further decompose the problem and investigate different solutions. Data-inseparable computing cannot be parallelized by partitioning the data. This type of computing requires a large amount of global information from the whole dataset, such as unsupervised classification, principal component analysis (PCA), etc. A parallel processing method that simply divides the dataset will produce side effects due to the tile edges (Lassalle et al. 2015). Parallel computing methods for such analysis are usually individualized. In other words, it is difficult to generalize a parallel computing paradigm for all data-inseparable remote sensing algorithms. Google Earth Engine's 'spatial aggregations data distribution model' pre-implements several data-inseparable computing algorithms. Each algorithm is implemented individually and transparently by Earth Engine using the MapReduce computing paradigm (introduced in Section 4) (Gorelick et al. 2017). On the contrary, data-separable computation can be considered as a series of independent subtasks by partitioning the datasets. In other words, not much external information is needed while processing each partition, and:

Y = F(f(x0), f(x1), . . . , f(xn)), (3)

where f is the processing algorithm for a sub-partition, F is the algorithm for integrating the partitions into a complete output Y, x is a sub-partition of X, and X = {x0, x1, . . . , xn}. Data-separable computing is supported by cloud computing and is known as embarrassingly parallel or pleasingly parallel computing in computer science (Barcelona-Pons et al. 2019). This type of computing has been widely applied in RSBD using quantitative remote sensing, artificial intelligence, etc. For example, Pekel et al.
integrated the computing power of 10,000 computers to map 30 m global water bodies for almost 30 years based on an expert system classifier (Pekel et al. 2016). Ni et al. extracted 10 m rice-growing areas in northeast China using machine learning (Ni et al. 2021). Xie et al. produced 30 m annual irrigation maps based on MODIS and Landsat data for the United States from 1997 to 2017 using a random forest classifier (Xie and Lark 2021). All of the above studies have relied on the parallelization of data-separable computing. Additionally, the studies used Earth Engine's 'image tiling data distribution model' for spatial partitioning and 'streaming collections' for temporal partitioning (Gorelick et al. 2017). Overall, data-inseparable computing requires custom implementation, while data-separable computing can be implemented based on generic processing paradigms. However, despite the similarities in parallel computing paradigms, there are differences in the data partitioning strategies, analysis algorithms, and combination algorithms for each specific analysis. In addition, there are strong relationships between the way the data is partitioned and the parallel algorithms. Therefore, a unified framework is urgently needed to regulate and constrain remote sensing algorithms and distributed execution paradigms.

Challenges in the DIKW hierarchy

This section introduces the four primary challenges facing RSBD, which are expected to be resolved with cloud-based approaches. However, there are significant differences between RSBD and traditional data processing. RSBD involves both remote sensing technology and computer science, which fully illustrates its multidisciplinary nature. Traditionally, remote sensing science has only been applied to limited scales and needs to be re-examined to support large-scale applications.
In addition, computer science and technology have been traditionally oriented to conventional business, and need to be tailored to remote sensing to support the management and mining of RSBD. This cross-fertilization perspective is closely related to the four challenges. Rowley (2007) defined the Data-Information-Knowledge-Wisdom (DIKW) hierarchy. This concept can help explain the relationships between the four identified challenges and RSBD (Figure 2). DIKW Data corresponds to raw remote sensing data, which is associated with the challenges of data storage and metadata management. DIKW Information corresponds to data that conforms to homogeneity, such as the data cube or analysis ready data (ARD) ('CEOS' 2022). Homogeneity should be addressed when transforming DIKW Data into DIKW Information. The parallel computing problem exists in both the process of transforming DIKW Data into DIKW Information and DIKW Information into DIKW Knowledge. Finally, DIKW Knowledge is transformed into DIKW Wisdom using human intelligence, which is used to assist real-world decisions and practices. The process of transforming and formalizing wisdom from knowledge remains challenging for RSBD.

Cloud-based big data storage

Currently, there are five leading cloud computing and big data technologies applicable to RSBD storage, including the Object Storage System (OSS), Distributed File System (DFS), Relational Database Management System (RDBMS), NoSQL, and array database management systems (array DBMSs). The Object Storage System (OSS) manages data in the form of objects, each of which is identified with a globally unique identifier. In particular, the RESTful API allows data access via HTTP, which means that the object can be easily accessed from anywhere on the network. In addition, OSS can manage additional metadata for data descriptions. The Distributed File System (DFS) stores data across a cluster of machines behind a file system interface (Shvachko et al. 2010). DFS supports more comprehensive interfaces and features in comparison to OSS (Weil et al. 2006).
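To make the object-storage model described above concrete, here is a toy in-memory sketch of an OSS bucket: each object has a globally unique key, carries user-defined metadata, and supports range reads in the spirit of HTTP Range requests. The class and method names are illustrative only, not any vendor's API.

```python
class ObjectStore:
    """Toy model of an OSS bucket: globally keyed immutable objects
    plus user-defined metadata; no in-place file modification."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data, **metadata):
        # objects are written whole; metadata travels with the object
        self._objects[key] = (bytes(data), dict(metadata))

    def get(self, key, byte_range=None):
        data, _ = self._objects[key]
        if byte_range is not None:  # (start, end), like an HTTP Range header
            start, end = byte_range
            return data[start:end]
        return data

    def head(self, key):
        # metadata-only request, analogous to HTTP HEAD
        return self._objects[key][1]
```

The range read is what makes object stores viable for large rasters: a client can fetch a small header or a single tile without pulling the whole object.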
However, DFS can suffer from the bottleneck problems of primary nodes, which restricts the upper limit of scaling to some extent. For example, the metadata of the Hadoop Distributed File System is stored in the primary node's memory, which restricts the number of files that are stored (Shvachko et al. 2010). In addition, the files stored in DFS can only be accessed through the mounted hosts, which is not as flexible as OSS.

Figure 2. Relationship between the DIKW pyramid and the four major concerns.

Relational database management systems (RDBMS) are a widely used database model (Codd 1970). RDBMS is oriented toward transactional operations and focuses on the properties of atomicity, consistency, isolation, and durability (ACID). The reliability and stability of RDBMS have been greatly improved with the development of RSBD. Some RDBMS, such as PostgreSQL, can manage spatial data and have been widely used for remote sensing metadata management. However, there are apparent bottlenecks in the standalone RDBMS load capacity. A cloud-based distributed RDBMS, NewSQL, was proposed to enhance the scalability of traditional RDBMS for massive structured data (Pavlo and Aslett 2016). Google Spanner is an example of this technology (Corbett et al. 2013). The onset of Web 2.0 drove an increasing need to manage a large amount of unstructured data, which gave rise to NoSQL, e.g. MongoDB, HBase (Mehul Nalin 2011), and Google Big Table (Chang et al. 2006). Unlike RDBMS, NoSQL does not support transactional operations and ACID properties. On the contrary, NoSQL emphasizes the principles of consistency, availability, and partition tolerance (CAP) (Gray and Reuter 1992;Cattell 2010), thus improving concurrency, efficiency, and horizontal scalability. The various NoSQL technologies have distinct technical characteristics that can be applied in different application scenarios. These are generally classified into four types, including wide-column, key-value, document, and search engine.
More detailed reviews of NoSQL can be found in the respective literature (Davoudian, Chen, and Liu 2018;Guo and Onstein 2020). Array database management systems (array DBMSs) are a type of scientific database dedicated to the storage and management of array-like scientific data (Zalipynis 2021). Array DBMSs are often grouped as NoSQL. However, we decided to introduce them individually due to their natural affinity for remote sensing and geospatial data (Zalipynis 2020). Array DBMSs support SQL-like queries and operations on arrays (e.g. resampling and aggregations). Such advanced manipulations are essential for remote sensing data management because they simplify data retrieval and pre-processing (Zalipynis 2021;Appel et al. 2018). In addition, array DBMSs generally optimize I/O through the underlying technology, which is beneficial for online RSBD computing. For example, TileDB optimizes the performance of concurrent I/O and sparse arrays by turning multiple random-writes into a single sequential write (Papadopoulos et al. 2016). Furthermore, some advanced array DBMSs support horizontal scaling. For example, SciDB (Brown 2010) and RasDaMan (Baumann et al. 2018) support distributed deployment, and TileDB supports share-nothing cloud computing architecture, storing files as AWS (Amazon Web Services) S3 objects in the cloud. However, importing massive scientific data into an array DBMS may be time-consuming. In addition, as far as we know, no cloud services directly provide array DBMS services, and users can only build an array DBMS through IaaS. Users can build most of the storage technologies introduced above using IaaS. In addition, cloud services also provide out-of-the-box storage services (SaaS). SaaS helps users focus more on business by avoiding database maintenance. Table 1 identifies the open-source technologies corresponding to different storage technologies and mainstream cloud computing SaaS products. 
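The array-DBMS style of declarative manipulation described above (windowed selection, resampling, and aggregation over arrays) can be emulated on a plain 2-D array. This sketch only mimics the idea; it is not the actual SQL-like syntax of SciDB, RasDaMan, or TileDB.

```python
def subarray(band, r0, r1, c0, c1):
    """Select a spatial window, like an array-DBMS range query."""
    return [row[c0:c1] for row in band[r0:r1]]

def resample(band, step):
    """Coarsen resolution by striding rows and columns,
    mimicking a resample operator."""
    return [row[::step] for row in band[::step]]

def aggregate_mean(band):
    """Collapse a window to a single statistic, like avg() over cells."""
    vals = [v for row in band for v in row]
    return sum(vals) / len(vals)
```

A query such as "the mean of the upper-left 2x2 window" then composes as `aggregate_mean(subarray(band, 0, 2, 0, 2))`; an array DBMS would additionally push such operators down to its tiled, compressed storage.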
Cloud-based data storage for RSBD
Massive remote sensing data storage consists of raster data storage and metadata storage.
Raster storage
Currently, remote sensing raster data are generally stored in cloud-optimized data formats in an OSS or DFS. In addition, they can be stored in NoSQL databases in the form of tiles or in an array DBMS in the form of arrays. Cloud-optimized data formats are designed to improve I/O performance in cloud storage. As mentioned earlier, OSS does not support file opening and writing operations, which is inconvenient for computing: even when only a portion of the data is accessed, the complete remote sensing dataset must be downloaded, resulting in high redundant overhead. Cloud-optimized storage formats for remote sensing data, such as Zarr and Cloud Optimized GeoTIFF (COG), have emerged and improved the performance of RSBD data storage. Among them, COG, a GeoTIFF format optimized for cloud computing and storage, has been widely adopted ('Cloud Optimized GeoTIFF' 2022). When accessing a COG within OSS, a 16-kilobyte header is parsed first; subsequently, a portion of the remote sensing data can be read on demand without downloading the entire dataset. Furthermore, a COG file retains the original data resolution and creates internal overview copies at lower resolutions, significantly improving the data retrieval efficiency of web-based applications. As a result, COG can improve retrieval efficiency in both DFS and OSS. COG is currently attracting increasing attention; for example, it replaced GeoTIFF in 2021 as the standard data format for Landsat Collection 2. OSS can store remote sensing tile and raster data, and cloud-optimized data formats, especially COG, are becoming the major storage formats for OSS. Amazon, Microsoft, and Google currently use OSS to store large amounts of remote sensing data (Table 2).
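The reason COG enables partial reads from object storage can be sketched as follows: once the small header has been parsed into a table of tile byte offsets and lengths, a client fetches only the needed tiles with HTTP Range requests instead of the whole file. The offsets and tile index below are made up for illustration; they are not real COG header values.

```python
def range_header(offset, length):
    """Build the HTTP Range header for one tile (inclusive byte range)."""
    return {"Range": "bytes=%d-%d" % (offset, offset + length - 1)}

# Hypothetical (offset, length) table parsed from a COG header:
# tile (row, col) -> position in the object.
tile_index = {
    (0, 0): (16384, 52000),
    (0, 1): (68384, 49500),
}

# To read only tile (0, 1), request just its byte range from OSS.
offset, length = tile_index[(0, 1)]
hdr = range_header(offset, length)
# → {'Range': 'bytes=68384-117883'}
```

In practice a client library (e.g. GDAL/rasterio) issues these range requests transparently; the sketch only shows the mechanism that avoids downloading the entire dataset.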
The data stored in an OSS can be easily shared through the Internet and leveraged for analysis and visualization, promoting the open sharing and use of remote sensing data. Users can access the data efficiently at little cost and without considering data management or server maintenance, greatly promoting the FAIR principles. DFS is a mature big data storage technology and the dominant storage system of RSBD platforms. Distributed file systems cannot share data as easily as OSS, but provide advantageous functions such as append writes and modifications. There is no apparent requirement for direct data sharing on a computing platform, but there is a clear need for functions such as append write and random read. For example, Digital Earth Australia (successor of the Australian Geoscience Data Cube) stores Landsat archives in the Lustre system (Braam 2019) within the Australian National Computational Infrastructure (Lewis et al. 2017). The JRC Earth Observation Data and Processing Platform (JEODPP) stores remote sensing datasets in EOS, a DFS designed for the European Organization for Nuclear Research (Peters, Sindrilaru, and Adde 2015; Soille et al. 2018), and Earth Engine stores a large amount of remote sensing data in Google Colossus. NoSQL supports the storage of large amounts of unstructured data and can therefore hold RSBD raster data. Wide-column NoSQL is suitable for storing a vast amount of unstructured data such as tiles. GeoTrellis (Kini and Emanuele 2014), a Spark-based RSBD computation engine, stores remote sensing tiles and vector geospatial data in wide-column NoSQL. However, wide-column NoSQL lacks support for data indexing, especially the spatial indexing needed for remote sensing data. Therefore, users must carefully design the row keys to enable spatiotemporal queries. In-memory key-value NoSQL databases store data as key-value pairs in distributed memory and can cache the intermediate data for online remote sensing computation.
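The row-key design constraint just mentioned can be sketched as follows. Wide-column stores such as HBase sort rows lexicographically by key, so spatiotemporal queries must be encoded into the key itself; the zero-padded tile-id-then-date layout below is a common illustrative convention, not the scheme of any particular system.

```python
import bisect

def row_key(x, y, date):
    """Zero-padded tile id prefix + ISO date, so one tile's time series
    forms a contiguous, lexicographically sorted key range."""
    return "x%04d_y%04d_%s" % (x, y, date)

# A sorted key space, as a wide-column store would maintain it.
keys = sorted([
    row_key(8, 12, "2021-03-15"),
    row_key(8, 12, "2021-06-01"),
    row_key(8, 13, "2021-01-01"),
])

# "Range scan": all 2021 observations for tile (8, 12), found purely by
# key ordering -- no secondary spatial index needed.
lo = bisect.bisect_left(keys, row_key(8, 12, "2021-01-01"))
hi = bisect.bisect_right(keys, row_key(8, 12, "2021-12-31"))
hits = keys[lo:hi]
# → ['x0008_y0012_2021-03-15', 'x0008_y0012_2021-06-01']
```

The zero padding matters: without it, "x10" would sort before "x2" and break the range scan.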
For example, Earth Engine stores the cached data from its services in an in-memory database to reduce secondary access latency (Gorelick et al. 2017). However, the volume of remote sensing data can far exceed memory storage capacity. Thus, in-memory key-value NoSQL is not suitable for persistent remote sensing data storage. Document NoSQL provides spatial indexing capabilities and can support the storage of extensive individual records with more comprehensive capabilities. Wang et al. (2019) and Cheng et al. (2020) implemented storage systems for RSBD based on MongoDB. They stored the data in MongoDB after further slicing and achieved the management of remote sensing data based on MongoDB's rich query capabilities. Overall, NoSQL supports more advanced functions than OSS or DFS, such as spatiotemporal data management. It has obvious advantages in RSBD application scenarios, but the cost of NoSQL is much greater than that of DFS or OSS. As far as we know, petabyte-level NoSQL-based RSBD storage still requires further research and exploration. Unlike other storage systems that store remote sensing data as files, array DBMSs store and manipulate remote sensing data as arrays. Array DBMSs optimize efficiency based on the underlying storage technology. More importantly, array DBMSs support high-level array manipulation for managing remote sensing data, including data storage, metadata management, indexing, etc. In other words, an array DBMS can serve as an RSBD data management system with a few additional components. For example, EarthServer (Baumann et al. 2016) implements the storage of a large amount of remote sensing data using RasDaMan (Baumann et al. 1998). Furthermore, some array DBMSs can process remote sensing data (e.g. reprojection, resampling) and have been applied in tandem with machine learning (Ordonez, Zhang, and Lennart Johnsson 2019). However, array DBMS-based RSBD data management is still in its infancy.
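The caching role played by in-memory key-value stores, as in the Earth Engine example above, follows the cache-aside pattern: look up a result by key and compute it only on a miss. A minimal local sketch, with a plain dict standing in for a distributed store such as Redis and a hypothetical `expensive_ndvi` job standing in for a real computation:

```python
cache = {}  # stand-in for a distributed in-memory key-value store

def get_or_compute(key, compute):
    if key not in cache:          # cache miss: run the expensive job once
        cache[key] = compute()
    return cache[key]             # cache hit: served from memory

calls = []
def expensive_ndvi():
    calls.append(1)               # count how often we actually compute
    return [[0.42, 0.55], [0.61, 0.38]]

key = ("ndvi", "tile_8_12", "2021-06")
a = get_or_compute(key, expensive_ndvi)
b = get_or_compute(key, expensive_ndvi)   # second access hits the cache
```

After the second call, `expensive_ndvi` has still run only once, which is exactly the secondary-access latency reduction described in the text.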
One of the major challenges lies in incorporating data into an array DBMS, which can be costly. Specifically, all raw datasets must be pre-processed into the unified format recognized by each type of array DBMS, which is a time-consuming process (Lewis et al. 2017).
Metadata storage & management
RSBD metadata storage and management are mainly based on NoSQL, RDBMS, and NewSQL. RDBMS is suitable for storing and managing remote sensing metadata and is the mainstream technical approach for RSBD management systems. For example, Zhou et al. (2021) stored remote sensing data in distributed MySQL and PostgreSQL systems. The Open Data Cube stores metadata in PostgreSQL (Killough 2018). In our case, we managed the metadata of petabytes of remote sensing datasets (ten million metadata entries) with a standalone PostgreSQL instance. However, there is an upper limit to the storage capacity of RDBMS. Therefore, RDBMS is only suitable for the rapid construction of structured remote sensing metadata storage for medium-sized datasets. Cloud-native NewSQL overcomes these scalability problems and is ideal for storing large metadata volumes. For example, Earth Engine uses Spanner as one of its data management tools (Gorelick et al. 2017). NewSQL open-source technology and cloud computing services are still maturing. NoSQL can store unstructured data such as heterogeneous remote sensing metadata (Guo and Onstein 2020). In RSBD storage systems, data and metadata are rarely modified after they are entered into the database. Therefore, compared to RDBMS, NoSQL's lack of support for ACID is acceptable for RSBD management systems. The search engine is a type of NoSQL that supports full-text search, such as Solr and Elasticsearch. Search engine NoSQL builds inverted indexes in memory to achieve high-performance, robust full-text indexing. Fan et al. (2017) stored remote sensing metadata in SolrCloud and implemented a full-text search.
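RDBMS-style metadata management of the kind described above can be sketched with SQLite standing in for the PostgreSQL deployments mentioned in the text; the schema and column names are illustrative, and a production catalog would use real geometry types and spatial indexes (e.g. PostGIS).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE scene (
    scene_id TEXT PRIMARY KEY,
    acquired TEXT,
    min_lon REAL, min_lat REAL, max_lon REAL, max_lat REAL)""")
con.executemany("INSERT INTO scene VALUES (?,?,?,?,?,?)", [
    ("S1", "2021-05-02", 115.0, 38.0, 117.0, 40.0),
    ("S2", "2021-07-19", 100.0, 20.0, 102.0, 22.0),
])

# The staple RSBD catalog query: scenes whose bounding box intersects a
# query box within a time window.
rows = con.execute("""SELECT scene_id FROM scene
    WHERE max_lon >= ? AND min_lon <= ?
      AND max_lat >= ? AND min_lat <= ?
      AND acquired BETWEEN ? AND ?""",
    (116.0, 116.5, 39.0, 39.5, "2021-01-01", "2021-12-31")).fetchall()
# → [('S1',)]
```

The fixed schema is what makes RDBMS fast and reliable here, and also what limits it for the heterogeneous metadata that the text assigns to NoSQL.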
Their process supports advanced functions such as fuzzy queries and adapts well to the complex structures of remote sensing metadata. However, it is costly to implement such storage systems using search engine NoSQL. Wide-column and document NoSQL are also used for metadata storage. For example, Earth Engine adopted the Bigtable storage system (Gorelick et al. 2017). Wang et al. (2019) and Cheng et al. (2020) used MongoDB to store both raster data and metadata to achieve integrated data/metadata storage.
3.3. Data model: scene, ARD, data cube, composite layer
The ultimate purpose of data storage is to prepare data for analysis, and thus homogeneity must be considered. The homogeneity of raster data is not prominent in small-scale remote sensing analysis, where computing is mainly implemented within a scene by a standalone processing node. The expansion of spatiotemporal scales creates an increasing requirement for advanced remote sensing data organization, which we term the data model: the combination of data organization, data structure, and data production method. For large-scale analysis in cloud computing, it is necessary to adopt a remote sensing data model suited to the specific application scenario (algorithms, parallel computing strategies, etc.). In the past few years, several data organization schemes have been developed for RSBD analysis, including scenes, ARD, data cubes, and composite layers (Figure 3). Scenes are the most basic organization for remote sensing data and have been widely applied during the past several decades. Conventionally, satellites collect remote sensing data in strips and transmit them to the ground segment. The ground segment processes the remote sensing data through corrections and evaluations and partitions them according to a regular grid (e.g. the Military Grid Reference System adopted by Sentinel).
The pre-processed remote sensing data are the most common form of remote sensing data, corresponding to the remote sensing images fetched from data providers (e.g. USGS, ESA Copernicus). The use of cloud computing greatly diminishes the cost of acquiring scene data. In addition, COG technology also enhances the efficiency of remote sensing data access with a higher degree of freedom. Analysis Ready Data (ARD) (Potapov et al. 2020) was initiated by the Committee on Earth Observation Satellites (CEOS) ('CEOS' 2022). The original intent of ARD was to reduce the threshold for users to leverage the data by reorganizing discrete datasets into regular blocks with a fixed size, resolution, and projection, thus minimizing data processing and correction (Dwyer et al. 2018).
Figure 3. Relationships between scenes, ARD, data cubes, and composite layers.
ARD must be radiometrically and geometrically corrected, and evaluated for quality at the pixel level using a uniform standard to achieve homogeneous physical characteristics (Frantz 2019). In addition, ARD is commonly reorganized into globally unified Discrete Global Grid Systems in the form of tiles. For example, the USGS produces Landsat ARD using the latest Collection 2 archive standard. Zhong et al. (2021) developed an ARD data product based on GaoFen satellite data. Most previous RSBD research stacked multiple datasets into a 'composite' before analysis according to specific rules, which we name the composite layer (Thorp and Drajat 2021). The composite layer refers to layer-like datasets produced by pre-processing and combining all available data over a certain spatiotemporal range, such as remote sensing data products (Gong et al. 2020). We borrowed the term 'layer' from geographic information systems (GIS) to emphasize that there is one and only one value for each pixel in the region of interest. The composite layer is characterized by its ideal homogeneity and is thus the best input data model for non-time-series analysis.
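The 'one value per pixel' property of the composite layer can be sketched as a toy compositing routine: stack all available observations per pixel over a period, drop masked (e.g. cloudy) values, and keep a single value per pixel. The median rule and the 2x2 arrays below are illustrative choices, not a prescribed method; `None` marks a masked observation.

```python
import statistics

def composite(stack):
    """Collapse a list of 2-D observations into one layer: per-pixel
    median of the unmasked values, None where no clear value exists."""
    rows, cols = len(stack[0]), len(stack[0][0])
    out = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            values = [obs[i][j] for obs in stack if obs[i][j] is not None]
            if values:                      # at least one clear observation
                out[i][j] = statistics.median(values)
    return out

stack = [
    [[0.25, None], [0.5, 0.75]],   # observation 1 (one cloudy pixel)
    [[0.5, 0.5],  [None, 1.0]],    # observation 2
    [[0.25, 1.0], [0.5, 0.75]],    # observation 3
]
layer = composite(stack)
# → [[0.25, 0.75], [0.5, 0.75]]
```

Note how the composite discards the per-observation time dimension, which is exactly the information a data cube keeps.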
Therefore, the composite layer is the remote sensing data model closest to the 'Information' level of DIKW. For small-scale applications, scenes or ARD can be approximated as composite layers, especially when the scale of the region of interest is comparable to that of a scene or an ARD tile. However, at large scales, data coverage is not guaranteed for medium- and high-resolution remote sensing data, owing to the data acquisition model or long revisit periods. Therefore, the differences between scenes and layers are more prominent in large-scale applications. Recently, the data cube has gradually become the focus of RSBD research, especially cloud-based RSBD applications (Lewis et al. 2016). The data cube reorganizes and stacks ARD data along a time dimension and dissects them according to a regular grid. A data cube can be a sparse collection of time-series data or a dense mosaic of the best available values over time. In contrast to the composite layer, which discards multiple available values, the data cube aggregates all available ARD datasets over time to approximate the layer as closely as possible. It is the best input for large-scale remote sensing analysis (especially time-series analysis). However, the data cube tends to be sparse in medium- and high-resolution remote sensing applications. Xu et al. (2022) further extended the concept of the data cube to improve homogeneity and proposed Computation Ready Data (CRD). CRD considers the continuity of remote sensing data and diverse computational needs. This approach helps promote the use of interpolation and spatiotemporal fusion algorithms to fill missing data in the data cube. CRD further reorganizes data according to computational needs and bridges the gap between the data model and computation. As shown in Figure 3, the relationships between the data models can be summarized as follows. 1. The data volume and information quantity decrease from left to right.
The scene datasets retain the most information and the largest volume. The generation of ARD filters out poor-quality data and reduces the data volume. The data cube screens out data outside a specific spatial and temporal range according to certain conditions, which further decreases the available data. Ultimately, the composite layer generally reduces the dimensionality of the data cube and therefore possesses the smallest data volume. The smaller the data volume, the more convenient it is for transmission and sharing. Consequently, the composite layer is the ideal data model for data propagation. 2. There is a significant difference in the complexity of data production between data models. The production of ARD from raw scene data involves time-consuming and computation-intensive processing, such as rigorous radiometric and geometric correction. However, the creation of a data cube or composite layer is mostly a reorganization of datasets with relatively low computational complexity. The cost of such data-intensive processing has been significantly reduced with the support of cloud computing technologies (e.g. COG). Hence, it is not efficient to produce ARD on demand, whereas the online construction of a data cube can be implemented quickly with cloud computing (Giuliani et al. 2020). 3. Any processing will cause a loss of accuracy, and the accuracy loss differs between data models. The loss of accuracy caused by the correction process in ARD production is acceptable and unavoidable in most RSBD applications (Qiu et al. 2018). However, the production of data cubes or composite layers will affect accuracy in comparison with the original datasets. In most cases, data cubes transform the original datasets with different projections and resampling, and composite layers filter values according to customized rules.
Therefore, storing datasets in the form of data cubes or composite layers can lead to irreversible losses in accuracy compared with ARD. 4. Homogeneity gradually increases from left to right. ARD datasets possess homogeneous physical characteristics, and data cubes and composite layers approach continuity. Additionally, layers and data cubes are closer to the array data model in computer science. Therefore, both are suitable for implementing remote sensing analysis. In this section, the current cloud-based storage technologies were introduced and cloud-based remote sensing data storage technologies were reviewed. Finally, we provide insights into the data models for RSBD applications. Table 3 summarizes the remote sensing data storage schemes and data models for fifteen systems and studies produced from 2016 to 2021. The conclusions are drawn as follows. 1. NoSQL, DFS, array DBMSs, and OSS can store raster data, while NoSQL and RDBMS can manage remote sensing metadata. Figure 4 further summarizes the characteristics of the four NoSQL database types, aside from array DBMSs. 2. For cloud-based RSBD raster storage, OSS is the mainstream solution for sharing open data, while DFS is the primary solution for RSBD platforms. In the context of cloud computing, public cloud-based OSS services can significantly lower the cost of RSBD management. Therefore, the RSBD systems of recent years have been more inclined to use OSS. NoSQL and array DBMSs have great potential, but there are still some limitations, such as a high cost of use. 3. Both NoSQL and RDBMS can be used to implement cloud-based RSBD metadata storage, and the choice of technology depends on the specific application scenario. On the one hand, NoSQL can manage the complete archive of remote sensing data, which is difficult to achieve through RDBMS (Cheng et al. 2020). On the other hand, the functionality and performance of RDBMS have been widely demonstrated.
For example, RDBMS is more cost-effective for small- and medium-volume RSBD management, whereas NoSQL is more applicable for larger RSBD systems. 4. Data models are becoming critical for RSBD applications. The cost of producing ARD is generally high and results in an acceptable loss of accuracy (D'Andrimont et al. 2021). Therefore, it is appropriate to store remote sensing data as ARD in the cloud for common applications. In contrast, data cubes and composite layers have less production overhead, suffer from more significant accuracy loss, and possess a much smaller data volume than scenes (Sudmanns et al. 2020). Therefore, it is better to produce data cubes or composite layers on demand in the cloud before propagation. Most importantly, the composite layer and data cube models are more suitable for implementing remote sensing computing due to their homogeneity.
Cloud-based big data processing
Technologies for cloud computing and big data are complex, specialized issues that are beyond this work's scope. This section introduces three active and promising processing technologies for RSBD computing: simple batch, MapReduce, and array-based processing (Figure 5). In addition, we introduce a lightweight virtualization technology known as containerization.
Simple batch processing
Simple batch processing refers to a simple processing model in which each task is independent of the others. This method has been used for more than twenty years; examples include the Portable Batch System (PBS), Azure Batch, and AWS Batch (Casado and Younas 2015; Henderson 1995). Typically, users pre-define the execution program and execute a series of identical computing tasks in a batch. Each input set corresponds to a processing instance and outputs a result. Independent computing tasks do not affect one another and can be executed asynchronously. Simple batch processing has been widely used in offline batch processing.
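A minimal local sketch of the simple batch model, with a thread pool standing in for a batch scheduler such as PBS or AWS Batch; `process_scene` is a hypothetical per-scene pipeline, and the key property is that tasks share nothing and can run in any order.

```python
from concurrent.futures import ThreadPoolExecutor

def process_scene(scene_id):
    """Stand-in for a real per-scene pipeline (correction, masking, ...)."""
    return scene_id + ":done"

scenes = ["S%03d" % i for i in range(1, 6)]

# One independent task per input scene, no inter-task communication.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_scene, scenes))
# → ['S001:done', 'S002:done', 'S003:done', 'S004:done', 'S005:done']
```

`pool.map` returns results in input order even though tasks complete asynchronously, which is why this model is so easy to reason about and to scale out.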
However, simple batch processing cannot accomplish complex analysis because it does not support inter-task communication.
MapReduce processing
MapReduce (Wang et al. 2010) is a popular batch processing model for cloud and distributed processing that was first announced by Google in 2004 (Dean and Ghemawat 2008). MapReduce adopts the idea of functional programming and uses two core operators, Map and Reduce, to implement per-record and aggregate operations, respectively. In 2010, Zaharia et al. developed Spark, a memory-based distributed processing engine built on the MapReduce model (Zaharia et al. 2010). Spark supports richer operators and uses programming languages to support flexible batch processing. In addition, Spark optimizes scheduling by building directed acyclic graphs (DAGs) for workflows and employing a 'lazy' mechanism: some of Spark's operators only record the process and defer the actual computation to optimize the execution route. Spark preserves intermediate data with a memory-based data model called Resilient Distributed Datasets (RDDs). RDDs can improve computation efficiency by nearly a hundred times compared with MapReduce (Zaharia et al. 2010), especially for tasks with multiple iterations such as machine learning and deep learning (Lunga et al. 2020). However, the large volume of data-intensive computations often exceeds memory storage capacity, leading to decreased efficiency. Cloud services help users quickly implement MapReduce jobs. Users can build their own MapReduce clusters on virtual machines or directly use cloud-hosted Hadoop or Spark services, such as Amazon Elastic MapReduce (Amazon EMR) and Google Dataproc.
Array-based processing
Array-based processing is a computational technology for scientific array data (Lu, Appel, and Pebesma 2018). The well-known NumPy and OpenCV can implement a variety of complex processes for array data.
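The two MapReduce operators described above can be sketched with Python built-ins; this toy stands in for a cluster engine such as Hadoop or Spark, and the (red, nir) reflectance samples are made up. Map applies a per-record function (here, per-pixel NDVI), and Reduce aggregates the mapped values (here, their sum, from which we take the mean).

```python
from functools import reduce

# Hypothetical (red, nir) reflectance samples.
pixels = [(0.125, 0.375), (0.25, 0.25)]

# Map: per-record transformation, NDVI = (nir - red) / (nir + red).
ndvi = list(map(lambda p: (p[1] - p[0]) / (p[1] + p[0]), pixels))

# Reduce: aggregate the mapped values into one result.
total = reduce(lambda a, b: a + b, ndvi)
mean_ndvi = total / len(ndvi)
# ndvi → [0.5, 0.0], mean_ndvi → 0.25
```

In a real engine both steps are sharded across a cluster; the per-record independence of Map is what makes that sharding trivial, while Reduce is where inter-task communication (the shuffle) happens.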
However, such software is limited by the resources of a single machine and cannot be efficiently applied to very large arrays. Recent technologies, such as Dask (Rocklin 2015), the computational engine of Earth Engine, and array DBMSs, implement array manipulation with parallel batch processing for large-scale arrays. Earth Engine adopts FlumeJava (Chambers et al. 2010), a MapReduce processing engine, to manipulate remote sensing data as an array-like Collection or Image. However, from the usage point of view, such array-based processing differs from MapReduce processing. Users of array-based processing do not need to worry about the underlying parallel processing implementation, such as task scheduling, but directly use arrays as the processing object. At present, array-based processing is still developing and has some known flaws. Array DBMSs lack flexibility for implementing user-defined functions (Mehta et al. 2017). Furthermore, performance optimization is still not as robust as in traditional batch processing technologies (Mehta et al. 2017). However, most scientific data, including remote sensing data, mainly consist of arrays. The direct manipulation of arrays can shield users from many underlying problems. Therefore, we believe that array-based processing will play an increasingly important role in scientific big data processing in the future. All three processing technologies introduced above can implement large-scale remote sensing analysis computations. In Table 4 we briefly summarize the mainstream open-source technologies and the related cloud-based solutions. However, batch processing technology requires rewriting remote sensing analysis algorithms, which hinders researchers from performing scientific analysis based on RSBD to some extent (Mehta et al. 2017; Camara et al. 2016).
Containerization
Containerization is one of the core concepts of cloud-native computing (Li 2019; Pelle et al. 2019).
It is widely used in cloud-based processing such as FaaS and serverless applications. Containerization is a virtualization technology that packages algorithms in lightweight containers and provides the runtime environments needed by the algorithms, e.g. Docker (Merkel 2014). This technology allows the stable execution of various remote sensing algorithms in different host environments, which improves the portability of remote sensing algorithms by decoupling the algorithms from the host machine (Xu et al. 2022). The technology is essential for cloud-based RSBD because it can port remote sensing algorithms from the local environment to the cloud. Containers can be leveraged jointly with the big data processing technologies mentioned above (e.g. batch processing). In addition, they can be managed by container orchestration platforms in the cloud. Kubernetes is one of the most famous open-source container orchestration platforms; it was developed by Google and contributed to the Cloud Native Computing Foundation in 2015 (Bernstein 2014). Borg (Verma et al. 2015), Google's internal counterpart of Kubernetes, is used for resource scheduling and load balancing within Earth Engine.
Cloud-based computing for RSBD
As introduced in Section 2.4, RSBD applications can be grouped into two types, data-separable computing and data-inseparable computing. We introduce cloud-based computing for RSBD from these two perspectives.
Data-separable computing
Data-separable computing covers most remote sensing analysis applications, such as pixel-based and tile-based analysis, and has a simple, highly feasible parallelization strategy. The following example illustrates the processing paradigm. Bishop-Taylor et al. (2021) extracted coastal zone changes for Australia from 1988 to 2019. The study partitioned the region of interest into sub-regions by space and then produced data cube datasets for each sub-region.
Subsequently, the shoreline changes from 1988 to 2019 in each sub-region were extracted using simple batch processing. Finally, the study combined all sub-regions and obtained the complete coastline change for Australia. This example outlines the typical routine for data-separable computing with simple batch processing: (1) partitioning the region of interest, (2) constructing data cubes for each partition, (3) computing each partition, and (4) combining the partitions. This kind of computing has been widely applied in RSBD, especially in data pre-processing and mapping. Each partition's implementation can be considered an individual computing task, which corresponds to the computing paradigm of simple batch processing. In addition, other processing models, such as MapReduce, can perform such computing as well.
Data-inseparable computing
There are dependencies between data-inseparable computing tasks, and thus the input data should be homogeneous, such as data cubes or composite layers. Currently, data-inseparable computing is mainly processed using MapReduce and array-based processing. Implementing parallel remote sensing processing algorithms based on MapReduce provides flexibility (Chebbi et al. 2018). MapReduce and Spark offer a range of flexible operators that can build complex computational pipelines to implement diverse parallelized computations for remote sensing analysis. A number of studies have implemented various data-inseparable remote sensing computations based on MapReduce-like technologies, such as K-Means clustering analysis (Chebbi, Boulila, and Farah 2016), parallelized mosaics (Jing et al. 2017), deep learning (Sun et al. 2019), and object-based segmentation. GeoTrellis (Kini and Emanuele 2014) is a Spark-based MapReduce processing technology that is oriented to remote sensing data processing and provides a set of APIs for remote sensing analysis.
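The four-step data-separable routine summarized above (partition, build per-partition cubes, compute, combine) can be sketched end-to-end; the region name, the toy per-partition "cube", and the analysis function are all illustrative placeholders, not the actual Bishop-Taylor et al. pipeline.

```python
def partition(region, n):
    """Step 1: split the region of interest into n spatial partitions."""
    return [(region, i) for i in range(n)]

def build_cube(part):
    """Step 2: stand-in for producing a data cube for one partition."""
    return {"part": part, "pixels": [part[1]] * 4}

def analyze(cube):
    """Step 3: stand-in for the per-partition analysis."""
    return sum(cube["pixels"])

parts = partition("australia_coast", 3)
# Steps 2-3 are independent per partition (embarrassingly parallel),
# so each could be a separate simple-batch task.
results = [analyze(build_cube(p)) for p in parts]
combined = sum(results)   # step 4: merge partition results
# → results == [0, 4, 8], combined == 12
```

Only step 4 touches more than one partition, which is exactly why this workload fits simple batch processing and needs no inter-task communication until the final merge.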
The MapReduce paradigm has also been widely used in the spatial information domain, e.g. SpatialHadoop (Eldawy and Mokbel 2015), GeoSpark (Yu, Wu, and Sarwat 2015), and GeoMesa (Hughes et al. 2015). However, there are still some known shortcomings in implementing data-inseparable computing with MapReduce. Compared with high-performance computing and message-passing interfaces, MapReduce attempts to shield the user from the underlying programming issues as much as possible; even so, remote sensing researchers must still deal with complicated issues of manual parallel computing. The performance of some memory-based MapReduce technologies, such as Spark, is considerable, yet they are not suitable for 'data-intensive' RSBD analysis (Makrani et al. 2018). Memory-based processing requires a large memory capacity to handle large remote sensing datasets, but yields only a slight improvement for algorithms with low iterative computation requirements. Apart from MapReduce processing, array-based processing is also suitable for data-inseparable computing. Arrays, especially composite layers, are the principal unit of remote sensing analysis. Array-based processing generally pre-defines diversified APIs to manipulate arrays in parallel. The built-in array operations and machine learning algorithms can be used for remote sensing analysis, such as vegetation index extraction and classification (Villarroya and Baumann 2020). For example, Earth Engine is dedicated to remote sensing analysis with many algorithms specializing in remote sensing science, such as time series analysis (Hamunyela et al. 2020) and cloud detection algorithms (Qiu, Zhu, and He 2019). Array-based processing shields remote sensing researchers from the underlying implementation of parallel computing, thus helping them to focus more on the computation itself. However, current array-based processing still has specific problems.
Most array-based processing technologies, such as array DBMSs or Dask, are not designed for remote sensing applications, and the provided APIs do not support professional remote sensing processing. Therefore, additional computing implementations are required as post-processors. For example, Pagani and Trani (2018) constructed a data system with RasDaMan and implemented subsequent remote sensing analysis based on R programming. Furthermore, the highly packaged APIs reduce the flexibility of implementing user-defined algorithms. For example, users cannot extend any analysis that Earth Engine does not support. Fortunately, this problem might be gradually alleviated with the development of open-source array-based processing technologies. Furthermore, the efficiency of current array-based processing (e.g. Dask) is lower than that of MapReduce (e.g. Spark), with some additional restrictions (Fu et al. 2020). Finally, current array-based processing services are mainly maintained by the open-source community (e.g. Pangeo), which restricts the available computational resources.
RSBD platforms
The RSBD platform is the best practice for RSBD applications. No RSBD storage or computing system can work in isolation. Instead, RSBD computing should work closely with storage systems, and platforms should implement RSBD applications by integrating storage and computing. Figure 6 summarizes the five architectures of RSBD platforms as characterized by the data model. The four components from left to right are storage, the data model, processing, and output. 1. Type 1 platforms parallelize the computation of scenes using simple batch processing and data-separable computing. The outcome of such platforms is delivered in the form of scenes. In terms of architecture, Type 1 platforms are composed of two parts, the data storage system and the simple batch processing system.
The data storage system is responsible for providing data services to the processing system, and the simple batch processing system manages the analysis algorithms and maintains execution tasks. Such platforms were widely used in early batch remote sensing data production and analysis. For example, the European Space Agency's G-POD project pre-populates many algorithms for remote sensing scenes and data and offers batch processing services. 2. Type 2 platforms are the mainstream technology route adopted for large-scale remote sensing analysis and computation. They consist of three main parts: the data storage system, the data cube production system, and the simple batch processing system. Type 2 platforms are similar to Type 1 platforms in that they support data-separable computing. In contrast, this type of platform processes the data into data cubes as the input and then outputs data cube datasets. For example, JEODPP divides the computational tasks of the data cube into independent subtasks and then uses HTCondor to implement multi-task batch processing (Soille et al. 2018; Corbane et al. 2017). 3. Type 3 platforms adopt MapReduce processing and support data-inseparable computing. For example, the ScienceEarth platform has been used to process remote sensing data as a data cube with the implementation of large-scale remote sensing analysis using Spark (Xu et al. 2022; Xu et al. 2020). Though MapReduce is powerful, it is not friendly to remote sensing researchers. Users must deal with detailed configuration issues in parallel computing by themselves, which prevents the widespread use of MapReduce in RSBD applications. Therefore, such platforms have a high difficulty threshold for implementation and use. 4. Type 4 platforms use array-based processing as the data processing system and support data-inseparable computing. Such platforms consist of three parts, including the data storage system, the data cube production system, and the array-based processing system.
To be specific, the data cube production system is responsible for processing remote sensing scene data into array-like data types (e.g. data cubes and composite layers), while the array-based processing system pre-defines many high-level APIs for the analysis of large arrays. 5. Type 5 platforms use array DBMSs as the core component and support data-inseparable computing. An array DBMS can store and manage massive remote sensing data with distributed systems; more importantly, it can implement array-based processing internally. Therefore, an array DBMS can independently constitute an RSBD platform. For example, Kuo et al. (2018) built an RSBD platform based on SciDB, which supported shared-memory parallelization (SMP) and distributed-memory parallelization (DMP). However, only simple array processing is supported by current array DBMS technology, so other computing systems are needed in most cases to achieve the complex analysis and computation of remote sensing data (Pagani and Trani 2018). In addition to the five types of platforms, some studies further package technologies on top of existing RSBD platforms. For example, BACI offers web-based services built on Google Earth Engine (Poortinga et al. 2018). OpenEO proposes a unified API for the data management and computational resources of different platforms (Schramm et al. 2021); rather than accessing Earth observation cloud services directly, OpenEO accesses their data and services through virtual data cubes, allowing deeper comparisons between compatible services. This section identified four significant cloud-based processing technologies and provided insights into data-separable and data-inseparable computing for RSBD. In addition, we summarized the five major architectures of current RSBD platforms in terms of their data models, data storage, and type of processing. As shown in Table 5, we summarize the representative platforms according to their technology routes. The conclusions are provided below. 1.
Simple batch processing has been widely applied in RSBD applications, especially in data-separable computing, while data-inseparable computing is becoming a popular topic in RSBD. Data-inseparable computing can be implemented using MapReduce or array-based processing. Although current array-based processing is in its infancy, this promising technology may lead to the next generation of remote sensing processing. 2. There are five principal types of RSBD platforms. Type 1 and Type 2 architectures can handle only the most basic, data-separable computing. Type 3 platforms have the best scalability for various algorithms but impose a considerable skill threshold in computer technologies. Type 4 platforms are the most advanced implementation at present; however, they require high construction costs and pose significant technical challenges. Type 5 platforms have reasonable prospects but are restricted by the development of array DBMSs. 3. Regardless of the type of RSBD platform, no single platform can comprehensively handle all diversified RSBD applications (Ni et al. 2021; Xie and Lark 2021; Chen et al. 2021). Under these circumstances, multiple platforms should be jointly leveraged. Thus, data must be transported between platforms (Lu and Wang 2021; Arvor et al. 2021; Brombacher et al. 2020), and a standard RSBD data model is necessary. Data cubes are a promising approach for meeting this standard.

Conclusion

The joint promotion of space technology, remote sensing science and technology, and computer technology has enabled humans to enter a new era of Remote Sensing Big Data (RSBD). RSBD is the best means of realizing global remote sensing analysis and will become the backbone of Big Earth Data, making additional contributions to sustainable human development (Guo et al. 2017; Guo et al. 2021). This research introduces state-of-the-art technologies and research trends concerning RSBD storage and computing.
In addition, this study provides a preliminary overview of the basic issues of RSBD for computing experts and remote sensing researchers, especially those who work on large-scale remote sensing research and applications. We hope to spark the reader's interest in RSBD through this research. However, RSBD is a broad topic, and we could not review every aspect of it; some issues (e.g. open data, data security, confidentiality, visualization) are therefore not covered. Additionally, despite our extensive literature references, some representations and claims in the manuscript inevitably remain open to debate. Remote sensing data mainly consist of raster data and metadata. Many studies have already accumulated valuable research for RSBD storage. RSBD storage technology has achieved satisfactory results in moving data from a single machine to clusters and from the local environment to the cloud. Among the options, cloud-based optimized remote sensing data storage technology and cloud storage technology represented by OSS, NewSQL, and NoSQL will become mainstream technical solutions for RSBD storage and management. Data homogeneity is necessary for large-scale analysis. In this regard, current RSBD technology mainly adopts four data models with different homogeneity characteristics: scenes, ARD, data cubes, and composite layers. The data cube has good compatibility with cloud computing and can support RSBD analysis through homogeneous multi-dimensional remote sensing data; we expect that this data model will become the mainstream RSBD data model in the future. According to the computational paradigm, RSBD computing can be divided into data-separable and data-inseparable computing. Data-separable computing has better parallelism and remains the computation type in most RSBD analyses. Data-inseparable computing, on the other hand, is the current hot topic.
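The distinction between the two paradigms can be illustrated with a minimal NumPy sketch (toy arrays and function names are ours, not drawn from any cited platform): a per-scene index computation is data-separable, while a per-pixel temporal composite over a data cube needs every time slice at once and is therefore data-inseparable.

```python
import numpy as np

def ndvi(scene):
    # Data-separable: each scene (2 bands, y, x) is processed independently,
    # so scenes can be farmed out to workers with no communication.
    red, nir = scene[0], scene[1]
    return (nir - red) / (nir + red + 1e-9)

def temporal_median(stack):
    # Data-inseparable: every output pixel depends on ALL time slices,
    # so the computation cannot be partitioned scene by scene.
    return np.median(stack, axis=0)

rng = np.random.default_rng(0)
cube = rng.random((4, 2, 3, 3))                 # 4 dates x 2 bands x 3x3 pixels
ndvi_stack = np.stack([ndvi(s) for s in cube])  # embarrassingly parallel
composite = temporal_median(ndvi_stack)         # requires the whole time axis
```

In a distributed setting the first loop maps cleanly onto independent batch jobs, whereas the median forces either a shuffle of the time axis (MapReduce style) or a chunked-array engine that plans the reduction (array-based style).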
There are three mainstream cloud-based big data technologies for remote sensing data analysis: simple batch processing, MapReduce processing, and array-based processing. Simple batch processing has been widely used. The MapReduce-based parallel computing paradigm can be applied to more complex remote sensing analysis applications. Moreover, array-based processing provides an easy-to-use and promising technical tool for remote sensing scientists. In this review, the five types of RSBD platform architectures were summarized. Type 4 has the most advanced architecture, adopting the data cube model and array-based processing. The multidisciplinary methodologies of RSBD are growing rapidly, and RSBD platforms already play important roles in various fields. In the future, however, the integration of satellite, airborne, ground-based, geospatial, and even socioeconomic data will be needed to produce more effective solutions to real-world problems, which will bring additional challenges. Novel real-time data processing paradigms, artificial intelligence algorithms, and innovative tools are expected to be assembled into RSBD platforms to extract more of the desired information from remote sensing data. The emergence of Federated Learning, a novel distributed processing paradigm, helps ensure data security during training and RSBD analysis. In addition, recent advancements in deep learning models provide a promising approach for RSBD interpretation over large scales. GPU-accelerated RSBD platforms are an ideal host for neural network architectures and open datasets. The high-quality information extracted at a global scale will provide new impetus for improving RSBD methodologies and platforms in remote sensing, and will aid the scientific community in assessing global disaster risk, monitoring climate change, and addressing the United Nations Sustainable Development Goals (SDGs).
Disclosure statement

No potential conflict of interest was reported by the author(s).
Cortico-Subcortical White Matter Bundle Changes in Cervical Dystonia and Blepharospasm

Dystonia is thought to be a network disorder due to abnormalities in the basal ganglia-thalamo-cortical circuit. We aimed to investigate the white matter (WM) microstructural damage of bundles connecting pre-defined subcortical and cortical regions in cervical dystonia (CD) and blepharospasm (BSP). Thirty-five patients (17 with CD and 18 with BSP) and 17 healthy subjects (HS) underwent MRI, including diffusion tensor imaging (DTI). Probabilistic tractography (BedpostX) was performed to reconstruct WM tracts connecting the globus pallidus, putamen and thalamus with the primary motor, primary sensory and supplementary motor cortices. WM tract integrity was evaluated by deriving their DTI metrics. Significant differences in mean, radial and axial diffusivity between CD and HS and between BSP and HS were found in the majority of the reconstructed WM tracts, while no differences were found between the two groups of patients. The observation of abnormalities in DTI metrics of specific WM tracts suggests a diffuse and extensive loss of WM integrity as a common feature of CD and BSP, aligning with the increasing evidence of microstructural damage of several brain regions belonging to specific circuits, such as the basal ganglia-thalamo-cortical circuit, which likely reflects a common pathophysiological mechanism of focal dystonia.

Introduction

Dystonia is a movement disorder characterized by abnormal postures and involuntary movements due to repetitive or sustained muscle contractions. It is now thought that dystonia arises through the involvement of a network including the basal ganglia, cerebellum, thalamus and sensorimotor cortices [1][2][3]. In line with this hypothesis, several studies have found abnormalities in the basal ganglia-thalamo-cortical circuit in patients with dystonia [1,[4][5][6][7][8].
In patients with the two most frequent forms of focal/segmental dystonia, characterized by clinical involvement of a single body part, namely cervical dystonia (CD) and blepharospasm (BSP), diffusion tensor imaging (DTI) studies demonstrated white matter (WM) changes [9] in several structures including the basal ganglia and cerebellum [10][11][12]. Microstructural alterations were also found in the WM adjacent to the primary sensorimotor, inferior parietal and middle cingulate cortices in patients with CD [13][14][15] and BSP [16][17][18]. Finally, studies with whole-brain approaches showed WM microstructural disruption in the corpus callosum, the internal capsule and the white matter underlying the sensorimotor cortex in CD and BSP patients [11,19,20]. In CD, tractography-based studies also demonstrated abnormal connections between infratentorial structures and the basal ganglia; specifically, between the pallidum and brainstem [21], between the thalamus, middle frontal gyrus and brainstem [22], and within the dentato-rubro-thalamic tract [23]. However, no studies have conducted tractography in BSP, and it is unknown whether CD and BSP have disorder-specific microstructural abnormalities, in line with recently demonstrated functional alterations [4], or whether they share similar abnormalities of the basal ganglia-thalamo-cortical network. In this paper, we investigate possible microstructural changes in CD and BSP of the WM bundles connecting predefined subcortical and cortical regions involved in the network underlying the pathophysiology of focal dystonia. Using a probabilistic tractography approach [24], we reconstruct WM tracts connecting the globus pallidus, putamen, and thalamus with the primary motor, primary sensory, and supplementary motor cortices. We then evaluate the integrity of those WM tracts by deriving their DTI metrics. Finally, we investigate possible correlations between WM microstructural damage and the clinical features of dystonic patients.
Participants and Clinical Assessment

Patients were consecutively recruited from the movement disorder outpatient clinic of the Department of Human Neurosciences, Sapienza University of Rome (Italy). Patient inclusion criteria consisted of a clinical diagnosis of CD or BSP according to diagnostic criteria [25] and age > 18 years. Exclusion criteria were neurological abnormalities other than tremor, psychiatric diseases, concomitant systemic disease (e.g., diabetes, liver disease, chronic renal failure, cardiovascular diseases) or contraindications to MRI. Forty-two patients with adult-onset focal dystonia were enrolled. The Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS) [26] and the Blepharospasm Severity Rating Scale (BSRS) [27] were used to assess the severity of CD and BSP, respectively, while quality of life and disability were evaluated using the Cervical Dystonia Impact Profile (CDIP-58) for CD and the Blepharospasm Disability Index (BSDI) for BSP. Disease duration and handedness were recorded for all patients. All patients were evaluated at least 3 months after the last botulinum toxin injection to exclude any possible confounders due to the botulinum neurotoxin effect. None of the patients were under other treatment. A group of 17 age- and sex-matched healthy subjects (HS) from a pool of volunteers was enrolled as a control group. All the participants gave their informed consent, and the experimental procedure was approved by the ethics committee of Sapienza University of Rome (CE n. 4041, 24 March 2016) and conducted in accordance with the Declaration of Helsinki.

MRI Data Analysis

Before data analysis, all images were visually inspected for a qualitative assessment of artifacts.

Selection of ROIs

To reconstruct WM tracts between the sensorimotor cortex and subcortical structures, regions of interest (ROIs) were defined.
We used probabilistic atlases to identify the cortical ROIs: the primary motor cortex (M1) (head/face region) and the primary somatosensory cortex (S1) (face/upper limb region) were derived from the Brainnetome atlas (https://atlas.brainnetome.org/download.html (accessed on 15 September 2022)) [30], and the juxtapositional lobule cortex (formerly supplementary motor cortex, SMA) was identified from the Harvard-Oxford Cortical Structural Atlas (http://www.fmrib.ox.ac.uk/fsl/data/atlas descriptions.html (accessed on 15 September 2022)). We thresholded the M1, S1 and SMA ROIs at 25% and then divided them at the sagittal plane x = 0 into right and left regions. We used FIRST-FSL to identify the subcortical ROIs in each patient: the left and right globi pallidi, putamina and thalami. Finally, we registered the cortical regions from standard space and the subcortical regions from structural space into subject diffusion space and visually checked for accuracy.

Tractography

We performed probabilistic tractography within each participant's diffusion space using BedpostX, part of FSL (FMRIB's Software Library v.6.0.4, http://www.fmrib.ox.ac.uk/fsl/) (accessed on 1 October 2022) [31] with default parameters. We generated streamline probability distribution maps between each predefined subcortical and cortical ROI. In each reconstructed map, we specified the subcortical region as the seed, the cortical region as the target and the contralateral hemisphere as the exclusion mask. We also specified the cortical target region as a termination mask, to identify the only and exact connections between the given seed and the given target [32]. We then normalized the pathway probability maps for seed size by dividing the probability maps by the total number of successfully generated streamlines, and we removed spurious connections by thresholding the resulting maps at 5% [32,33].
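The normalization and thresholding steps just described can be sketched as follows (a minimal NumPy version; the array shape is illustrative, and we interpret the 5% threshold as zeroing voxels visited by fewer than 5% of the generated streamlines):

```python
import numpy as np

def normalize_and_threshold(streamline_counts, n_generated, frac=0.05):
    # Per-voxel visitation probability: streamline count / total streamlines.
    prob = streamline_counts / float(n_generated)
    # Remove spurious connections: zero voxels below the threshold.
    prob[prob < frac] = 0.0
    return prob

counts = np.array([[5000.0, 400.0], [40.0, 0.0]])  # toy 2x2 "map"
pmap = normalize_and_threshold(counts, n_generated=5000)
# 5000/5000 = 1.0 and 400/5000 = 0.08 survive; 40/5000 = 0.008 is zeroed
```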
We then binarized the thresholded probability maps and overlaid them on the individual FA, MD, AD and RD maps, from which we extracted average values [34] to evaluate the integrity of the WM tracts.

Statistical Analysis

Statistical analysis was performed using SPSS software (IBM SPSS Statistics, version 25.0, IBM Corp., Armonk, NY, USA). One-way ANOVA was used to compare age, and the χ2 test was used to compare sex between patients and HS. Group differences in DTI measures (FA, MD, RD and AD) within the WM tracts of interest were tested via non-parametric analysis (Kruskal-Wallis). The significance level was set at p < 0.05, Bonferroni-corrected for multiple comparisons. To correlate WM microstructural damage of the tracts of interest with clinical scales, the altered DTI metrics of each WM bundle were non-parametrically correlated via Spearman's correlation test (Bonferroni-corrected for multiple comparisons) with TWSTRS and CDIP-58 for the CD patients and with the BSRS and BSDI for the BSP group. Subsequently, to limit the number of correlations, we derived indexes of global damage of the subcortical-sensorimotor WM tracts for each patient by averaging the FA, MD, AD and RD values of all reconstructed WM bundles, thus obtaining the FA index, MD index, AD index and RD index. Each index was then correlated with clinical scores via a non-parametric test (Spearman's correlation test). The significance level was set at p < 0.05, Bonferroni-corrected for multiple comparisons.

Results

Forty-two patients with adult-onset focal dystonia and 17 HS were enrolled in the study. Due to motion artifacts in the MRI images, seven patients were excluded (five with CD and two with BSP). Thirty-five patients (17 with CD and 18 with BSP) and 17 HS were included in the analysis. No differences in age (F = 2.35, p = 0.09) or sex (F = 2.89, p = 0.06) were found between the three groups. All subjects were right-handed. The demographic and clinical characteristics of the study participants are reported in Table 1.
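The Spearman-plus-Bonferroni procedure used in the statistical analysis can be sketched without statistical packages: Spearman's ρ is Pearson's correlation computed on ranks, and the Bonferroni correction multiplies each p-value by the number of comparisons (a sketch of the underlying arithmetic, not the SPSS implementation used in the study):

```python
import numpy as np

def ranks(x):
    # Average ranks, handling ties.
    x = np.asarray(x, float)
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):
        r[x == v] = r[x == v].mean()
    return r

def spearman_rho(x, y):
    # Pearson correlation of the rank vectors.
    rx, ry = ranks(x) - ranks(x).mean(), ranks(y) - ranks(y).mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

def bonferroni(p_values):
    # Adjusted p = min(1, p * number_of_tests).
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

rho = spearman_rho([1, 2, 3, 4, 5], [2, 8, 9, 30, 31])  # monotone -> rho = 1.0
adj = bonferroni([0.01, 0.04, 0.5])
```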
Streamlines of WM tracts were successfully generated for all participants (Figure 1). Significant between-group differences in MD, RD and AD were found in the majority of the reconstructed WM tracts, while FA was significantly different in one WM tract alone (Tables 2-5). Post hoc testing (Dunn-Bonferroni) showed significant differences between CD and HS and between BSP and HS, while no differences were found between the two groups of patients. Specifically, patients showed lower FA and higher MD, RD and AD compared to HS (Tables 2-5 and Figures 2-5). In patients with BSP, a significant positive correlation was found between BSRS and the MD and RD of all WM bundles (data not shown). No correlation was found between the extent of WM damage and either TWSTRS or CDIP-58 in the CD group. When correlating the indexes of global damage of the subcortical-sensorimotor WM tracts with clinical scales, a significant positive correlation was found between the MD and RD indexes and BSRS in patients with BSP (Figure 6).
Discussion

This study investigated white matter microstructural features in the two most frequent types of adult-onset focal dystonia, CD and BSP. For the first time, we studied CD and BSP patients with a methodology recently used for WM tract reconstruction in patients with embouchure dystonia [24] to evaluate specific WM bundles with a probabilistic tractography approach. We found that both forms of dystonia shared extensive microstructural changes in the WM bundles of the basal ganglia-sensorimotor network, with no DTI parameter able to differentiate one form of dystonia from the other. The analysis of specific WM bundles connecting subcortical structures and sensorimotor cortices showed extensive fiber loss in CD and BSP compared to HS, with no differences between the two groups of patients. Only a few tractography-based studies have demonstrated abnormalities in WM tracts in patients with CD, and these focused on tracts between infratentorial structures and the basal ganglia; specifically, between the pallidum and brainstem [21], between the thalamus, middle frontal gyrus and brainstem [22], and within the dentato-rubro-thalamic tract [23]. In the present study, the subcortical ROIs coincided with the putamen and the globus pallidus as the primary basal ganglia input and output structures, respectively, and the thalamus as a relay structure between the basal ganglia, cerebellum and cortex. Cortical ROIs were identified in the head/face regions of the primary sensory and motor cortices and the SMA.
The choice of investigating direct cortico-pallidal connectivity was based on animal studies describing direct projections from the cerebral cortex to the globus pallidus [35,36] and on recent diffusion tractography studies showing direct cortico-pallidal projections in humans [37][38][39], which are relevant to the pathophysiology of dystonia [40,41]. In CD and BSP patients, DTI analysis revealed a diffuse increase in MD, RD and AD in the majority of the reconstructed WM tracts, and an FA reduction limited to the pallidum-SMA tract, without differences between the two groups of patients. An increase in MD, which reflects cellular density and extracellular volume [42,43], indicates a less organized myelin and/or axonal structure [44], while increased AD and RD, which give information about the spatial orientation of fibers, suggest prevalent axonal damage [45] and demyelination [46], respectively. Reduced FA can be caused by the degradation of myelin sheaths and/or axonal membranes [45,47,48]. Overall, the data of the present study support the hypothesis of axonal and myelin loss due to microstructural abnormalities of the basal ganglia-thalamo-cortical circuit, with alterations of both the direct and indirect pathways [49] and the direct cortico-pallidal pathways [37,40,50]. The results of the present study, showing changes in specific white matter tracts, expand on the previous literature on CD and BSP demonstrating diffuse microstructural damage in the basal ganglia, cerebellum and sensorimotor cortical areas [10][11][12][13], as well as in the white matter [11,19,20,51,52]. Moreover, the WM tracts we reconstructed correspond to brain regions with microstructural integrity loss in the WM adjacent to the pallidum and putamen and the precentral and postcentral gyri [13,17,53]. Unlike our results, Berman and colleagues found different patterns of altered microstructural WM changes in CD and BSP.
Specifically, when comparing CD and BSP patients, reduced FA in the cerebellum and the bilateral caudate nucleus was found in CD patients, whereas reduced FA in the globus pallidus internus and the red nucleus was found in BSP patients [10]. The reasons for these different findings probably lie in the different methodological approaches of the studies and the different brain regions investigated. We also found a significant correlation between the MD and RD of all reconstructed WM tracts and the severity of blepharospasm. This finding is in line with previous studies that showed a correlation between altered DTI metrics in subcortical structures [10,12] and long WM tracts [54] and clinical scales in BSP. The absence of a correlation between the extent of WM damage and clinical scales in patients with CD is also consistent with previous studies [10,23,52]. The development of unbiased and reliable clinical scales for focal dystonia is an important field of research in the current literature. This study is not without limitations. The cross-sectional design makes it impossible to conclude whether the changes we described are causative or compensatory. DTI allows noninvasive in vivo assessment of brain structural connectivity; however, caution is needed when interpreting the results, given the intrinsic limitations of the DTI technique for defining the direction of structural connection change. The availability of more objective and reliable clinical scales, not biased by patient perception, could overcome the difficulty of making clinical-radiological correlations. To conclude, the present observation of changes in DTI metrics of specific WM tracts suggests a diffuse and extensive alteration in WM integrity as a common feature of two forms of focal dystonia, namely cervical dystonia and blepharospasm.
The present results align with the increasing evidence of microstructural damage to several brain WM bundles belonging to a specific circuit, i.e., the basal ganglia-thalamo-cortical circuit. The altered structural connectivity between the basal ganglia and sensorimotor cortices parallels the functional connectivity abnormalities consistently reported in the basal ganglia-thalamo-sensorimotor circuit in cervical dystonia and blepharospasm, likely indicating a common pathophysiological mechanism underlying both forms of focal dystonia.
Restriction endonuclease triggered bacterial apoptosis as a mechanism for long time survival

Abstract

Programmed cell death (PCD) under certain conditions is one of the features of bacterial altruism. Given the bacterial diversity and varied lifestyles, different PCD mechanisms must be operational that remain largely unexplored. We describe restriction endonuclease (REase) mediated cell death by an apoptotic pathway, beneficial for isogenic bacterial communities. Cell death is pronounced in stationary phase and when the enzyme exhibits promiscuous DNA cleavage activity. We have elucidated the molecular mechanism of REase mediated cell killing and demonstrate that the nutrients released from dying cells support the growth of the remaining cells in the population. These findings illustrate a new intracellular moonlighting role for REases, which are otherwise established host defence arsenals. REase induced PCD appears to be a cellular design to replenish nutrients for cells undergoing starvation stress, and the phenomenon could be widespread in bacteria, given the abundance of restriction–modification (R–M) systems in the microbial population.

INTRODUCTION

Microorganisms have evolved various strategies that allow them to inhabit almost any niche. Metabolic flexibility and the capacity for adaptation to different environmental cues underscore their success. The ability to respond as a population to changing environmental conditions is another crucial factor for their survival. Bacteria employ a variety of developmental programs for their diverse social behavior (1). Although best exemplified in organisms such as Myxococcus xanthus (2), Bacillus subtilis, Pseudomonas aeruginosa, Vibrio cholerae and a few other species, it is now apparent that a large number of bacteria belonging to different groups exhibit community behavior under certain conditions and circumstances (1,3).
Well-studied quorum sensing and biofilm formation often exhibit interlinked features of bacterial social life (3,4). Studies on the switch from planktonic to multicellular lifestyles and the associated altered gene expression patterns have led to a paradigm shift in our understanding of social behavior in bacteria (4). Under certain hostile conditions, bacteria undergo programmed cell death (PCD), defined as the death of any cell mediated by an intracellular program (5,6). Different factors and conditions are likely to be associated with PCD, and only a few of them are well documented (7). The death of the mother cell during sporulation in Bacillus and cell lysis during Myxococcus fruiting body development are well-studied examples. PCD mediated by toxin-antitoxin modules under stressful conditions and antibiotic action are other emerging examples. To better understand the biological significance of PCD, it is important to investigate the different molecular mediators involved in the process. Here, we describe restriction endonuclease (REase) mediated PCD and its likely benefit for the bacterial population. R-M systems are ubiquitous and diverse, serving as an innate immunity component of bacteria by targeting invading genomes. It is also apparent that cellular defence by R-M systems is not an infallible mechanism to counter invading bacteriophages. Phages elaborate diverse anti-restriction strategies; thus, host and virus are continuously engaged in a co-evolutionary arms race (8). Given their wide distribution and the presence of several enzymes in many genomes, REases are implicated to have other cellular roles (9). These functions range from genetic exchange between bacteria through DNA uptake and homologous recombination (10) to nutrition for viral propagation (11) and virulence. Many R-M systems also appear to exhibit selfish behavior (12).
We have considered a new intracellular role for REases due to the intrinsic promiscuity exhibited by a number of R-M systems (9). Would the inherent promiscuous nature of REases have consequences for host cell survival under certain conditions? Our studies provide evidence for REase mediated altruistic behavior in bacteria. Endonuclease triggered DNA damage leads to cellular apoptosis, which appears to benefit the survival of the rest of the population. Such a moonlighting function for these enzymes could have far-reaching implications for the community behavior of bacteria.

Bacterial Strains

Escherichia coli (E. coli) MG1655 was transformed with the low-copy plasmid pACYC184 (13) harbouring M.KpnI (M), the entire KpnI R-M system (WT) (14) or KpnI R-M with a high-fidelity mutation (D163I) in the R gene (HF). The genes (r.KpnI and m.KpnI) are expressed from their own endogenous promoters. The strains and genotypes of the bacteria and plasmids used in this study are listed in Supplementary Table S1.

Confocal microscopy and FACS analysis

To examine the cells using confocal microscopy, DiBAC4(3) (Sigma) or the Live/Dead kit was used. The Live/Dead kit (Invitrogen) contains propidium iodide (PI) and Syto9. PI stains dead cells to give red fluorescence, whereas Syto9 stains live cells and gives green fluorescence upon binding to nucleic acid. E. coli MG1655 cells harbouring WT, HF or M were grown to different time points. The cells were pelleted and washed twice with 1× PBS (phosphate-buffered saline). To analyse cell morphology, the samples were stained with 0.1 mg/ml 4′,6-diamidino-2-phenylindole (DAPI) (cells appear blue). For DiBAC4(3) staining (Invitrogen), 10 µl of DiBAC4(3) (1 mg/ml) in ethanol was added. For the Live/Dead staining, 10 µl of a 1:1 mixture of PI and Syto9 was used. Samples were incubated for 15 min at room temperature and washed twice in 1× PBS. The cells were visualized in a ZEISS LSM-710 confocal microscope under a 100× objective.
To observe DiBAC4(3) (green) and Syto9 (green), the argon laser with excitation at 488 nm and emission at 515 nm was used. To observe PI (red), a HeNe laser with excitation at 543 nm and emission at 570 nm was used. To compensate for the overlapping wavelengths of Syto9 and PI, sequential scanning was carried out. A total of 800 bacterial cells were counted for the quantification of PI staining. The fraction of PI-stained cells per 10,000 cells or DiBAC4(3)-stained cells per 5,000 cells was analysed by fluorescence intensity using FACS, and the results were plotted using FCS Express V3 software. All the experiments were repeated at least three times independently, and error bars indicate standard deviation (SD).

Real-time PCR analysis

Cells were grown to different growth phases (exponential phase: 6 h; stationary phase: 16 h). RNA extraction was carried out using RNAprotect Bacteria Reagent (Qiagen) and the RNeasy Mini Kit (Qiagen) according to the manufacturer's protocol. RNA concentration and purity (A260/280) were measured using a NanoDrop ND-1000 spectrophotometer. Ten micrograms of total RNA were subjected to DNase I treatment (Roche) as per the manufacturer's instructions, followed by cDNA synthesis using an ABI high-capacity cDNA synthesis kit. Two micrograms of DNase I-treated RNA and random primers were used to synthesize cDNA. For quantitative real-time PCR (qPCR), cDNA was diluted 10-fold and quantitation was carried out in a 10 µl reaction using SYBR Green Master Mix (Fermentas) in a 7500 Fast Real-Time PCR system (Applied Biosystems). The qPCR cycling conditions were as follows: 95 °C for 2 min, followed by 40 cycles of 95 °C for 15 s, 57 °C for 30 s and 72 °C for 20 s. All reactions were repeated at least three times independently to ensure the reproducibility of the results. idnT transcript levels were used as an endogenous control for RNA levels in the samples (15).
Relative mRNA levels of genes in WT and HF were calculated after normalizing with the mRNA level from M. Graphs were plotted as a ratio of relative mRNA levels in stationary phase versus exponential phase. Statistical significance was calculated using t-test. The primers used in this study are listed in Supplementary Table S2A.

Detection of OH* formation

To detect hydroxyl radicals (OH*), hydroxyphenyl fluorescein (HPF), a molecule which fluoresces only when it reacts with OH*, was employed. E. coli MG1655 cells harbouring WT, HF or M were grown; cells were pelleted and washed twice with 1× PBS. HPF was added to the samples to a final concentration of 10 µM, and cells were incubated at room temperature for 1 h, then washed twice with 1× PBS (pH 7.2). The fraction of HPF-stained cells per 5,000 cells was analysed using fluorescence intensity by FACS on a BD FACSVerse™ machine with a 488-nm argon laser for excitation and a 530 ± 15-nm emission filter, and the results were plotted using FCS Express V3 software. Each experiment was repeated at least three times and error bars indicate SD.

Complex I and complex II activity of oxidative respiration

Cells were grown to exponential phase (6 h) and stationary phase (16 h), and aliquots were taken at different time intervals. Cell pellets were resuspended in 50 mM morpholineethanesulfonic acid (MES) buffer (pH 6.0) with 10% glycerol and 1 mg/ml lysozyme, incubated with vigorous shaking at room temperature for 5 min, and sonicated twice for 10 s. The cell extracts prepared were used for the NADH dehydrogenase I activity assay as described (16). To prepare the membrane fraction, whole-cell extract was centrifuged (45,000 × g, 2 h) and the pellet was suspended in the same buffer. This membrane fraction was used to test succinate dehydrogenase activity (16). Protein concentrations were determined by the Bradford method using BSA as a standard.
NADH dehydrogenase total activity (respiratory complex I + NdhII) was assayed spectrophotometrically at 30 °C by following the absorbance at 340 nm (ε(NADH) = 6.22 mM⁻¹ cm⁻¹), in a reaction mixture containing 50 mM MES (pH 6.0), 10% glycerol and 200 µM NADH. Succinate dehydrogenase activity was assayed from the membrane fraction by monitoring dichlorophenol indophenol (DCPIP) reduction. The activity was determined spectrophotometrically at 30 °C by following the phenazine ethosulfate (PES)-coupled reduction of DCPIP at 600 nm, in a reaction mixture containing 50 mM Tris-HCl (pH 7.5), 4 mM succinate, 1 mM KCN, 400 µM PES and 50 µM DCPIP (16). These data represent the results of three independent experiments and error bars indicate SD. Statistical significance was calculated using two-way ANOVA coupled with Bonferroni's post-test and is represented by an asterisk [P < 0.001 (***); P > 0.05, not significant (ns)].

Western blot analysis

Briefly, Klebsiella pneumoniae (K. pneumoniae) OK8 cell lysate or E. coli MG1655 cells expressing both MTase and REase from their respective endogenous promoters (WT) were grown to different growth phases. Cells were harvested by centrifugation, resuspended in 3 ml of extraction buffer [10 mM Tris-HCl (pH 8.0), 50 mM NaCl, 5 mM 2-mercaptoethanol], and disrupted by sonication. Cell debris was removed by centrifugation. Equal amounts (250 µg) of total cell lysate were resolved by 12% SDS-PAGE and detected by immunoblotting with the respective polyclonal antibodies. The R.KpnI polyclonal antibody was generated in mice, whereas the M.KpnI and ribosome recycling factor (RRF) polyclonal antibodies were generated in rabbits. RRF was used as an endogenous loading control and the experiments were repeated three times.

Long term growth experiments

Overnight grown cultures of WT, HF and M were subcultured and then diluted to an OD of 0.1.
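The spectrophotometric assays above convert an absorbance slope into an enzymatic rate via the Beer-Lambert law, using ε(NADH) = 6.22 mM⁻¹ cm⁻¹ for the 340-nm measurement. A sketch of that arithmetic; the assay volume, protein amount and slope below are illustrative assumptions, not values from the paper:

```python
def specific_activity(dA_per_min, epsilon_mM=6.22, path_cm=1.0,
                      volume_ml=1.0, protein_mg=0.5):
    """Convert an absorbance change per minute into a specific activity
    (umol substrate converted per min per mg protein) via Beer-Lambert:
    concentration change (mM/min) = dA_per_min / (epsilon * path length)."""
    d_conc_mM = dA_per_min / (epsilon_mM * path_cm)  # mM consumed per min
    umol_per_min = d_conc_mM * volume_ml             # mM x ml == umol
    return umol_per_min / protein_mg

# Illustrative slope: a drop of 0.0622 A340/min in a 1-ml cuvette with
# 0.5 mg protein corresponds to ~0.02 umol NADH oxidized per min per mg.
rate = specific_activity(dA_per_min=0.0622)
```

The DCPIP-based complex II assay follows the same arithmetic with the extinction coefficient for DCPIP at 600 nm in place of ε(NADH).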
For long-term growth studies, individual cultures expressing WT, HF or M were grown for up to 144 h, and CFU analysis was carried out by plating on solid agar containing chloramphenicol (25 µg/ml) at different time points. Graphs were plotted as CFU/ml against time and error bars indicate SD. Results are presented from three independent experiments.

Conditioned medium experiments

Filter-sterilized conditioned medium was prepared following the protocol described previously (17). LB medium (50 ml) in 250-ml Erlenmeyer flasks was inoculated with cells from a fresh overnight culture of WT, HF or M and grown at 37 °C with vigorous aeration. After 5 days (120 h), cells were pelleted and the supernatant was collected. The supernatant was filtered through a 0.2 µm nylon filter unit (Nalgene) to remove intact cells. This conditioned medium was then used for the growth of a new culture. Freshly grown E. coli MG1655 (pACYC184) cells at 1 × 10³ CFU/ml were inoculated into the conditioned medium and aliquots were taken at different time points. OD was monitored at 595 nm and CFU analysis was carried out at different time points. All experiments were repeated five times and data are presented from three independent biological replicates. Error bars indicate SD.

Statistical analysis

Levels of significance for comparisons among the samples were determined by Student's t-test and analysis of variance (ANOVA) (18). The graphical data are expressed as mean ± SD from three independent experiments unless mentioned otherwise. P values ≤ 0.05 were considered significant. Prism 5.0 software (GraphPad Software, Inc., USA) was used for all statistical analyses.

REase mediated bacterial cell death

The primary determinants of cell fate in bacterial populations are not completely understood. We hypothesized that R-M systems might be important contributors to cell fate under certain circumstances.
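The CFU/ml values reported throughout the growth experiments above are back-calculated from colony counts on plated serial dilutions. A minimal sketch of that calculation, with illustrative counts and dilutions (not the study's data):

```python
def cfu_per_ml(colony_count, dilution_factor, volume_plated_ml):
    """Back-calculate viable cells per ml of the original culture from the
    colony count on one plate of a serial dilution series."""
    return colony_count * dilution_factor / volume_plated_ml

# e.g. 50 colonies from 0.1 ml of a 10^4-fold dilution -> 5 x 10^6 CFU/ml
titer = cfu_per_ml(colony_count=50, dilution_factor=1e4, volume_plated_ml=0.1)
```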
A Type II R-M system comprises two components: a methyltransferase (MTase), an epigenetic determinant, and a REase, a DNA cleavage factor. We chose the well-studied KpnI R-M system for the present analysis. The genes for the REase and MTase are arranged divergently in the KpnI R-M locus, and the intergenic region contains all the regulatory elements required for the expression of both genes (Figure 1A) (19). The MTase of the KpnI R-M system is an N6-adenine methyltransferase and is highly sequence-specific (20). The REase exhibits a high degree of promiscuous activity, but its single amino acid variant (D163I) shows high fidelity compared to WT, i.e. it cleaves only at canonical sites (20–22). E. coli MG1655 cells harbouring either the entire KpnI R-M system (WT), the D163I mutation in the R gene along with M (HF), or the MTase alone (M) on a low-copy-number plasmid were employed to understand the role of the MTase and REase, if any, in cellular physiology. The cells harbouring M alone did not show a significant difference in growth compared to vector-containing cells. Surprisingly, the viability assays showed that the WT had reduced growth compared to the other two strains (Supplementary Figure S1). In the microscopic analysis, a subpopulation of cells appeared to be dead (PI-stained cells, red) (Figure 1B). Death was more pronounced in cells expressing WT compared to the others at late logarithmic and stationary phases (compare Figure 1C and D). FACS analysis also revealed a higher percentage of dead cells in REase-expressing populations during stationary phase (Figure 1D). Microscopic analysis of cells expressing the HindIII R-M system showed similar results (Supplementary Figure S2). The REase-expressing cells showed filamentation and increased dead cells compared to MTase-alone cells.

REase induces extensive DNA damage

From the above results, we surmised that extensive REase-catalysed DNA cleavage caused the cell death.
DNA damage, if any, caused by the REase would also have induced the SOS response, leading to RecA activation and LexA proteolysis. We measured the SOS response in the WT, HF and M cells using a PdinD::lacZ fusion as a reporter (23). In the stationary phase, WT activated the highest SOS response (measured as β-galactosidase activity) when compared to HF or M cells (Figure 2A). This suggested that the WT R.KpnI cleaved DNA at non-cognate sites, resulting in increased accumulation of double-stranded breaks leading to SOS induction. Further, genes under the SOS regulon, viz. recA, lexA, sulA and recN, were up-regulated in WT by 2.5- to 4.5-fold compared to HF (values normalized to levels in M), indicating that a functional SOS response pathway was activated (Figure 2B). Similarly, in K. pneumoniae OK8 itself, we found differences in the expression of genes belonging to the SOS regulon (Supplementary Figure S3). We hypothesized that the observed SOS response could be due to altered levels of MTase and REase. Expression analysis (at both mRNA and protein levels) showed that the level of REase was indeed higher than that of MTase (Figures 2C-D and 3A-B) at stationary phase, suggesting that the increase in REase level induces the SOS response. Although mRNA levels were not significantly different in this low-copy-number plasmid setup when the cells transit from exponential to stationary phase, a substantial difference in the levels of the two proteins was apparent. However, in K. pneumoniae OK8 (which carries a genomic copy of the R-M system), increased expression of REase mRNA and protein was seen. A DNA cleavage assay showed the promiscuous activity of the enzyme (Supplementary Figure S4); both canonical and non-canonical sites were cleaved efficiently by the REase.

Endonuclease mediated bacterial apoptosis

The cell death observed above could be an accelerated natural process or the induction of a specific program of cascading events, as seen during PCD.
During PCD, bacterial cells are known to undergo DNA damage, cell filamentation, membrane depolarization and hydroxyl radical formation (7). Microscopic analysis showed that a sub-population of cells was filamented in WT (Figure 3C). Further, DNA fragmentation was visualized by analyzing the genomic DNA (Supplementary Figure S5).

Nucleic Acids Research, 2017, Vol. 45, No. 14, 8427

Figure 2. REase induces SOS response by DNA damage: (A) E. coli AP1 strain with a chromosomally placed dinD::lacZ promoter fusion carrying plasmids (WT, HF or M) was grown at 37 °C in LB broth to different growth phases. The cultures were diluted to OD600 0.3 and β-galactosidase activity was measured as described previously by Miller. Miller units (in units per milliliter) were calculated and the results were plotted. Data are represented as mean ± SD from three independent experiments. Statistical significance was calculated using two-way ANOVA coupled with Bonferroni's post-test and is represented by an asterisk [P < 0.001 (***)]. (B) Real-time PCR analysis was carried out to quantify recA, lexA, sulA and recN mRNA levels at exponential phase (6 h) and stationary phase (16 h). idnT transcript levels were used as an endogenous control. Relative mRNA levels of genes in WT and HF were calculated after normalizing with mRNA levels from M. The graph is plotted as a ratio of relative mRNA levels in stationary versus exponential phase. Statistical significance was calculated by unpaired t-test; (*) P < 0.05. (C) Cellular mRNA levels of r.kpnI and m.kpnI were determined in E. coli MG1655 expressing WT or HF at different growth phases. idnT transcript levels were used as an endogenous control. Relative mRNA levels of genes in WT and HF were calculated after normalizing with mRNA levels from early exponential phase. Statistical significance was calculated using two-way ANOVA coupled with Bonferroni's post-test and is represented by an asterisk [P < 0.05 (*); P > 0.05, not significant (ns)]. (D) Cellular mRNA levels of r.kpnI and m.kpnI were determined in K. pneumoniae OK8 at different growth phases. 16S rRNA transcript levels were used as an endogenous control for RNA levels in the samples. Relative mRNA levels of genes were calculated after normalizing with mRNA levels from early exponential phase. Experiments were carried out three times independently and error bars represent SD. Statistical significance was calculated by unpaired t-test; (*) P < 0.05.

[Caption fragment: … (3) intensity. The experiment was repeated three times independently and error bars indicate SD. (F) Hydroxyl radical (OH*) formation was determined by HPF staining using flow cytometry. Graphs were plotted by considering HPF-stained cells/5,000 cells. All experiments were carried out three times independently and error bars indicate SD. Significant differences were observed between WT and HF or M cells, as calculated using one-way ANOVA coupled with a multiple-comparison post-test, and are represented by an asterisk; P < 0.001 (***) and P < 0.05 (**).]

Membrane depolarization was analyzed using DiBAC4(3), a voltage-sensitive fluorescent dye (24). A sub-population of WT cells exhibited enhanced fluorescence (Figure 3D). Quantification by FACS revealed that >2-fold more WT cells displayed DiBAC4(3) fluorescence (Figure 3E), i.e. more membrane depolarization compared to cells carrying HF or M. Membrane depolarization causes reactive oxygen species (ROS) production, leading to apoptosis (25). Indeed, an enhanced formation of hydroxyl radical (OH*) was observed (Figure 3F). All these results indicate that REase-induced DNA damage stimulates an apoptotic-like response. We observed a 6-fold increase in rpoE expression in WT cells during stationary phase (Supplementary Figure S6). Sigma E is known to direct the expression of genes specific for cell lysis (26). Different mechanisms of PCD in bacteria have been connected to different cellular phenomena (5,27).
The three major PCD pathways described to date are mazEF-mediated death (28), thymineless death (TLD) (29) and apoptotic-like death (ALD) (30,31). mazEF, one of the well-studied TA systems, induces PCD in most of the population by increasing the synthesis of 'death proteins' (28). ALD is a death pathway induced by extreme DNA damage and is mainly dependent on RecA (31). The TLD pathway is triggered by thymine starvation in E. coli cells (29). Among these, we considered the possibility that the PCD observed in the present study occurred through the ALD pathway, given the extent of DNA damage and the induction of the SOS response (see Figure 2). Compared to the canonical SOS pathway, a unique set of genes termed extensive-damage-induced (Edin) genes is induced during ALD (30). Indeed, expression analysis of a representative subset of Edin genes, viz. sdhA, uvrA, oppB and aceA, showed a 1.2- to 2.8-fold up-regulation in WT compared to M alone (Figure 4A). However, analysis of the expression of genes specific to either the mazEF system (yfiD, yfbU and yajQ) or the TLD pathway (yebG, ycgB) did not show any appreciable change (Supplementary Figure S7). Notably, earlier studies have revealed that sdhA and oppB respond to both the ALD and TLD pathways but in an opposite manner; they were up-regulated in the former while their expression decreased in the latter (29,30). From the data, it appears that REase-mediated cell death follows the ALD pathway. Further, during ALD, a decrease in the activity of complex II of oxidative respiration has been demonstrated (30). In our studies, the activities of both complexes (I and II) of oxidative respiration were reduced at stationary phase compared to log phase in REase-expressing cells (Figure 4B). Together, these studies demonstrate that REase-harbouring cells induce PCD in bacteria through the ALD pathway.
REase induced cell death benefits population at large

In bacterial species where PCD has been studied, it is apparent that these evolutionarily conserved processes primarily contribute to population benefit. For example, under conditions of nutrient starvation, M. xanthus cells undergo PCD as a mechanism to release nutrients to the remaining clonal cells for the development of myxospores (32). In contrast, 'cannibalistic' B. subtilis resorts to feeding on its siblings in order to delay the onset of sporulation (33). These and other examples show the importance of PCD for increasing the fitness of a population as a whole. Would the REase-mediated PCD confer any such benefit to bacteria? Comparison of the growth of WT, HF and M cells (Supplementary Figure S1) revealed a reduced net growth of WT, correlating with the cell killing observed in the stationary phase (see Figure 1), implying that WT cells could be fitness-compromised. However, when cultures were grown for longer periods, i.e. beyond 72 h, surprisingly, more viable cells could be scored for WT compared to the others (Figure 5A). The survival of the WT in long-term growth could be due to nutrients released from the dying cells that would allow clonal sibling survival, prolonging the life span of the population and contributing to fitness under nutrient-limiting conditions. This premise was tested by the following experiments. Conditioned media were prepared from cultures of WT, HF and M cells and tested for whether the media could support the growth of a freshly inoculated new culture. Supplementary Figure S8 shows the scheme of preparation of the conditioned medium (17). Briefly, conditioned media were prepared by filter sterilization of supernatants collected from 5-day-old cultures. When a freshly grown culture of E.
coli MG1655 cells was inoculated into the conditioned medium obtained from WT, HF and M cells, an increase in the CFU (3 × 10³ CFU/ml) was observed in the medium collected from WT compared to the other two (Figure 5B and C). To understand the basis for the enhanced growth, the nature of the contents of the conditioned medium was analysed. Upon treatment with DNase I and RNase A, a substantial increase in the CFU was observed in the case of WT (Supplementary Figure S9). Increased protein content (determined by SDS-PAGE and protein estimation) in the conditioned medium also supported the growth. Heat treatment of the medium curtailed the growth of the bacteria (Supplementary Figure S9). We interpret these results as follows: REase-mediated apoptotic-like death results in the release of DNA, RNA, proteins and other essential cellular components. This would facilitate the survival of the other starving cells, as they can take up these nutrients. Thus, the rest of the bacterial population would benefit from the REase-induced death of some cells.

DISCUSSION

Normally, REases as components of R-M systems provide immunity to the modified host genome, destroying incoming DNA (34). However, the occurrence of a large number of such enzymes and the inherent promiscuity exhibited by many of them (35) could be suggestive of their participation in other cellular functions (9). Here, we show that REases can cause bacterial cell death. DNA fragmentation catalysed by the enzyme initiates a cascade of reactions and events ultimately leading to cellular apoptosis. DNA damage induces the expression of SOS and Edin genes, which subsequently decrease the complex I and II activities of the respiratory chain, leading to membrane depolarization and generation of hydroxyl radicals causing cellular destruction. The features of the REase-mediated PCD match the ALD pathway (31). Importantly, the nutrients released from the dying cells provide growth benefit to the remaining members of the community (Figure 5).

[Figure 4 caption fragment: (A) …, oppB and uvrA). idnT transcript levels were used as an endogenous control. Relative mRNA levels of genes in WT and HF were calculated after normalizing with mRNA levels from M cells. The experiments were carried out three times independently and error bars represent SD. Statistical significance was assessed by t-test; (*) P ≤ 0.05. (B) Activities of complex I and II were determined in WT, HF and M. Cells were grown to different growth phases (exponential phase: 6 h; stationary phase: 16 h). Enzymatic activities were determined as described in Materials and Methods. Total activity (NADH dehydrogenase I) was determined by following the oxidation of NADH at 340 nm. Complex II (succinate dehydrogenase) activity was determined by measuring the reduction of dichlorophenolindophenol (DCPIP), monitored at 600 nm. These data represent the results of three independent experiments and error bars indicate SD. Statistical significance was calculated using two-way ANOVA coupled with Bonferroni's post-test and is represented by an asterisk; P < 0.001 (***) and P > 0.05 (not significant, ns).]

Thus, the death of some members of the population appears to allow the survival of a subpopulation that may eventually form a nucleus for renewed growth when growth conditions become favourable with the availability of nutrients. While it is clear that PCD benefits multicellular organisms (36), the selective advantage of promoting cell death is less apparent in bacteria. However, several studies have emerged that demonstrate the benefit of induced cell death for the community in bacterial populations (5,27). During B. subtilis sporulation, the mother cell undergoes autolysis through PCD to release nutrients for spore maturation (27). The M. xanthus developmental paradigm is yet another well-studied PCD.
During the early steps of fruiting body formation and under nutrient depletion, a population of cells undergoes altruistic cell lysis to release contents which feed the remaining cells that differentiate into myxospores. Similarly, mazEF-mediated PCD appears to benefit the bacterial community, as it is involved in the development of biofilms (37) and the release of virulence factors (38). Likewise, the REase-induced cell death described here seems to release nutrients to benefit the remaining members of the population. PCD at late stationary phase thus appears to provide a survival advantage for the population at large. Exponentially growing cells undergo a major transition when they enter stationary phase. In stationary phase, not only are nutrients limited, but the metabolic status of the cell is also altered. Alterations in gene expression and nucleoid-associated protein (NAP) profiles, an increase in Mn2+ concentration, a decrease in polyamines and possibly changes in several other factors accompany the transition from exponential phase to stationary phase (39–42). The concentration of polyamines and NAPs that protect the genome in exponentially growing cells is significantly reduced at stationary phase (14,41), rendering the genome more exposed and vulnerable to attack by its own nuclease. Reduced supercoiling and a decompacted nucleoid are characteristic of stationary-phase cells (41). The increased expression of the REase in the stationary phase (Figures 2C-D and 3A-B) and its enhanced promiscuous activity in the presence of Mn2+ (20,22) would lead to a higher susceptibility of the genome to DNA damage. Indeed, we find an increased Mn2+ concentration in stationary-phase cells and high promiscuous activity of the enzyme in the presence of Mn2+ (Supplementary Figure S10).
Thus, changes in the intracellular environment, chromosomal dynamics and the REase/MTase ratio, together with the increased promiscuity of the enzyme, would lead to enhanced REase-induced cell death (Figure 6). The genomes of E. coli and K. pneumoniae have about 500 and 900 canonical (GGTACC) sites, respectively, for R.KpnI (43) (www.rebase.neb.com), and a much larger number of non-canonical sites. Extensive cleavage at the non-canonical sites would result in massive genome destruction, beyond the capacity of the SOS mechanism to repair and resurrect the cell, leading to ALD and consequent cell death. Such extensive intracellular damage would result in activation of the rpoE regulon (Supplementary Figure S6) and the expression of RpoE-regulated genes involved in cell death, followed by lysis (26). This process would liberate nutrients, which can be taken up by neighbouring, desperately starving cells, thereby enhancing the survival of the population as a whole. Prolonging the life span of the population gives these cells an opportunity to thrive if nutrient availability is restored. Thus, it appears that the promiscuity built into a site-specific enzyme is a cellular design to induce apoptosis when needed. The PCDs described so far have similarities and differences. Orchestrated, temporally regulated gene expression coupled to specific signalling cascades is a hallmark of sporulation events occurring under starvation stress (27). In contrast, mazEF-mediated cell death occurs only during exponential phase and not during stationary phase. An extracellular death factor, a small peptide, was shown to enhance mazEF-mediated PCD in exponential phase (44), but this has not been substantiated further. Our extensive experiments aimed at identifying such apoptosis signals have also not led to the identification of any candidate molecules.
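The canonical-versus-non-canonical site arithmetic above can be made concrete with a toy sequence scan: count exact GGTACC matches and, as a crude model of promiscuous cleavage, windows within one mismatch of the site. The sequence and the one-mismatch model below are illustrative assumptions, not R.KpnI's measured specificity:

```python
def find_sites(seq, site="GGTACC", max_mismatches=0):
    """Return start positions in `seq` matching `site` with at most
    `max_mismatches` mismatched bases. max_mismatches=0 gives canonical
    sites; >=1 crudely models promiscuous ('star') cleavage sites."""
    hits = []
    for i in range(len(seq) - len(site) + 1):
        window = seq[i:i + len(site)]
        mismatches = sum(1 for a, b in zip(window, site) if a != b)
        if mismatches <= max_mismatches:
            hits.append(i)
    return hits

# Toy sequence with two canonical sites, at offsets 0 and 8.
canonical = find_sites("GGTACCAAGGTACC")
# A single-base variant (GGTACG) is invisible at 0 mismatches but
# appears once when one mismatch is tolerated.
near_site = find_sites("GGTACG", max_mismatches=1)
```

Scanning one strand suffices here because GGTACC is palindromic (it equals its own reverse complement), so every site is counted once regardless of strand.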
In hindsight, it seems unlikely that such a signal would be required in the present scenario, given that the changes in the intracellular environment and internal dynamics lead to the exposure of the target (genome) to the nuclease. We surmise that REase-induced apoptosis could be an intracellular built-in program. Stochastic events in gene expression in a bacterial cell are well established (45). An exciting possibility is that REase-mediated cell death is not confined to one R-M system. Although REases were originally considered to be exquisitely site-specific, it is now evident that a large number of them are promiscuous in their DNA cleavage characteristics (14,22,35,46). The 'star' activity exhibited by a number of REases is due to their promiscuous cleavage activity (21). From the present studies with the HF enzyme and the studies with EcoRV (47), it is clear that even REases confined to site-specific cleavage could end up damaging the self DNA. However, DNA damage inflicted by enzymes having 'star' activity, such as EcoRI (47), would be more severe, as in the case of KpnI. Moreover, the experiments described with yet another R-M system substantiate the idea that REases can cause cell death (Supplementary Figure S2). Induction of the SOS response has been examined in the case of the MboII R-M system (48), which exhibits star activity. It is likely that MboII also induces PCD. All these observations point to the possibility that many REases can engage in more robust PCD under conditions requiring such behavior to support the survival of the population. Another factor which could contribute to cell death is an imbalance in the intracellular concentrations of REase and MTase. In spite of being organized together and in close proximity to each other, REase and MTase are often subjected to differential regulation at transcriptional and post-transcriptional stages, including protein turnover (49).
In many cases, a transcription regulator encoded by a C gene controls the expression of the R and M genes in opposite fashion (50). Further, the expression of many REases may change significantly during the stationary phase, as seen in the present study. REase-mediated apoptosis would thus be yet another aspect of the altruistic and social behavior of eubacteria. This new intracellular role also appears to be a cellular design to provide nutrients for cells under starvation stress. Given the large repertoire, diversity and promiscuity seen in REases, the phenomenon is likely to be widespread, adding another facet to bacterial cell biology.
Surgical left atrial appendage closure: Success rate and its relationship with cerebrovascular accident

Background: Several surgical procedures, such as excision or exclusion, are recommended for the closure of the left atrial appendage (LAA). This study was conducted with the aim of evaluating the success rate of different surgical techniques for LAA closure, their respective complications, and the rate of post-surgical cerebrovascular accident (CVA).

Methods: This retrospective study included 150 consecutive patients who underwent LAA closure, most commonly after mitral valve surgery, examined within 3 to 6 months after surgery. An expert echocardiographic fellow collected the data on patients' surgical LAA closure methods and history of CVA, types of prosthetic valves, mortality, and bleeding.

Results: The failure rate for complete LAA closure was 36.7% (55 patients) in our study. The greatest success rate of complete LAA closure was seen with the purse-string method (75.5%), followed by the resection method (71.4%), while the lowest success rate (≈ 33.3%) was observed with the ligation method. A significant relationship was observed between clots on the surface of a metallic valve and postoperative CVA (P = 0.001; likelihood ratio: 32). In multivariate analysis, there was also no statistically significant relationship between partial LAA closure and the incidence of post-surgical CVA (P > 0.050).

Conclusion: We observed the highest success rate of complete LAA closure with the purse-string method, followed by the resection method. Interestingly, our results showed that despite the higher rate of residual LAA clot in cases of partial LAA closure, the occurrence of post-surgical CVA was mostly related to the presence of clots on the surface of metallic mitral prostheses rather than to partial LAA closure.

Introduction

Atrial fibrillation (AF) is the most common type of cardiac arrhythmia and is a major cause of morbidity and mortality.
1,2 AF is associated with an increased risk of stroke and leads to more severe strokes than other causes of ischemic attack. 3,4 The left atrial appendage (LAA), the cardiogenic source of clot embolism in patients with AF, is now of great interest to researchers, because available evidence indicates that up to 90% of left atrial (LA) clots in non-valvular AF originate from the LAA. 5 Generally, there are two methods available for reducing the complications of AF: pharmaceutical (i.e., anticoagulants or substitutes) and non-pharmacological (removal or exclusion of the LAA from the atrial blood flow cycle). 6 For the first time, in 1949, Madden resected the LAA to prevent recurrent arterial embolism in patients with AF and rheumatic mitral valve stenosis. 7 Since then, several studies have reported lower stroke risk after LAA closure, especially in patients with AF. 5,8 Thus, to prevent future stroke, percutaneous and surgical LAA closure has been accorded special attention in patients undergoing open-heart surgery. 5,9,10 In this regard, several surgical procedures are recommended for LAA closure, including excision (LAA removal with scissors or amputating stapling devices) and exclusion (LAA closure with running sutures, external ligation, stapling, or the purse-string method). In some cases, for such reasons as inadequate mechanical closure or the continuous movement of the myocardium, the closure of the LAA may prove incomplete, or the LAA might reopen in some instances, all of which are considered incomplete LAA closure. 11 In these situations, a continuous Doppler flow is maintained between the LA and its adjacent appendage. The presence of even a small residual stump after excision could increase the risk of thrombus formation, which is a potential source of embolization.
12 The remaining stump is also regarded as unsuccessful closure of the appendage, even if it is free of thrombus or its flow and velocity are low. 13 In about half of the cases with incomplete LAA closure (41%), LAA thrombi are identified. In some studies, an incomplete LAA closure rate of up to about 60% has been reported. Among the various LAA closure methods, the reported success rate of surgical LAA excision is about 73%, and it has been recognized as the most effective method. 9 Some investigations have reported approximate incomplete LAA closure rates of 60% with suture exclusion and 58% with stapler exclusion, and in partially closed LAAs, thrombus formation is observed in 46% and 67% of cases with suture and stapler exclusion, respectively. 10,11 It is, therefore, not reasonable to discontinue anticoagulants in patients with incomplete LAA closure. Notwithstanding the availability of simple surgical methods for LAA closure, there is still insufficient assurance regarding their efficacy and reliability. What further compounds the situation is the current paucity of evidence on the success rates of LAA closure via different surgical techniques. 11 Accordingly, in the present study, we sought to evaluate the success rate of different surgical procedures for LAA closure and their respective complications, together with the rate of post-surgical cerebrovascular accident (CVA).

Materials and Methods

The current retrospective study was conducted on 150 individuals who underwent LAA closure (most of whom underwent mitral valve surgery) followed by transesophageal echocardiography (TEE) within 3 to 6 months after surgery, between 2017 and 2018. The requested data were gathered retrospectively from the patients' files onto a checklist.
The information collected included the patients' demographic characteristics; LAA velocity before and after surgery; the LAA closure/removal surgical method (resection, purse-string, ligation, or sutures); heart rhythm before and after surgery; LA area; left ventricular ejection fraction; history of transient ischemic attack or CVA before and after surgery; LA clots; and LAA smoke/clots before and after surgery. In our center, the main methods of LAA closure are purse-string ligation, suture ligation, and surgical resection. In the present study, stapling for LAA closure was not used owing to its unavailability and high cost. The surgical LAA closure methods drawn upon in this investigation are illustrated schematically in figure 1.
http://cjn.tums.ac.ir 07 October
Although there are no discrete criteria for incomplete LAA closure, many investigators have relied upon patent LAAs, persistent flow into the LAA after its surgical exclusion as detected on TEE, and the presence of residual stumps. 13 These findings of incomplete LAA closure were also included in our assessment. In the patients with valvular surgery, the type of valve (mechanical or bioprosthetic) and clots on it were recorded. Additionally, post-surgical complications, including mortality and bleeding, were incorporated in the data analysis. Statistical analyses were performed in SPSS (version 18, SPSS Inc., Chicago, IL, USA). Data were expressed as mean ± standard deviation for interval variables and count (%) for categorical variables. All variables were tested for normal distribution using the Kolmogorov-Smirnov test. Categorical values were compared using the chi-square test or Fisher's exact test. Comparisons between subgroups were performed using the Mann-Whitney U test (two groups) and the Kruskal-Wallis test or ANOVA (multiple groups) for quantitative variables. P values < 0.05 were considered statistically significant.
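The statistical workflow described above can be sketched in a few lines. The data below are synthetic placeholders (not the study's records), and the 3x2 contingency counts are purely hypothetical; the function choices simply mirror the tests named in the text.

```python
# Illustrative sketch of the described pipeline: Kolmogorov-Smirnov normality
# check, chi-square for categorical data, Mann-Whitney U and Kruskal-Wallis
# for quantitative comparisons.  All numbers here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic quantitative variable (e.g., LAA velocity) in three groups.
groups = [rng.normal(loc=mu, scale=5.0, size=40) for mu in (35, 40, 38)]

# K-S normality check of the pooled sample (parameters estimated from data).
pooled = np.concatenate(groups)
ks_stat, ks_p = stats.kstest(pooled, "norm", args=(pooled.mean(), pooled.std()))

# Categorical comparison: closure method vs. complete/partial closure
# (hypothetical counts, one row per method).
table = np.array([[56, 18],
                  [26, 27],
                  [10, 4]])
chi2, chi2_p, dof, _ = stats.chi2_contingency(table)

# Two-group and multi-group quantitative comparisons.
u_stat, u_p = stats.mannwhitneyu(groups[0], groups[1])
h_stat, h_p = stats.kruskal(*groups)

print(f"KS p={ks_p:.3f}  chi2 p={chi2_p:.3f}  MWU p={u_p:.3f}  KW p={h_p:.3f}")
```

SPSS and scipy implement the same tests, so a sketch like this is a convenient way to sanity-check reported p-values when the raw counts are available.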
Patients' characteristics: The present study was performed on 150 cases, comprising 40 (26.7%) men and 110 (73.3%) women at a mean age of 55 years (range: 19-79 years). Prior to surgery, 53 (36.0%) patients had sinus rhythm, which decreased to 48 (32.0%) after surgery. The mean values of left ventricular systolic function, body surface area, and body mass index (BMI) were roughly equal across the applied LAA closure approaches. Left atrial appendage closure method and the related success rate: LAA closure was performed through the purse-string method in 74 (49.3%) cases, the suture method in 53 (35.3%), the resection method in 14 (9.3%), and the ligation method in 9 (6.0%). In total, 63% of the LAAs (95 patients) were completely closed according to postoperative TEE. The LAA was partially closed in 55 patients, in only 23 of whom LAA velocity was recorded; in these individuals, there was no significant difference in the mean velocities (Table 1). The frequency of LAA closure via the different methods is presented in table 2. The greatest success rate of complete LAA closure in our center was seen with the purse-string method (75.5%), followed by the resection method (71.4%), while the lowest success rate (≈ 33.3%) for complete LAA closure was observed with the ligation method, which thus resulted in a 66.7% rate of partial LAA closure. There was a significant relationship between the surgical type of LAA closure and the success or failure of complete LAA closure (P = 0.030). Left atrial appendage closure and related complications: The rate of clot on the surface of the prosthetic valve was higher in cases with partial LAA closure (P = 0.008). A higher rate of recent stroke was also seen in this patient population, albeit not significantly (P = 0.400). Moreover, 20 (13.3%) patients had a history of stroke before surgery, and 15 (10.0%) reported postoperative stroke.
Among the 55 (36.7%) patients with partially closed LAAs, 9 (16.3%) cases presented with CVA, whereas 6 (6.3%) out of the 95 (63.3%) patients with complete LAA closure experienced a stroke, a statistically significant difference (P = 0.047). Postoperative bleeding and mortality were seen in 6% and 2% of the patients, respectively. The lowest rates of bleeding (P = 0.530) and death (P = 0.520) were reported with the ligation method, albeit not significantly. The occurrence of stroke and postoperative AF rhythm did not differ across the various methods of LAA closure (P = 0.100 and P = 0.720, respectively). Complications of post-surgical LAA closure are presented in table 4. Before surgery, LAA clots and smoke patterns were detected in 39 (26%) and 86 (57.3%) cases, respectively. Furthermore, after partial surgical closure, LAA clots (P < 0.001) and smoke patterns (P = 0.001) were detected in 6 (4.0%) and 10 (6.7%) cases, respectively, which is much lower than the rates reported in other studies. LAA clots were seen in 7.5% and 2.7% of the patients who underwent LAA closure via the suture method and the purse-string method, respectively. This rate was 0% in the other methods of LAA closure. LAA smoke patterns were reported in 9.4% and 6.8% of the cases of partial LAA closure via the suture and purse-string methods, respectively. Nonetheless, no smoke pattern was reported in ligation and resection cases. Clots on the valve surface were observed in 21 (14.0%) of the patients who underwent simultaneous LAA closure and mitral valve prosthesis implantation. Moreover, recent stroke was seen in 12 (73.3%) of these patients. As seen in table 5, a significant relationship was observed between clots on the valvular surface and the occurrence of postoperative CVA (P = 0.001; likelihood ratio: 32). None of the 6 patients with LAA clots reported a stroke. In other words, none of the 15 cases of post-surgical stroke were attributable to LAA clots.
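As a quick arithmetic check, the 2x2 comparison just reported (CVA in 9 of 55 partially closed vs. 6 of 95 completely closed LAAs) can be run through a hand-rolled Pearson chi-square using only the standard library:

```python
# Minimal chi-square check (no continuity correction) of the reported 2x2
# comparison: post-surgical CVA in partially vs. completely closed LAAs.
# The p-value for 1 degree of freedom uses the complementary error function.
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value (df=1) for [[a, b], [c, d]]."""
    n = a + b + c + d
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    chi2 = sum((obs - exp) ** 2 / exp
               for obs, exp in zip((a, b, c, d), expected))
    p = math.erfc(math.sqrt(chi2 / 2.0))  # survival function of chi2(df=1)
    return chi2, p

# CVA: 9 of 55 with partial closure vs. 6 of 95 with complete closure.
chi2, p = chi2_2x2(9, 55 - 9, 6, 95 - 6)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

Without the Yates continuity correction the resulting p-value comes out close to the reported P = 0.047; statistical packages report several variants of this test, so the published figure may differ slightly.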
Furthermore, only 2 out of the 10 (20.0%) patients with LAA smoke patterns reported a post-surgical stroke, constituting 13.3% of all the cases presenting with stroke. It is of note that 1 of these 2 cases also had clots on their metallic valve. In the patients affected by post-surgical stroke, there was a slightly higher rate of AF rhythm; however, the incidence of stroke did not significantly correlate with the patients' underlying rhythm (sinus rhythm vs. AF), the presence or absence of clots or smoke patterns in the LAA, or the mean left ventricular systolic function. Mild, moderate, and severe LA enlargement was seen in 44.7%, 35.3%, and 16.0% of the patients, respectively. Among the patients presenting with stroke, 6.7% had a normal-sized LA, 60.0% had mild LA enlargement, 26.1% had moderate LA enlargement, and 6.7% had severe LA enlargement. Consequently, the size of the LA did not correlate with the incidence of stroke after open left-heart surgery. The highest rate of post-surgical stroke was observed in the patients who underwent LAA resection (28.6%). Table 5 depicts the association between recent stroke and echocardiographic parameters.
Discussion
The main finding of our investigation was that, despite the high rate of incomplete LAA closure, the rate of stroke was not related to smoke patterns or even clots in a partially closed LAA. Moreover, the occurrence of stroke was more frequently associated with clots on the patients' valve surface than with the presence of LAA clots or smoke patterns, or even residual stumps. Furthermore, the incidence of stroke was not related to the patients' background heart rhythm or LA size.
Therefore, clots on the surface of cardiac valves appear to be the strongest risk factor for the occurrence of stroke following surgical LAA closure. The absence of LAA clots in patients with CVA may be due to the formation of microthrombosis or embolization after clot formation, or to the presence of small clots that current TEE resolution cannot visualize. Thus, in cases of incomplete LAA closure following surgical LAA closure in conjunction with maze surgery or mitral valve surgery, the incompleteness of LAA closure may be less daunting than the presence of prosthetic valve clots as regards post-surgical stroke. In this setting, appropriate anticoagulation therapy is of greater significance than the patency or closure of the LAA. 14 Complete surgical LAA closure is a highly operator-dependent and challenging procedure. In our investigation, the rate of incomplete LAA closure was 37%, which is approximately similar to the previously reported rate of 35% in the literature. 11,14,15 Katz et al. reported a 36% rate of incomplete LAA closure in their cohort study. 15 In the Left Atrial Appendage Occlusion Study (LAAOS), postoperative TEE elucidated a 34% rate of incomplete LAA closure in individuals who underwent either stapling or suture ligation. 16 Likewise, in our survey, the highest rate of incomplete LAA closure was seen with the ligation method (only 33.3% complete closure). The explanation might be the application of the shallow suture bites recommended to avoid closure of the adjacent circumflex coronary artery, or failure to extend the running sutures to the distal edge of the LAA orifice. 13 The highest success rates in our study were seen with the purse-string and resection methods. Suture exclusion through epicardial or endocardial ligation, surgical exclusion or resection, and stapling are the currently used surgical LAA closure methods. 17 The LAA resection method is attractive in that no appendage tissue is left behind.
LAA closure using staplers has been implemented in Europe and the US with excellent outcomes, 18 but it is highly costly and is associated with side effects. The high success rate in our center for LAA closure without the use of staplers implies that LAA closure can be accomplished through far more straightforward methods without incurring high costs or causing complications such as bleeding. In the current study, the rates of clots and smoke in partially closed LAAs were much lower than those reported in other investigations. This might be related to the size of the partially closed LAA orifices or residual stumps. It is worth noting that the majority of our patients underwent surgery for severe mitral stenosis; hence, our lower rate of clots and smoke patterns in partially closed LAAs may have been the consequence of increased atrial blood flow after the reestablishment of the mitral inflow of severely stenotic mitral valves. This explanation needs confirmation through further investigations.
Conclusion
Our results indicate that the greatest success rate of complete LAA closure was seen with the purse-string method, followed by the resection method. Indeed, the occurrence of post-surgical CVA is related to the presence of clots on the surface of metallic mitral prostheses rather than to partial LAA closure. The purse-string method is the method of choice for surgical LAA closure owing to its surgeon-friendly nature and high rate of complete closure. However, further studies are required in this regard.
Limitations: One of the limitations of retrospective studies is that the study population cannot be recalled to undergo a semi-invasive procedure like TEE without evident indications. This is one of the inherent ethical constraints of retrospective studies and can limit the assessment of the success rate of LAA closure.
Beyond-Constant-Mass-Approximation Magnetic Catalysis in the Gauge Higgs-Yukawa Model
Beyond-constant-mass approximation solutions for magnetically catalyzed fermion and scalar masses are found in a gauge Higgs-Yukawa theory in the presence of a constant magnetic field. The obtained fermion masses are several orders of magnitude larger than those found in the absence of Yukawa interactions. The masses obtained within the beyond-constant-mass approximation exactly reduce to the results of the constant-mass approach when the condition $\nu \ln (\frac{1}{\hat{m}^{2}})\ll 1$ is satisfied. Possible applications to early universe physics and condensed matter are discussed.
I. INTRODUCTION
In the last few years the magnetic catalysis (MC) of chiral symmetry breaking [1]-[3] has been the focus of attention of many works on non-perturbative effects of magnetic fields [1]-[21]. The phenomenon consists in the dynamical generation of a fermion condensate (and consequently of a fermion mass) when the fermion interactions occur in the presence of an external constant magnetic field. A most significant feature of the MC is that it requires no critical value of the fermion coupling for the condensate to be generated; that is, the symmetry breaking takes place at the weakest attractive interaction. Physically, this is because the magnetic field forces the low-energy fermions to reside basically in their lowest Landau level (LLL), while the higher-energy fermions effectively decouple [8]. This, in turn, yields a dimensional reduction of the infrared fermion dynamics. The dimensional reduction is reflected in an effective strengthening of the fermion interactions, leading to dynamical symmetry breaking through the generation of a fermion condensate. A particularly important question in this context is how the MC is affected by the introduction of fermion-scalar interactions.
Fermion-scalar interactions are an essential element of the unified theories of fundamental forces. As is well known, they are expected to be responsible for the fermion masses appearing through the spontaneous breaking of the electroweak symmetry. Fermion-scalar interactions are also relevant in condensed matter physics, where the complexity of strongly correlated many-body systems sometimes calls for a description in terms of simpler, phenomenological theories that contain interacting scalars in addition to fermions (see e.g. [22]). In Refs. [12]-[13] two of us studied the realization of magnetic catalysis in a (3+1)-dimensional Higgs-Yukawa (HY) model, showing that the magnetic-field-induced fermion mass is enhanced by fermion-scalar interactions. As we will show below, this enhancement is also found within a more accurate approximation for a wide range of couplings. This result might find applications in early universe transitions, as well as in condensed matter physics. In [12]-[14] some applications of the MC to the early universe were briefly considered. They were motivated by many astrophysical observations of galactic and intergalactic magnetic fields indicating the existence of seed fields that originated from large primordial magnetic fields (for a recent review on cosmic magnetic fields see [23]). If the primordial magnetic fields in the early universe were large compared to the values of the fermion masses generated close to the phase-transition point through the usual mechanism of spontaneous symmetry breaking, the fermions would appear approximately massless. Under these circumstances, it is important to investigate whether the primordial magnetic fields could contribute to the masses of the fermions through MC and hence influence the phenomenology of the early universe [13].
On the other hand, to discuss applications of MC in the context of a HY theory to condensed matter, we need, besides interactions modelled by fermion-scalar terms, a physical system that, despite being non-relativistic, can be described under certain conditions by a "relativistic" Hamiltonian. We will see below that these conditions are indeed present in the physics of high-$T_c$ superconductors. High-$T_c$ superconductors, which are characterized by the existence of nodal points where the order parameter (gap function) vanishes, provide a practical realization of a "relativistic" system in condensed matter physics. This is so because the low-energy spectrum of the nodal quasiparticles is linear; hence, the quasiparticle excitations are described by an anisotropic Dirac Hamiltonian [24]. In Ref. [22] a quantum-critical phase transition to a new superconducting state, characterized by the appearance of a secondary pairing at some doping level, was proposed to explain recent measurements [25] of an anomalously large inelastic scattering of quasiparticles near the gap nodes of a superconductor. The observed secondary pairing transition made the nodal quasiparticles fully gapped. Based on the symmetries of the superconductor, the authors of Ref. [22] classified a set of fermion-scalar interactions that in principle could be in agreement with the experimental observations, and then performed a perturbative renormalization-group analysis of each model to determine the possible existence of a quantum-critical point. In Ref. [26], expanding on the ideas of [22], the existence of a quantum-critical point was established directly in a (2+1)-dimensional HY theory by means of a non-perturbative approach, which made it possible to obtain quantitative predictions for the corresponding quantum-critical behavior. The gap generation (fermion mass) was associated in [26] with the breaking of a discrete chiral symmetry.
We would like to underline that the breaking of the chiral symmetry in [26] was found to occur when the Yukawa coupling (assumed to be related to the doping level) reached a critical value; that is, the symmetry breaking was not associated with the phenomenon of MC, as no external magnetic field was introduced in the analysis. However, as recently observed [27] by measuring the splitting of the conductance peak that characterizes the nodes of high-$T_c$ superconductors, the development of a secondary quasiparticle gap may be triggered not only by the doping level, but also by an applied magnetic field. Could the secondary gap triggered by the magnetic field be the consequence of MC occurring within the superconductor? We believe that the results we are going to derive below strongly indicate that the answer is yes, if, as argued in [22] and [26], the HY theory is the model describing the appearance of the secondary gap. Nevertheless, to match the experimental observations we would need to particularize the analysis done in the present paper to the (2+1)-dimensional case, and adjust the physical values of the couplings to those characteristic of the superconductor. As already mentioned, in Ref. [13] the phenomenon of MC in a (3+1)-dimensional Abelian gauge theory with HY interactions was studied. In that work it was shown that the non-perturbative solution of the minimum equations for the composite-operator effective action leads not only to a magnetically catalyzed fermion dynamical mass, but also to a nonzero scalar vev $\phi_c$ and, consequently, to a nonzero scalar mass. In other words, thanks to the magnetic field, a scalar-field minimum solution is generated by non-perturbative radiative corrections. We should underline, though, that the fermion and scalar masses of Ref. [13] were obtained within a simplified approximation known in the literature as the constant-mass approximation (CMA).
In general, to find the dynamical mass (which is nothing but the part of the fermion self-energy proportional to the identity matrix) one has to solve a non-perturbative gap equation (i.e. the Schwinger-Dyson equation for the full fermion propagator). This means solving a non-linear, implicit integral equation for the fermion self-energy, which is a momentum-dependent function. Most authors approach such a mathematically complicated problem with the help of the rough CMA approach. It consists in neglecting the momentum dependence of the self-energy in the gap equation. This is done by substituting the self-energy function in the gap equation by its value at zero momentum, that is, by the infrared mass. There is no general principle that guarantees the validity of this approximation over the whole range of physical couplings. For theories with several couplings, due to the richness of the parameter space, the reliability of the CMA is questionable and should be investigated in detail. In the case of the HY model, aside from the multiple-coupling problem, one has to deal with a system of non-linear, coupled integral equations, one for the fermion dynamical mass and another for the scalar vev [13]. We cannot disregard in this situation the possibility of regions of these parameters where the CMA is reliable and regions where it is not. In that case one has to turn to a more accurate approximation in which the momentum dependence of the self-energy is taken into account when solving the gap equation. This more accurate approximation is known as the beyond-constant-mass approximation (BCMA). For theories like QED containing only one coupling constant, the CMA is known to be appropriate, since going beyond it does not produce qualitatively different results. This has been explicitly shown for (3+1)-dimensional [2] and (2+1)-dimensional QED [28], and is independently corroborated by our calculations below. In Ref.
[9] the BCMA mass solution of (3+1)-dimensional QED was found to agree with the CMA mass obtained from the improved-ladder gap equation. This result was later corroborated by numerical calculations in [29]. Considering that different physical applications of the MC in models with fermion-scalar interactions would require different values of the coupling constants, and in particular, given the relevance that the Abelian gauge Higgs-Yukawa theory may have for condensed matter and other field theory applications, it is important to perform a BCMA investigation of this model in all regions of the parameter space and find out whether or not it significantly differs from the CMA results. A main goal of the present paper is to carry out such a study. By going beyond the CMA, we will determine the region of Yukawa and scalar self-interaction couplings where the CMA is valid, and will obtain the numerical BCMA solutions for the fermion and scalar dynamical masses in the complete physically meaningful parameter region. As we will see below, the CMA results for the Abelian gauge Higgs-Yukawa theory are mostly reliable in the available parameter space. An important finding is that the (BCMA-found) mass values are many orders of magnitude larger than those obtained in the absence of fermion-scalar interactions, corroborating, within this more accurate approximation, the enhancement of the dynamical mass by the Yukawa term. The paper is organized as follows: In Section II we derive the non-linear integral equations for the fermion self-energy (gap equation) and the scalar vev in a gauge Higgs-Yukawa theory. The integral gap equation is then converted into a second-order differential equation with boundary conditions. In Section III, this boundary-value problem is analytically solved, leading to the self-energy as a function of the momentum and the infrared fermion mass.
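To make the CMA/BCMA distinction concrete, here is a toy one-dimensional gap equation (not the paper's equations; the kernel, coupling, and cutoff are invented purely for illustration) solved both ways: once with the self-energy frozen at its infrared value, and once iterating the full momentum-dependent function to a fixed point.

```python
# Toy gap equation  Sigma(x) = lam * Int_0^1 dk Sigma(k) / [(1+x+k)(k^2+Sigma^2)]
# CMA:  replace Sigma(k) by m = Sigma(0) and solve a transcendental equation.
# BCMA: keep Sigma(k) momentum-dependent and iterate on a grid.
import numpy as np

lam = 0.05                                  # illustrative coupling
ks = np.linspace(1e-4, 1.0, 2001)           # momentum grid (avoid k = 0)

def trap(y, x):
    # trapezoidal rule (hand-rolled to avoid NumPy-version differences)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def kernel(x):
    return 1.0 / (1.0 + x + ks)

# --- CMA: 1 = lam * Int dk / [(1+k)(k^2+m^2)], solved by bisection ---
def cma_residual(m):
    return lam * trap(kernel(0.0) / (ks**2 + m**2), ks) - 1.0

lo, hi = 1e-6, 1.0                          # residual is decreasing in m
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if cma_residual(mid) > 0:
        lo = mid
    else:
        hi = mid
m_cma = 0.5 * (lo + hi)

# --- BCMA: damped fixed-point iteration for the full function Sigma(x) ---
sigma = np.full_like(ks, 0.1)
for _ in range(300):
    integrand = sigma / (ks**2 + sigma**2)
    new = lam * np.array([trap(kernel(x) * integrand, ks) for x in ks])
    if np.max(np.abs(new - sigma)) < 1e-12:
        sigma = new
        break
    sigma = 0.5 * (sigma + new)

print(f"CMA:  m = {m_cma:.4f}")
print(f"BCMA: Sigma(0) = {sigma[0]:.4f}, Sigma(1) = {sigma[-1]:.4f}")
```

The toy kernel diverges logarithmically as the infrared mass goes to zero, so a nontrivial solution exists at arbitrarily weak coupling, mimicking the catalysis mechanism; comparing Sigma(0) with the CMA mass shows how strongly the momentum dependence matters for a given kernel.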
Using the self-energy solution and the equation for the scalar minimum, we arrive at two coupled transcendental equations depending on the infrared dynamical mass and the scalar vev. These equations are numerically solved, and the results are used to determine the region of reliability of the CMA and to compare the mass values obtained in the CMA and BCMA approaches. We end Section III discussing the solution of the gap equation at zero Yukawa coupling, and showing that it leads to the same result found in Ref. [2] for (3+1)-dimensional QED. In Section IV, we state our concluding remarks and reconsider the question of the relevance of the magnetic catalysis in the electroweak phase transition using the BCMA results.
II. INTEGRAL EQUATIONS
Let us consider the following Lagrangian density that describes a gauge Higgs-Yukawa model with a fermion field coupled to scalar and electromagnetic fields. The scalar field is electrically neutral, but self-interacting. The Lagrangian density (1) has a U(1) gauge symmetry, a global fermion-number symmetry, and a discrete chiral symmetry (4). Notice that the quadratic scalar term has the correct sign of a mass term, so no vacuum expectation value of the scalar field exists at tree level. In the course of our calculations we will take $\mu \to 0$ to search for a dynamically induced mass. The discrete symmetry (4) forbids a mass for the fermions to all orders in perturbation theory. Nevertheless, this symmetry can be dynamically broken through the non-perturbative generation of a composite field (fermion-antifermion condensate). Such a fermion condensate would lead to a dynamical fermion mass and to a non-zero vacuum expectation value of the scalar field [13], which in turn would contribute to the scalar mass.
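The Lagrangian (1) and the transformation (4) are referred to only by number in this excerpt, their displayed forms having been lost in extraction. A generic gauge Higgs-Yukawa density consistent with the description above (my reconstruction; the normalization of the $\varphi^4$ term and the sign conventions may differ from the paper's) would read

```latex
\mathcal{L} \;=\; -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu}
 \;+\; \bar{\psi}\, i\gamma^{\mu}\!\left(\partial_{\mu}+ieA_{\mu}\right)\psi
 \;+\; \tfrac{1}{2}\,\partial_{\mu}\varphi\,\partial^{\mu}\varphi
 \;-\; \tfrac{1}{2}\mu^{2}\varphi^{2}
 \;-\; \tfrac{\lambda}{4}\varphi^{4}
 \;-\; \lambda_{y}\,\varphi\,\bar{\psi}\psi ,
\qquad
\psi \to \gamma_{5}\psi , \quad \varphi \to -\varphi .
```

Under $\psi \to \gamma_{5}\psi$ one has $\bar{\psi}\psi \to -\bar{\psi}\psi$, so the Yukawa term $\varphi\bar{\psi}\psi$ is invariant when combined with $\varphi \to -\varphi$, while a bare mass term $m\bar{\psi}\psi$ changes sign and is therefore forbidden; this is the discrete chiral symmetry invoked in the text.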
It is known that, in the case of the non-gauge (3+1)-dimensional Higgs-Yukawa theory, no value exists for a running $\lambda_y$ at which a chiral-symmetry-breaking fermion condensate can be generated. As shown in [13], the situation drastically changes when a magnetic field is introduced. In this case a non-trivial solution exists at the weakest value of $\lambda_y$, and one can show that a fermion condensate (together with a dynamical fermion mass and a scalar vev) is magnetically catalyzed. However, as already mentioned, the solutions in Ref. [13] were found within the CMA, and therefore it is important to investigate their reliability beyond that approximation. Our task hereafter will be to extend the results of Ref. [13] beyond the CMA to find the dynamical mass and the scalar vev for all physically meaningful values of $\lambda_y$ and $\lambda$. For the sake of clarity, we will repeat the outline of the derivations done in Ref. [13] that lead to the coupled set of integral equations (gap and scalar vev equations) that will be the starting point of our new calculations. Let us consider the Lagrangian density (1) in the presence of an external constant magnetic field B (without loss of generality we assume that the magnetic field is directed along the third coordinate axis and that $\mathrm{sgn}(eB) > 0$), which can be introduced by adding the external potential $A^{ext}_{\mu} = (0, 0, eBx_{1}, 0)$ as a shift to the oscillatory gauge field $A_{\mu}$ in Eq. (1). To find the vacuum solutions of this theory we need to solve the extremum equations of the effective action $\Gamma$ for composite operators [30], [31], in which $\langle 0|\bar{\psi}\psi|0\rangle$ is a composite fermion-antifermion field and $\phi_c$ represents the vev of the scalar field. The subindex B indicates that the effective action is considered in the background of the external magnetic field. Equations (5) and (6) are, respectively, the Schwinger-Dyson (SD) equation for the fermion self-energy operator $\Sigma$ (gap equation) and the minimum equation for the vev of the scalar field.
As we are interested in the possibility of a scalar mass induced, through the interactions with the fermions, by a dynamically generated fermion condensate, we will set, as stated above, the bare scalar mass $\mu$ to zero. Notice that, if the minimum solutions of Eqs. (5) and (6) are non-trivial, the discrete chiral symmetry (4) is dynamically broken and both fermions and scalars acquire mass. The loop expansion of the effective action $\Gamma$ for composite operators [30], [31] can be expressed as in Eq. (7). Here C is a constant and $S(\phi_c)$ is the classical action evaluated at the scalar vev $\phi_c$. Non-bar notation indicates free propagators, as is the case for the gauge and scalar propagators. Here $\xi$ is the gauge-fixing parameter and $M^{2} = \frac{\lambda}{2}\phi_{c}^{2}$ denotes the scalar square mass. A dependence on full boson propagators is not included, since we do not expect the gauge field to acquire nonzero expectation values for its composite operator. On the other hand, we are going to explore the possibility of a non-zero vev of the scalar field; hence, a composite-operator solution for the scalar would be a correction of higher order that can be neglected. The bar on the fermion propagator $\bar{G}(x, y)$ means that it is taken full. The full fermion propagator in the presence of a constant magnetic field B can be written as in Eq. (10) [5], [11], [32]-[33], with $\Sigma(\bar{p})$ being the fermion self-energy, $\bar{p} = (p_{0}, 0, -\sqrt{2eBk}, p_{3})$, and k denoting the Landau-level number. Similarly, the free fermion inverse propagator in the presence of B is given by the corresponding tree-level expression. Note that $\lambda_{y}\phi_{c}$ enters as a contribution to the fermion mass due to the shift $\phi \to \phi + \phi_{c}$ in the scalar field done in the classical action to account for a possible non-zero scalar vev. The value of $\phi_{c}$ will be determined self-consistently through Eq. (6). In the above equations, the Ritus $E_{p}$ functions [32]-[33] were introduced.
They form an orthonormal and complete set of matrix functions and provide an alternative to Schwinger's approach to problems of QFT in electromagnetic backgrounds. Ritus' approach was originally developed for spin-1/2 charged particles [32]-[33], and it has recently been extended to the spin-1 charged-particle case [34]. The function $\Gamma_{2}[\bar{G}, \phi_{c}]$ in (7) represents the sum of two- and higher-loop two-particle-irreducible vacuum diagrams with respect to fermion lines. For weakly coupled theories, like the case of Lagrangian (1), one can use the Hartree-Fock approximation, which means retaining only the contributions to $\Gamma_{2}$ that are of lowest order in the coupling constants (i.e. two-loop graphs only), so that it reduces to Eq. (12). As discussed above, the infrared dynamics ($p \ll \sqrt{2eB}$) of a system of interacting fermions in the presence of a magnetic field is mainly governed by the contribution of the LLL [1]-[2]. To obtain an explicit form for Eqs. (5)-(6), we use the propagators (8)-(10) in Eqs. (7) and (12), and take into account that in the background magnetic field the self-energy structure entering the full fermion propagator (10) should be written as [11] $\Sigma(\bar{p}) = Z_{\parallel}(\bar{p})\,\gamma\cdot p_{\parallel} + Z_{\perp}(\bar{p})\,\gamma\cdot p_{\perp} + \widetilde{\Sigma}(\bar{p})$. Here we are using the notation $p_{\parallel} = (p_{0}, p_{3})$ and $p_{\perp} = (p_{1}, p_{2})$ for the momentum components. The wave-function renormalization coefficients $Z_{\parallel}, Z_{\perp}$ are scalar functions of the momentum. Using this structure for $\Sigma$ in the full fermion propagator, evaluating at the LLL (k = 0), and using the solution of the wave-function renormalization, $Z_{\parallel} = 0$, found in Ref. [12], we have that the gap equation (5) and the scalar minimum equation (6) of our theory take the forms (14) and (15), respectively. Dimensionless field-normalized quantities are denoted by $\hat{Q} = \frac{Q}{\sqrt{2eB}}$. Notice that if we set $\lambda_{y} = 0$ in the above equations, Eq.
(14) reduces to the same gap equation found in [2] for (3+1)-dimensional QED, since, in the absence of a Yukawa term, the theory (1) becomes equivalent to a QED theory to which an extra, but decoupled, real scalar field has been added. Changing $\bar{q}$ to polar coordinates $(k, \theta)$ in the above integrals and integrating over the angle, we find Eqs. (16)-(17). The functions $\kappa_{t}(\hat{p}^{2}, x)$ are defined by Eq. (18). To make the calculation more manageable, it is convenient to divide the momentum integration in Eq. (16) into the two regions separated by the dimensionless momentum square $\hat{p}^{2}$. Expanding the kernels $\kappa_{t}(\hat{p}^{2}, x)$ appropriately in each region, we find Eq. (19). Notice that we used Eq. (17) to combine the last two terms of Eq. (16) into the last term of Eq. (19). The analytical solutions of Eqs. (17) and (19) can be explored by first converting the non-linear integral equation (19) into a second-order non-linear differential equation. First, however, we must take into account that the consistency of the LLL approximation requires a momentum cutoff of order $\sqrt{2eB}$ in the momentum integrations; hence the infinite upper limit in all the integrals in $k^{2}$ should be changed to 1. One can easily see, by taking derivatives of Eq. (19) with respect to $x \equiv \hat{p}^{2}$ and combining them conveniently, that the integral equation (19) is equivalent to the second-order differential equation (20). If we now differentiate (19) and evaluate the result at x = 0, we obtain the boundary condition (21), with the accompanying definitions given in Eqs. (22)-(23). Similarly, taking the derivative of (19), multiplying it by $g(x)/g'(x)$, and evaluating at x = 1, we obtain the second independent boundary condition (24). In doing so, we have traded a non-linear integral equation for a non-linear boundary-value problem. Finding the solutions to the coupled set of Eqs. (17) and (20), with boundary conditions (21) and (24), will be the aim of the next section. A.
Beyond-Constant-Mass Analytical Solutions

An analytical expression for the solution $\Sigma(x)$ of (20)-(21) can be found by considering a linearized version of Eqs. (20) and (17), in which the fermion self-energy in the denominators is replaced by its zero-momentum value $\Sigma(0) = m$. The consistency of such a linearization is justified if the self-energy is a rapidly decreasing function of the momentum. We will corroborate at the end of the derivations that follow that this is indeed the case. The gap equation (20) can then be written in a linearized form, while the equation for the scalar minimum takes a corresponding linearized form. From a physical point of view, we expect the masses of both the fermion and scalar fields to be much smaller than the magnetic field that induces them through the formation of a fermion condensate. Therefore, it is reasonable to assume that $m^2 \ll 1$ and $M^2 \ll 1$. At the end of our calculations we must check the consistency of this assumption against the obtained results. Taking into account the asymptotic behaviors of the function $g(x)$ in the two regions, one ends up with a different boundary value problem in each region. The two boundary value problems are defined by the corresponding equations in each region: (1) for $x \ll M^2 \ll 1$, with the boundary condition $x\,\Sigma'(x)\big|_{x=0} = 0$ (30), where $\epsilon = g(1)/g'(1) = 1.477$ and $\alpha = 1/137$ is the fine-structure constant. For most physically interesting applications of the gauge-Yukawa theory, the Yukawa coupling $\lambda_y$ is $\leq 10^{-1}$. For such $\lambda_y$'s, the parameters $\nu = \alpha/2\pi$ and $\bar{\nu} = \alpha/2\pi + \lambda_y^2/16\pi^2$ practically coincide (for $\lambda_y = 10^{-2}$ they already share three significant figures). Thus, we can take $\bar{\nu} \simeq \nu$ in Eq. (31), reducing the problem to a single second-order differential equation. The new problem is then defined by Eq. (29) and the two boundary conditions (30) and (32).
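The claimed numerical coincidence of $\nu$ and $\bar{\nu}$ is easy to check directly; the snippet below evaluates both parameters at the quoted $\lambda_y = 10^{-2}$:

```python
import math

alpha = 1 / 137                      # fine-structure constant
lam_y = 1e-2                         # Yukawa coupling quoted in the text
nu = alpha / (2 * math.pi)           # nu = alpha / (2*pi)
nu_bar = nu + lam_y ** 2 / (16 * math.pi ** 2)

# relative difference ~ 5e-4, so the first three significant figures
# of nu and nu_bar agree, consistent with the statement in the text
rel_diff = (nu_bar - nu) / nu
```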
The solution to this boundary value problem can be written as a combination of hypergeometric functions, Eq. (33) (for properties and formulas of the hypergeometric functions see [35]). Taking into account the boundary condition (30) and the corresponding hypergeometric formula, we obtain $A_2 = 0$. As $m = \Sigma(0)$, it is clear that $A_1 = m$. Therefore the self-energy solution becomes Eq. (35). The second boundary condition (32) gives rise to Eq. (36), which establishes a relation between the fermion dynamical mass $m$ and the scalar vev $\phi_c$. This is an implicit, quite non-trivial equation for $m$: besides the dependence on $m$ in the hypergeometric functions, the scalar vev $\phi_c$ depends on $m$ through Eq. (26). To find the solution to the system formed by (26) and (36), we first note that Eq. (29) can be rewritten in the form (38). Using (33) and (38) in Eq. (26), together with the values of $A_1$ and $A_2$ just found, we obtain an explicit relation. From the asymptotic behavior of the hypergeometric function $F(a, b; c; z)$ for large values of its argument [35], we can show that the hypergeometric factors simplify, so that the corresponding function can be approximated accordingly. Similarly, one obtains a parallel approximation, where the parameter $t = \nu \ln(1/m^2)$. Eqs. (46)-(47) represent the BCMA implicit solution for the fermion and scalar masses catalyzed by the magnetic field. This is as far as we can stretch our analytical calculations for $m^2$ and $M^2$ without introducing any additional approximation. In the following subsections we will perform a numerical analysis of these solutions.

B. Numerical Solutions in the BCMA

Since Eqs. (46)-(47) are highly transcendental, to obtain the explicit dependence of the BCMA fermion and scalar masses on the couplings, we have to resort to numerical methods. Figs. 1 and 2 display logarithmic plots of the numerical solutions of Eqs. (46) and (47) versus the couplings $\lambda_y$ and $\lambda$. From them, one can easily see that the two masses are fully consistent with the initial assumptions $m^2 \ll 1$, $M^2 \ll 1$.
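The large-argument behavior of the hypergeometric function invoked above can be checked numerically. The parameter values below are illustrative (the paper's actual $a$, $b$, $c$ depend on $\nu$ and $m$ and are not reproduced in this extraction); the leading term is the standard $z \to -\infty$ asymptotic of $_2F_1$:

```python
from scipy.special import hyp2f1, gamma

# Leading behavior for z -> -infinity (when Re a < Re b):
#   F(a, b; c; z) ~ [G(c) G(b-a) / (G(b) G(c-a))] * (-z)**(-a)
a, b, c = 0.5, 1.2, 2.0     # illustrative parameters, not the paper's
z = -1.0e4
leading = gamma(c) * gamma(b - a) / (gamma(b) * gamma(c - a)) * (-z) ** (-a)
exact = hyp2f1(a, b, c, z)
rel_err = abs(exact - leading) / abs(exact)
```

At $|z| = 10^4$ the subleading corrections are already at the per-mille level, which is why the approximations leading to Eqs. (46)-(47) are accurate deep in the infrared where $1/m^2$ is huge.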
Only in the region of very large $\lambda_y$ and very small $\lambda$ does the fermion mass become of order one; hence, to be consistent, we should disregard the results in this corner. Anywhere outside this limited section of the parameter space, the results are reliable for both masses. Notice that the fermion mass grows with $\lambda_y$ at any given value of $\lambda$. This in turn implies an enhancement of the fermion mass as compared to its value within QED. While in QED the largest mass was no more than $\sim 10^{-10}\sqrt{2eB}$ [2], here the mass surpasses this value in the majority of the parameter space by at least 5 orders of magnitude. It is because of such a significant enhancement of the dynamically generated mass in the presence of scalars that the magnetic catalysis could play an important role in realistic applications of the HY model. The region of large $\lambda_y$ and large $\lambda$, where the results are quite reliable, is the most interesting for applications to the electroweak theory, since the values of the coupling constants in that section include the value of the scalar self-coupling consistent with current experimental limits for the Higgs mass, as well as the Yukawa coupling of the top quark. To finish this subsection, let us consider the behavior of the self-energy with the momentum. In Fig. 3 we have plotted the self-energy solution (35) as a function of the momentum for fixed values of the couplings. As can be seen, it decreases very quickly with the momentum. This behavior is in good agreement with the linearization used in Eq. (25). It also justifies the ultraviolet cutoff at $\sqrt{2eB}$ that was imposed on the integrals appearing in the gap equation (19) and the scalar minimum (17), since, as seen here, the main contribution to the integrals comes from the deep infrared region. From (50) we see that, since $m^2$ has to be positive, the consistency of the CMA solution requires $\frac{1}{2}\nu^2 \ln^2(1/m^2) < 1$, which is equivalent to $t < \sqrt{2} \approx 1.4$.
Below, we will numerically check that this condition is indeed always satisfied. To compare the BCMA and CMA solutions, we will explore whether there is a condition under which the CMA and BCMA equations reduce to an identical set. To this end, let us assume that $\bar{\nu}\ln(1/m^2) \simeq \nu\ln(1/m^2) \ll 1$. This restriction allows us to simplify Eqs. (46)-(47); the resulting expressions are exactly the same equations found from (49) and (50) after using $t \ll 1$. Thus, in this limiting case, the BCMA reduces to the CMA, so $t \ll 1$ defines a condition of reliability of the CMA. The explicit region of parameter space where the CMA is reliable can be determined from a numerical plot of the ratio between the CMA and BCMA mass-square solutions. To be sure that we are working with consistent masses, we will restrict the couplings to a strip in the $(\lambda_y, \lambda)$ plane, leaving out the corner of Fig. 1, where, as discussed above, the consistency of the approximation breaks down. Figs. 4 and 5 show logarithmic plots of the ratio of CMA over BCMA mass-square results for the fermion and scalar masses, respectively, taken in the region of couplings $10^{-8} < \lambda < 10^{-1}$, $10^{-6} < \lambda_y < 10^{-2}$. Both figures display similar behavior of the ratios, characterized by a discernible region of the parameter space, approximately given by $10^{-4} < \lambda < 10^{-1}$ and $10^{-6} < \lambda_y < 10^{-5}$, where a disagreement between BCMA and CMA results is apparent. However, even in this segment, the BCMA and CMA mass squares differ by at most one order of magnitude. Outside this limited region we find very good agreement between BCMA and CMA results, particularly at large $\lambda_y$, indicating that this is the most reliable region of the CMA solution within this model. The above observations are corroborated by the plots of the BCM- and CM-$t$'s, shown in Figs. 6 and 7, respectively. Both surfaces have similar $t$-values at equal sets of couplings, even when $t \ll 1$ is not satisfied, indicating that, after all, and as already seen in Figs.
4 and 5, the two approximations give rise to very similar mass values. Notice that the larger the $\lambda_y$'s, the smaller the $t$'s in both approximations, leading to a better agreement between BCMA and CMA results, as expected from our previous analytical considerations. Therefore, although the numerical calculations show that the CMA results are widely reliable, it is in this extreme section of the parameter space that the two approximations totally coincide. From Fig. 7 it is evident that the CM-$t$ never exceeds the limiting value of 1.4, so, even in the region of larger discrepancy between the CM and BCM results (large $\lambda$, relatively low $\lambda_y$), the CM mass solution remains real, as it should. The curves reflect the fact that the CM approximation tends to overestimate the mass, because it substitutes in the integrals the self-energy function, which rapidly decreases with momentum, by a constant.

D. BCMA in the $\lambda_y = 0$ Limit (QED case)

We shall now discuss the limiting case $\lambda_y = 0$, which reduces to (3+1)-dimensional QED with a decoupled self-interacting scalar field. Let us find the solutions for the masses in this case. It is clear from Eq. (17) that no scalar vev, and hence no scalar mass, is generated in this case. The fermion dynamical mass solution can be found from (46) evaluated at $\lambda_y = 0$. Taking into account that $\nu\epsilon \ll 1$ and using the asymptotic behavior $\arctan(x) \simeq \pi/2 - 1/x$, we obtain a closed-form result for $m^2$. This result coincides with the BCMA result found for QED within the ladder approximation (see Ref. [2] for details). As is known, it is qualitatively very close to its CMA counterpart $m^2 \simeq e^{-\pi\sqrt{\pi/\alpha}}$ [2]. Thus, we are corroborating here the conclusion of the authors of Ref. [2], namely, the reliability of the CMA approach in (3+1)-QED.
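Both ingredients of the QED-limit result are easy to verify numerically: the arctangent asymptotic used in the derivation, and the size of the CMA mass $m^2 \simeq e^{-\pi\sqrt{\pi/\alpha}}$ quoted from Ref. [2] (in the dimensionless units of the text, i.e. in units of $2eB$):

```python
import math

alpha = 1 / 137   # fine-structure constant

# arctan(x) ~ pi/2 - 1/x for large x; the residual error is ~ 1/(3 x**3)
x = 50.0
asym_err = abs(math.atan(x) - (math.pi / 2 - 1 / x))

# CMA dynamical fermion mass squared in QED, in units of 2eB
m2_cma = math.exp(-math.pi * math.sqrt(math.pi / alpha))
```

The exponent $\pi\sqrt{\pi/\alpha} \approx 65$ makes $m^2$ extraordinarily small, which is exactly why the Yukawa-driven enhancement discussed above is phenomenologically significant.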
It is worth noticing that the dynamical mass behavior is essentially determined by the infrared conditions on the self-energy, but it is practically indifferent to the ultraviolet boundary condition used in Ref. [2]. This explains why, despite using a momentum cutoff at $\sqrt{2eB}$ and imposing the second boundary condition at $x = 1$, we still get in the $\lambda_y = 0$ case the same result as in [2], where the momentum was allowed to run up to infinity.

IV. CONCLUDING REMARKS

In this paper we have performed a BCMA study of the magnetically catalyzed fermion and scalar masses in a (3+1)-dimensional Abelian Higgs-Yukawa theory in the presence of a constant magnetic field. Our results show that even in this multiple-coupling theory, the discrepancy between the masses obtained within the CMA and within the more accurate BCMA is not very significant, the difference in the mass squares being at most one order of magnitude. We find that the region where the CMA and BCMA results exactly coincide is defined by the condition $t = \nu\ln(1/m^2) \ll 1$. The BCMA calculations led to fermion masses many orders of magnitude larger than those obtained in the QED case, thereby confirming, within a more accurate approximation, that the Yukawa interactions strengthen the generation of the dynamical fermion mass by several orders of magnitude, a claim made in previous papers [12,13] based only on CMA results. As mentioned in the Introduction, a motivation for the inclusion of fermion-scalar interactions in the study of the magnetic catalysis was to find out whether this phenomenon could influence the phenomenology of the early universe. A fundamental question to understand here is whether the strengthening of the mass by the fermion-scalar interactions may have any impact on the electroweak phase transition.
For this effect to be of any significance for electroweak physics, a condition has to be met: during the electroweak transition, the universe has to be permeated by a primordial magnetic field strong enough to induce, even at temperatures comparable to the electroweak critical temperature, a modification in the value of the fermion mass. We should keep in mind that at temperatures below, but close to, the critical temperature for the electroweak spontaneous symmetry breaking, the fermion masses generated through the Higgs mechanism are very small, since the transition is expected to be either second order or weakly first order. Then, if the magnetic field is much larger than these tiny masses, the fermions will be mainly constrained to their LLL and the MC can be fully operative. However, this is only true if the thermal fluctuations are not strong enough to take the fermions out of the LLL. Another way to put this is to say that the critical temperature at which the magnetically induced fermion mass evaporates has to be larger than the electroweak critical temperature. Magnetic fields may well have been present in the early universe. In fact, there are very plausible arguments favoring the existence of primordial magnetic fields that can serve as the source of the seed fields required to explain the observed magnetic fields in galaxies and clusters of galaxies [23]. The literature on this topic is rich in possible primordial-field generating mechanisms, and many of them can produce very strong fields at and before the electroweak transition [36,37]. Although the model used in our calculations lacks the complexity of the electroweak theory, it shares some common features with the electromagnetic sector of the electroweak model, and so we expect that any conclusion drawn within our model can be seen as an indication (even if qualitative) of what the relevance of the effect would be in the electroweak context.
Taking into account that the critical temperature for the vanishing of the magnetically catalyzed fermion mass is typically of the order of the value of the dynamical mass at zero temperature [7,12], that is, $T_c \sim m_d(T=0)$, and that a reasonable estimate [36,37] for the primordial magnetic field at the electroweak scale is $\sim 10^{24}$ G, one obtains, for the values of $\lambda_y$ and $\lambda$ that give rise to the largest zero-temperature dynamical mass, that $T_c \sim 1~\mathrm{GeV} \ll T_{ew} \simeq 100~\mathrm{GeV}$. Hence, no magnetically induced mass would be present at the electroweak temperature, because temperature effects override field effects at this scale. Unless new sources of extremely large ($B \gg T^2$) primordial magnetic fields can be identified in the future, these results indicate that the MC has no relevance during the electroweak transition. Nevertheless, the outcomes of this work may be important for applications of the HY model in situations where magnetic field effects are present at sufficiently low temperatures. We expect that they will be particularly relevant in condensed matter applications. As mentioned in the Introduction, a HY theory has been proposed [22,26] to describe the observed emergence of a secondary quasiparticle gap in high-$T_c$ superconductors at certain doping levels. According to recent experiments [27], the secondary gap can also be triggered by an applied magnetic field. The resemblance of this behavior to the MC is intriguing and deserves a thorough investigation. Such a study, in turn, will require the extension of the results of the present paper to the two-dimensional case in order to make quantitative predictions that can be compared with experiment.
Measurement of the Kerr nonlinear refractive index and its variation among 4H-SiC wafers

The unique material properties of silicon carbide (SiC) and the recent demonstration of a low-loss SiC-on-insulator integrated photonics platform have attracted considerable research interest for chip-scale photonic and quantum applications. Here, we carry out a thorough investigation of the Kerr nonlinearity among 4H-SiC wafers from several major wafer manufacturers, and reveal for the first time that their Kerr nonlinear refractive indices can be significantly different. By eliminating various measurement errors in the four-wave mixing experiment and improving the theoretical modeling for high-index-contrast waveguides, the best Kerr nonlinear refractive index of 4H-SiC wafers is estimated to be approximately four times that of stoichiometric silicon nitride in the telecommunication band. In addition, experimental evidence is developed that the Kerr nonlinearity in 4H-SiC wafers can be stronger along the c-axis than in the orthogonal direction, a feature that was never reported before.

Introduction

Silicon carbide (SiC) recently emerged as a promising photonic and quantum material due to its unique properties, including a wide transparency window spanning from the visible to the mid-infrared, simultaneously possessing second- and third-order optical nonlinearities, large thermal conductivity, and the existence of various color centers that can be exploited as single-photon sources or quantum memories [1][2][3]. In addition, SiC is a robust, CMOS-compatible material with its quality supported by a fast-growing industry, as single-crystal 4H-SiC substrates up to six inches are already commercially available at an affordable cost [4].
These features, coupled with the recent demonstration of a low-loss SiC-on-insulator integrated photonics platform [5][6][7][8][9], portend potential disruption of quantum information processing through scalable integration of SiC-based spin defects with a wealth of quantum electrical and photonic technologies on the same chip [3]. Despite the impressive progress made in SiC photonics over the past decade, some of its important photonic properties are yet to be fully explored [10]. For example, the Kerr nonlinear refractive index $n_2$ of SiC, a third-order nonlinear property that underpins optical nonlinear applications such as optical parametric oscillation (OPO) and Kerr frequency comb generation, is predominantly reported in the literature to be in the range of $(5-8)\times 10^{-19}$ m$^2$/W in the telecommunication band (see Table 1). (Note this number is approximately 2-3 times that of stoichiometric silicon nitride, which is around $2.5\times 10^{-19}$ m$^2$/W at 1550 nm.) However, our recent work suggested that 4H-SiC wafers from different manufacturers seem to yield different levels of Kerr nonlinearity, as $n_2$ of 4H-SiC from ST Microelectronics (formerly known as Norstel AB and hereinafter referred to as "Norstel" for short) is estimated to be near $(3.0 \pm 1.0)\times 10^{-19}$ m$^2$/W for the transverse-electric (TE) modes, while that of II-VI Incorporated ("II-VI" for short) 4H-SiC wafers is even lower [11]. A closer look into the literature also exposes the limited data points relied upon by most of the existing works for the $n_2$ estimation, which tended to ignore various uncertainties in the experiment and thus introduced sizeable errors into the process [11][12][13][14][15][16]. In this work, a systematic approach for the accurate measurement of the Kerr nonlinearity in 4H-SiC wafers is developed. We focus on on-axis, semi-insulating 4H-SiC wafers from three major wafer manufacturers, i.e., Norstel, II-VI, and Cree.
While both Cree and Norstel SiC wafers are of high purity (i.e., undoped), the II-VI wafers attain high resistivity through vanadium doping, which has been shown to result in color centers that emit single photons in the telecommunication O band (1278-1388 nm) [17]. Our study confirms, for the first time, that the Kerr nonlinearities of the aforementioned commercial 4H-SiC wafers are indeed significantly different, with Cree wafers exhibiting the highest $n_2$ of $(9.1 \pm 1.2)\times 10^{-19}$ m$^2$/W while II-VI wafers exhibit the lowest $n_2$ of $(2.3 \pm 0.5)\times 10^{-19}$ m$^2$/W. For 4H-SiC wafers, our work also points to a stronger Kerr nonlinearity along the $c$-axis compared to the orthogonal direction, with the Norstel 4H-SiC wafers exhibiting $n_2$ of $(4.6 \pm 0.6)\times 10^{-19}$ m$^2$/W for the transverse-magnetic (TM, dominant electric field along the $c$-axis) modes and $n_2$ of $(3.1 \pm 0.5)\times 10^{-19}$ m$^2$/W for the TE modes (electric field in the wafer plane). Finally, our examination of various waveguide geometries made of the same SiC material also compels an important correction to the existing model for the $n_2$ estimation in high-index-contrast waveguides; otherwise considerable errors can be introduced.

FWM experiment and $Q$ measurement

Our approach to determining the Kerr nonlinear refractive index is based on measuring the four-wave mixing (FWM) efficiency between two narrow-linewidth lasers (pump and signal, linewidth < 100 kHz) in high-$Q$ SiC microresonators (intrinsic $Q$s in the range of 1-5 million) [13,15,19]. For this purpose, 4-inch SiC-on-insulator (SiCOI) wafers were fabricated using a customized bonding and polishing approach (NGK Insulators) for on-axis, semi-insulating 4H-SiC substrates obtained from Norstel, II-VI and Cree (see Supplementary for their wafer specifications). After dicing each wafer into 1 cm × 1 cm chips, we fabricate high-$Q$ SiC microring and racetrack resonators using e-beam lithography and dry etching.
In addition, grating couplers are designed to facilitate the input and output coupling between fibers and on-chip waveguides, with a typical insertion loss near 5-7 dB at the center wavelength for each grating coupler [11]. As illustrated in Fig. 1, light from the pump laser (Toptica CTL1550, output power fixed at 10 mW) and the signal laser (Agilent 81642A, output power fixed at 1 mW) is combined before being coupled to the on-chip waveguide through a fiber V-groove array (VGA) [11]. The power of each laser can be externally varied through a variable optical attenuator (VOA) to minimize thermo-optic bistability and higher-order idler generation in the FWM experiment. In addition, the high attenuation accuracy and repeatability (error < 0.1 dB) of the VOAs enable an individual estimation of the on-chip power for the pump and signal separately. This is achieved by applying the maximum attenuation (60 dB) to the pump (signal) laser while keeping the normal attenuation level (< 15 dB) for the signal (pump) laser, measuring the off-chip powers from the VGA fibers ("in" and "out" ports as illustrated in Fig. 1) using an optical power meter (OPM), and inferring the corresponding on-chip signal (pump) power with the estimated insertion loss. At the output, the pump and signal wavelengths are separated into two paths through a wavelength-division multiplexing (WDM) filter, allowing each of them to be photo-detected and tuned to their respective resonances from the transmission scan [20]. Once the pump and signal laser wavelengths are aligned to the selected cavity resonances, we measure the idler power, which is generated from the FWM process in the SiC microresonator, using an optical spectrum analyzer (OSA). At this stage, we also tune the pump/signal laser out of resonance and verify that the power measured by the OSA is consistent with the number obtained previously from the OPM.
Such a power calibration scheme proves to be critical, as the insertion loss from the chip can deteriorate by 1-2 dB due to unstable fiber-grating alignment during the resonance scan and/or the idler power measurement, resulting in an inaccurate estimation of on-chip powers. We define the FWM efficiency $\eta$ as the ratio between the idler power (denoted as $P_i$, which is the on-chip idler power in the waveguide) and the signal power (denoted as $P_{s,\mathrm{in}}$, which is the on-chip signal power before entering the SiC microresonator). In the frequency-matched scenario, i.e., when the pump, signal and idler are all perfectly aligned to their respective resonances and their wavelengths are close to each other, this FWM efficiency is given by the expression in Eq. (1) [19], where $\lambda_p$ is the pump wavelength; $n_g$ is the group index of the resonant modes in the C band; $L$ is the circumference of the SiC microresonator; $\gamma$ is the FWM nonlinear parameter, which is proportional to the Kerr nonlinear refractive index $n_2$; $P_{p,\mathrm{in}}$ denotes the on-chip pump power before entering the SiC microresonator; and $Q_L$ ($Q_c$) is the loaded (coupling) $Q$ of the resonant mode, with the subscripts $p$, $s$, $i$ denoting the pump, signal, and idler, respectively. According to Eq. (1), $\gamma$ is explicitly determined by the factors in Eq. (2), where the first multiplying factor can be accurately computed given that $\lambda_p$ and $L$ are known, and $n_g$ is inferred from the mode's free spectral range (FSR, which is related to $n_g$ through $\mathrm{FSR} = c/(n_g L)$, with $c$ being the speed of light in vacuum). The second multiplying factor in Eq. (2), which is the ratio between the on-chip idler power (after the SiC microresonator) and signal power (before the SiC microresonator), is experimentally determined by tuning the pump laser into resonance and recording the idler power (when the signal is on resonance) and the signal power (when it is off resonance), both from the OSA (see Fig. 1).
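Since the explicit form of Eq. (1) did not survive this extraction, the sketch below uses the commonly adopted triply resonant FWM model built from on-resonance power buildup factors, $B = (\lambda/(\pi n_g L))\,(2Q_L^2/Q_c)$; all numerical values are illustrative stand-ins, not the paper's:

```python
import math

def buildup(lam, ng, L, QL, Qc):
    """On-resonance circulating-power buildup of an all-pass resonator:
    B = (lam / (pi * ng * L)) * (2 * QL**2 / Qc); at critical coupling
    (Qc = 2 QL) this reduces to finesse / pi."""
    return (lam / (math.pi * ng * L)) * (2 * QL ** 2 / Qc)

def fwm_efficiency(gamma, Pp, lam, ng, L, Q):
    """eta = P_i / P_s,in for frequency-matched FWM, with the pump
    buildup entering squared: eta = (gamma*Pp*L)**2 * Bp**2 * Bs * Bi."""
    Bp = buildup(lam, ng, L, *Q["pump"])
    Bs = buildup(lam, ng, L, *Q["signal"])
    Bi = buildup(lam, ng, L, *Q["idler"])
    return (gamma * Pp * L) ** 2 * Bp ** 2 * Bs * Bi

# Illustrative numbers: 36-um-radius ring, QL = 1e6, Qc = 2e6 for all modes
L = 2 * math.pi * 36e-6
Q = {k: (1e6, 2e6) for k in ("pump", "signal", "idler")}
eta = fwm_efficiency(gamma=1.0, Pp=1e-3, lam=1.55e-6, ng=2.6, L=L, Q=Q)
```

Inverting this relation for $\gamma$, given the measured $\eta$, the on-chip pump power, and the calibrated $Q$ factors, is the content of Eq. (2).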
This practice removes uncertainty in the common loss factor shared by the signal and idler, including the insertion loss from the grating coupler and fiber connectors. To address the possibility that this loss factor might be slightly different between the signal and idler, we switch their spectral positions (i.e., set the signal laser at the idler wavelength while keeping the pump the same) and obtain another FWM efficiency for statistical averaging. As such, the FWM efficiency can be reliably measured with an estimated relative uncertainty < 10%. The third factor in Eq. (2) is inversely proportional to the on-chip pump power, whose error is predominantly caused by the unstable fiber-grating alignment during the FWM experiment. With our power calibration protocol in place (see discussions following Fig. 1), its relative uncertainty is controlled to be < 10%. The final constituent factor in Eq. (2) indicates the crucial importance of accurate $Q$ estimation, as $\gamma$ scales as $Q_c^2/Q_L^4$ and a 10% error in $Q$ can generate up to 20%-40% errors in the $\gamma$ estimation. To accurately determine the $Q$ factors from the linear swept-wavelength transmission measurement, we divert a portion of the tunable laser output to a fiber-based MZI, which has a path difference of three meters and an FSR of 68.1 MHz around 1550 nm (see Fig. 1). By scanning the SiC chip and MZI simultaneously and using the known FSR of the MZI to calibrate the swept wavelengths, we are able to correct various scan nonidealities arising from the limited tuning resolution of tunable lasers. Take the signal laser (Agilent 81642A) for example: the frequency tuning rate of the piezo scan (i.e., varying the laser frequency in a narrow range by applying an external voltage) is found to be nonuniform across a linear voltage scan (Fig. 2(a)).
This directly affects the $Q$ estimation, as the inferred cavity linewidth will depend on the relative position of the resonance within the scan range, which is difficult to control precisely from one scan to another. On the other hand, repeated continuous frequency sweeps from the laser's motor scan also yield 10%-20% fluctuations in the inferred loaded $Q$s without calibration (Fig. 2(b)). Such scan nonidealities are ultimately related to the limited wavelength resolution (pm level) present in most tunable lasers, which poses a challenge to determining optical $Q$s accurately at the million level and above. Hence, the introduction of the MZI into this experiment for the scan calibration becomes necessary, which reduces the uncertainty in the $Q$ estimation to < 3% (Fig. 2(b)). Despite the developed calibration processes for the power and $Q$ measurement, appreciable variations (on the order of 20%-30%) in the $\gamma$ estimation (and hence $n_2$) still exist. To further reduce the uncertainties, we carry out the FWM experiment on multiple devices for each SiC material so that a statistically meaningful average is obtained. Moreover, different combinations of azimuthal orders in each device are employed to account for the variations in their intrinsic and coupling $Q$s, which are partially attributed to their scattering-limited radiation losses and frequency-dependent couplings [21]. In Fig. 3, exemplary results for four different devices based on the Norstel SiC are presented: the two devices corresponding to Figs. 3(a) and 3(b) are 36-µm-radius microrings from the SiC chip that has been previously used for microcomb generation [11], with an approximate SiC thickness around 475 nm; on the other hand, the devices corresponding to Figs. 3(c) and 3(d) are larger racetrack resonators (bending radius of 100 µm and circumference near 1.3 mm), which are fabricated on a different SiC chip with a nominal thickness around 850 nm.
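The MZI-based frequency calibration can be sketched as follows: zero crossings of the AC-coupled MZI fringe are spaced by half the MZI FSR, so counting crossings and interpolating between them converts sample index into relative optical frequency regardless of how nonuniform the sweep is. The synthetic quadratic sweep below is an illustration, not measured data:

```python
import numpy as np

FSR_MZI = 68.1e6  # Hz, fiber-MZI free spectral range quoted in the text

def calibrate_frequency_axis(mzi_signal):
    """Relative frequency axis from MZI fringes: consecutive zero
    crossings of the AC-coupled fringe are FSR/2 apart; samples in
    between are linearly interpolated."""
    s = mzi_signal - np.mean(mzi_signal)
    crossings = np.where(np.diff(np.signbit(s)))[0]
    freq_at_crossings = np.arange(crossings.size) * FSR_MZI / 2
    samples = np.arange(mzi_signal.size, dtype=float)
    return np.interp(samples, crossings.astype(float), freq_at_crossings), crossings

# Synthetic nonlinear sweep (quadratic in sample index), as in Fig. 2(a)
n = np.arange(5000)
true_freq = 2e9 * (n / n[-1]) ** 2       # 0 -> 2 GHz, nonuniform tuning rate
fringe = np.cos(2 * np.pi * true_freq / FSR_MZI)
rel_freq, crossings = calibrate_frequency_axis(fringe)
```

In the real measurement, fitting the resonance lineshape against `rel_freq` rather than the raw scan axis is what removes the linewidth (and hence $Q$) bias described above.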
To ensure frequency matching between the interacting waves in the FWM process, we choose resonances belonging to the same mode family with only one FSR of separation and verify that their dispersion is indeed small enough [11]. The mode order and polarization of each mode family are identified by comparing the measured FSR and coupling $Q$s to the simulation results [21]. While in theory we should expect a uniform $\gamma$ for the same mode family, the fluctuations observed in Fig. 3 indicate that the aforementioned experimental uncertainties in the $\gamma$ estimation cannot be completely removed.

3. $n_2$ estimation from measured $\gamma$

After extracting $\gamma$ from the FWM experiment for each device, the final step in the Kerr nonlinear refractive index measurement is to connect $\gamma$ to $n_2$ based on $\gamma = 2\pi n_2/(\lambda A_\mathrm{eff})$, where $A_\mathrm{eff}$ is the effective mode area. The exact definition of $A_\mathrm{eff}$, however, is not well agreed upon in the literature. For example, one common version of $A_\mathrm{eff}$ that is applicable to low-index-contrast waveguides takes the form of Eq. (3) [22], where $E(x, y)$ is the electric field of the waveguide mode under consideration and $x$, $y$ are the coordinates in the waveguide cross-section. (Note the denominator in Eq. (3) is only integrated within the waveguide core, which is the only material assumed to possess a nonzero $n_2$.) For high-index-contrast waveguides, which is the case for SiCOI, we believe that $A_\mathrm{eff}$ needs to be modified as in Eq. (4) (see the derivation in Sect. III.D of the Supplementary of Ref. [20]), where $\epsilon(x, y)$ is the relative permittivity and $n_0$ denotes the refractive index of the waveguide core. Note that while the first multiplying factor in Eq. (4) resembles the effective mode volume derived in Ref. [23], an additional correcting factor, which depends on the ratio between $n_0$ and $n_g$ (group index), is introduced here. This factor can be intuitively understood from the fact that $n_2$ is defined for the bulk material while $\gamma$ is obtained from confined waveguide modes.
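Eq. (3) is straightforward to evaluate on a discretized mode profile. The snippet below uses a toy Gaussian $|E|^2$ over a rectangular core (indices and dimensions are assumptions, not a real SiC mode solve), and applies an $n_g/n_0$-type correction of the kind described in the text; the exact placement of that factor in Eq. (4) is not reproduced here:

```python
import numpy as np

# Toy mode profile: Gaussian |E|^2 over a 2.0 x 0.6 um rectangular core
x = np.linspace(-3e-6, 3e-6, 601)
y = np.linspace(-1.5e-6, 1.5e-6, 301)
X, Y = np.meshgrid(x, y, indexing="ij")
E2 = np.exp(-(X / 0.9e-6) ** 2 - (Y / 0.35e-6) ** 2)   # |E(x, y)|^2
core = (np.abs(X) < 1.0e-6) & (np.abs(Y) < 0.3e-6)     # nonlinear core only
dA = (x[1] - x[0]) * (y[1] - y[0])

# Eq. (3): numerator integrates |E|^2 over the full plane; denominator
# integrates |E|^4 over the core, the only material with nonzero n2
A_eff_3 = (np.sum(E2) * dA) ** 2 / (np.sum((E2 ** 2)[core]) * dA)

# Illustrative high-contrast correction factor from the ratio of the
# group index to the core index (direction of the ratio assumed here)
n_g, n0 = 2.8, 2.6
A_eff_4_like = (n_g / n0) * A_eff_3
```

For a well-confined mode with $n_g \approx n_0$ the correction is negligible, which is exactly the regime where Eq. (3) remains adequate.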
Aside from the theoretical justification, experimental evidence for the correct $A_\mathrm{eff}$ can be developed by computing $n_2$ from the measured $\gamma$ for various waveguide geometries made of the same material, which should result in a consistent $n_2$. Such an example is provided in Table 2 for the SiC devices measured in Fig. 3. Focusing on the TM polarization, we find that Eq. (3) results in dramatically different numerical values of $n_2$ for the two distinct waveguide geometries corresponding to Figs. 3(b) and 3(d), despite the fact that they are both fabricated from the same Norstel SiC wafer. In contrast, the application of Eq. (4) leads to consistent $n_2$ (within measurement uncertainties) for a variety of waveguide geometries (more evidence in the Supplementary), which lends strong support to its validity. A closer look into the two $A_\mathrm{eff}$ formulas suggests (see Supplementary) that Eq. (3) is only applicable when the waveguide mode is well confined within the core and the corresponding group index is similar to the refractive index of the bulk material $n_0$ (e.g., the TE modes in Fig. 3); otherwise the more generic formula, i.e., Eq. (4), should be used for the $n_2$ estimation. [Table 2: $n_2$ estimation for the devices of Fig. 3, all of which were made from the same Norstel SiC material.] Given the sensitivity of the $\gamma$ estimation to the $Q$ measurement and the smaller uncertainties in the $\gamma$ estimation for the 36-µm-radius microrings compared to those of the larger racetrack resonators, we adopt the $n_2$ result for the Norstel material in Table 1 based on Figs. 3(a) and 3(b). In the Supplementary, we provide additional experimental data for the Cree and II-VI SiC wafers and summarize their results in Table 1, both of which are based on the FWM measurement in 36-µm-radius microrings.
We want to emphasize that one of the main conclusions of this work, that the Kerr nonlinear refractive index n₂ of wafers from the three major SiC manufacturers is significantly different, is unlikely to be caused by errors introduced in the connection from the experimentally measured γ to n₂. This is because we can focus on the TE-polarized modes that are well confined in the in-plane direction (waveguide widths > 2 µm), for which the different A_eff expressions yield similar results (see the Supplementary for a table summary for the TE modes).

Conclusion

In conclusion, we developed a systematic approach for the accurate measurement of the Kerr nonlinearity in 4H-SiC wafers, and showed, for the first time, that there are significant variations in the Kerr nonlinear refractive index among 4H-SiC wafers from different manufacturers. Our work also revealed a larger Kerr nonlinearity along the c-axis than in the orthogonal direction, and a necessary correction in the modeling of n₂ to obtain consistent results in high-index-contrast waveguides. We believe these findings, in particular the fact that the Kerr nonlinear refractive index of 4H-SiC can be up to four times that of stoichiometric silicon nitride, are crucial to the future development of the SiCOI platform for a variety of nonlinear applications in both the classical and quantum regimes.

4H-SiC wafer specification

We list the specifications of the 4-inch, semi-insulating 4H-SiC wafers that have been used in this work in Table S1.

(Cree, production grade, 5 × 10⁵)

Table S1. Wafer specifications of 4-inch, semi-insulating 4H-SiC wafers obtained from the three major SiC wafer manufacturers. MPD: micropipe density. It is worth noting that for the II-VI 4H-SiC wafers, optical tests on multiple (> 5) wafers from different batches confirm that their optical properties, including the Kerr nonlinearity, are fairly consistent with no noticeable changes.
For the Norstel SiC, we only managed to obtain two wafers of the test grade. Due to their significant warp values, uneven SiC thicknesses (up to 100 nm variation) are observed in the device layer following the bonding and polishing step [11]. The Cree data are based on SiC chips made from a single Cree wafer of the production grade (which does not seem to have an inspection report).

γ measurement and n₂ estimation for Cree devices

We perform similar device fabrication and four-wave mixing (FWM) measurements for the Cree SiC wafer as we did in the main text for the Norstel material. The Cree chip has an estimated thickness of (630 ± 30) nm based on reflectometry. The SiC microrings have a radius of 36 µm and varied ring widths. In the dry etching step, we remove approximately 500 nm of SiC (calibrated using a profilometer), leaving a pedestal layer with a nominal thickness around 130 nm. At the end of the fabrication, a 1-µm-thick PECVD oxide layer is deposited on top of the SiC devices.

Fig. S1. For (a) and (b): the left figure shows the measured loaded Q (Q_L) as well as the inferred coupling (Q_c) and intrinsic (Q_i) Qs for the resonances that have been used in the FWM experiment (pump, signal, and idler are separated by only 1 FSR); on the right we plot the extracted γ for varied pump wavelengths (i.e., different azimuthal orders), with the blue diamond (red star) curve corresponding to the case where the signal wavelength is smaller (larger) than the pump wavelength.

(Row for Fig. S1(b): TM₀₀, 2500 ± 100, 630 ± 30, 3.95 ± 0.11, 13.6 ± 1.2, 9.4 ± 0.9.)

Table S2. Estimation of the Kerr nonlinear refractive index for the Cree SiC devices shown in Fig. S1. Both devices have an etch depth near 500 nm and a top cladding layer of oxide. The sidewall angle of the devices is estimated to be near 80 degrees.

In Fig. S1, we present exemplary results for the TE and TM resonances supported by the SiC microrings.
Their polarization and mode order are identified by comparing the measured FSRs and coupling Qs to the simulation results [21]. Using the extracted γ, we estimate n₂ in Table S2, taking the uncertainties in the waveguide dimensions into consideration. While the mean value of n₂ for the TM polarization (whose dominant electric field is along the c-axis) is slightly larger than that of the TE polarization (whose dominant electric field is orthogonal to the c-axis), this difference (≈ 5%) is within the measurement error and is not statistically significant. Therefore, we averaged n₂ over the TE and TM polarizations in Table 1 of the main text and increased its uncertainty slightly to account for both cases.

γ measurement and n₂ estimation for II-VI devices

Likewise, we fabricate 36-µm-radius SiC microrings on semi-insulating II-VI 4H-SiC (primary grade) chips and perform FWM experiments to extract their γ and n₂. The II-VI chip has an estimated SiC thickness of (600 ± 30) nm based on reflectometry. In the dry etching process, we remove approximately 500 nm of SiC, leaving a pedestal layer with a nominal thickness around 100 nm. For this chip, the top cladding is air.

Fig. S2. For (a) and (b): the left figure shows the measured loaded Q (Q_L) as well as the inferred coupling (Q_c) and intrinsic (Q_i) Qs for the resonances that have been used in the FWM experiment (pump, signal, and idler are separated by only 1 FSR); on the right we plot the extracted γ for varied pump wavelengths (i.e., different azimuthal orders), with the blue diamond (red star) curve corresponding to the case where the signal wavelength is smaller (larger) than the pump wavelength. Note that the Q axis in (a) is on a log scale, as the coupling Qs of the TE₀₀ mode family are much larger than the intrinsic Qs (i.e., the resonances are under-coupled).

In Fig. S2, we present representative results for the TE and TM resonances supported by the SiC microrings.
As can be seen, the mean value of n₂ along the c-axis (TM) is approximately 20% to 30% larger than that of the orthogonal direction (TE). Nevertheless, this difference is still within the measurement uncertainties. As such, we take the averaged n₂ for the TE and TM polarizations in Table 1 of the main text and increase its uncertainty to account for both cases.

Detailed comparison of different A_eff expressions

In this section, we take a closer look into the two A_eff formulas that were discussed in the main text. For convenience, we reproduce their expressions below:

A_eff = (∬ |E(x, y)|² dx dy)² / ∬_core |E(x, y)|⁴ dx dy,   (S1)

A_eff = (n₀/n_g)² · (∬ ε(x, y)|E(x, y)|² dx dy)² / (n₀⁴ ∬_core |E(x, y)|⁴ dx dy),   (S2)

where E(x, y) is the electric field of the waveguide mode under consideration; x, y are the coordinates in the waveguide cross-section; ε(x, y) is the relative permittivity; and n₀ denotes the refractive index of the waveguide core. In the literature, Eq. S1 (Eq. 3 in the main text) is commonly used for the calculation of γ from n₂ as γ = 2πn₂/(λA_eff) [22]. As explained in the main text, we believe that a modified formula, i.e., Eq. S2 (Eq. 4 in the main text), is required for high-index-contrast waveguides [20]. By comparing the inferred n₂ from these two expressions for varied waveguide geometries made of the same material, we find that Eq. S2 provides a consistent estimation of n₂, which lends strong support to its validity. By contrast, results based on Eq. S1 often lead to dramatic variations in n₂ that are difficult, if not impossible, to explain for the high-quality, single-crystal materials used in this work. To better understand the difference between the two A_eff expressions, in particular their reasonable agreement for the TE-polarized modes and significant disagreement for the TM-polarized modes in Table 2 (of the main text), we use the waveguide modes corresponding to Figs. 3(a) and 3(b) (of the main text) as an example. As shown in Fig. S3, the TE modes are well confined within the waveguide core and their group index is close to the material index n₀ (n₀ ≈ 2.56 at 1550 nm).
As a result, the difference between Eqs. S1 and S2 is relatively small. The TM mode, on the other hand, extends further outside the waveguide core, given that the vertical dimension is much smaller than the horizontal dimension. This results in a 35% reduction in the field integral of A_eff when the electric field is weighted by the relative permittivity (i.e., ε), as done in Eq. S2, compared to the unweighted integral of Eq. S1. In addition, Eq. S2 has another multiplying factor that depends on the ratio between n₀ and n_g. Because the group index of the TM mode is considerably larger than n₀, this factor contributes another 30% reduction in the effective mode area. Combined, the numerical value of A_eff given by Eq. S2 is approximately 46% of that obtained with Eq. S1 for the waveguide mode corresponding to Fig. 3(b). We believe the data presented in this paper unanimously support the adoption of Eq. S2 (Eq. 4 in the main text) as the general formula for connecting γ to n₂, while Eq. S1 (Eq. 3 in the main text) is only applicable for waveguide modes that are well confined in the waveguide core and whose group index is similar to the refractive index of the bulk material.

Fig. S3. Computation of the two different expressions of A_eff, i.e., Eq. S1 and Eq. S2, for the waveguide modes corresponding to Figs. 3(a) and 3(b) in the main text. While the numerical values of A_eff for the TE polarization are reasonably close between the two formulas, their results differ by more than a factor of two for the TM polarization, contributed by the ε-weighted field integral and by a factor depending on the ratio between n_g and n₀. Both waveguides have an oxide cladding underneath and an air cladding on top.

Table S4. Summary of the experimental results for the TE₀₀ mode family in 36-µm-radius SiC microrings made from semi-insulating, on-axis 4H-SiC wafers from the three major wafer manufacturers. The two A_eff expressions (i.e., Eq. 3 and Eq. 4 in the main text) provide reasonably close estimations of n₂ for each material, confirming that its numerical values are indeed significantly different among 4H-SiC wafers produced by II-VI, Norstel, and Cree.

Summary

Finally, we want to emphasize that one of the main conclusions of this work, that the Kerr nonlinear refractive index n₂ from the three major SiC wafer manufacturers is significantly different, is unlikely to be caused by the errors introduced in the connection from the experimentally measured γ to n₂. This is because we can focus on the TE-polarized modes that are well confined in the in-plane direction (waveguide widths > 2 µm), for which the different A_eff expressions yield similar results. For this purpose, we summarize the experimental results corresponding to the TE₀₀ mode family in 36-µm-radius SiC microrings and the estimated n₂ in Table S4.
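To make the comparison between the two area definitions concrete, they can be evaluated numerically on a discretized mode profile. The sketch below uses a toy Gaussian mode and a thin rectangular core, and assumes a form of Eq. S2 with an (n₀/n_g)² prefactor and an ε-weighted field integral; the grid, mode shape, and index values are illustrative assumptions, not the simulated modes of Fig. S3.

```python
import numpy as np

def a_eff_s1(E2, core):
    """Eq. S1 style: unweighted field integral; denominator over the core only.
    Result is in grid-cell units (multiply by the cell area for physical units)."""
    return (E2.sum()) ** 2 / (E2[core] ** 2).sum()

def a_eff_s2(E2, eps, core, n0, ng):
    """Assumed Eq. S2 style: permittivity-weighted integral plus (n0/ng)^2 factor."""
    num = ((eps * E2).sum()) ** 2
    den = n0 ** 4 * (E2[core] ** 2).sum()
    return (n0 / ng) ** 2 * num / den

# Toy cross-section: a Gaussian |E|^2 that leaks well outside a thin core,
# mimicking a weakly confined TM-like mode.
x = np.linspace(-2, 2, 401)
y = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(x, y)
E2 = np.exp(-(X / 1.0) ** 2 - (Y / 0.4) ** 2)       # |E|^2
core = (np.abs(X) < 1.0) & (np.abs(Y) < 0.25)       # thin rectangular core
n0, n_clad, ng = 2.56, 1.45, 3.1                    # assumed indices (ng > n0)
eps = np.where(core, n0 ** 2, n_clad ** 2)          # relative permittivity map

A1 = a_eff_s1(E2, core)
A2 = a_eff_s2(E2, eps, core, n0, ng)
print(f"A_S2 / A_S1 = {A2 / A1:.2f}")               # < 1 for a poorly confined mode
```

For a mode that leaks into the lower-permittivity cladding and has n_g > n₀, both effects shrink the S2-style area relative to S1, mirroring the 35% and 30% reductions quoted above for the TM mode.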
Optical generation of high carrier densities in 2D semiconductor heterobilayers

We realize a Mott transition from interlayer excitons to charge-separated electron/hole plasmas in 2D WSe₂/MoSe₂ heterobilayers.

INTRODUCTION

Two-dimensional (2D) transition metal dichalcogenides (TMDCs) are emerging platforms for exploring a broad range of electronic, optoelectronic, and quantum phenomena. These materials feature strong Coulombic interactions, making them ideal for studying highly correlated quantum phenomena as a function of charge carrier density. Seminal demonstrations include, among others, charge density waves and superconductivity in TiSe₂ and MoS₂ by electrostatic gating (1-4). These exciting demonstrations have been possible due to the high charge carrier densities (~10¹⁴ cm⁻²) achievable with ionic liquid gating. Under bias, a capacitive electrical bilayer is formed between the charge carriers in the 2D material and counter ions in the liquid. Among the limitations of using a liquid as the dielectric is that controlling the charge carrier density requires gate switching near room temperature, whereas the interesting electronic phases appear mostly upon cooling, on hour time scales, under the gate bias. Alternatively, in TMDC type II heterobilayers, photoexcited electrons and holes separate on femtosecond time scales (5, 6) to form oppositely charged monolayers. While these spatially separated electrons and holes form Coulomb-bound interlayer excitons (7-10), the insulating exciton gas can be transformed into conducting charge-separated electron/hole (e/h) plasmas if the excitation density is increased above the Mott threshold (n_Mott) (11, 12), as illustrated schematically at the top of Fig. 1 for the WSe₂/MoSe₂ heterobilayer studied here. The Mott transition has been observed in optically excited monolayer and bilayer WS₂ (13), but there the electron and hole plasmas exist in the same material, which remains charge neutral.
In contrast, TMDC heterobilayers host spatially separated electrons and holes with long lifetimes (7-10, 14). Therefore, these systems offer a unique opportunity to control high carrier densities in individual 2D monolayers. In this case, the resulting e/h bilayer across the heterointerface in the presence of photoexcitation, particularly under continuous wave (CW) conditions, resembles the capacitive electric bilayer in an ionic-gated 2D material. Here, we use photoluminescence (PL) spectroscopy and time-resolved (TR) reflectance spectroscopy to demonstrate an optically driven Mott transition from interlayer excitons to charge-separated e/h plasmas in the WSe₂/MoSe₂ heterobilayer. The experimental findings are supported by calculations from quantum theory. The achieved carrier density is as high as 4 × 10¹⁴ cm⁻², more than two orders of magnitude above the Mott density.

RESULTS

Experiments: Mott transition from interlayer exciton to charge-separated plasmas

We use transfer stacking to form WSe₂/MoSe₂ heterobilayers encapsulated by hexagonal boron nitride (h-BN), with the two TMDC monolayers aligned within the light cone (15) for radiative interlayer exciton emission (twist angle θ = 4° ± 2° from K/K or K/K′ stacking), and with a dark heterobilayer sample with θ = 13° ± 2° (from K/K or K/K′ stacking) as a control (see fig. S1 for optical images and figs. S2 and S3 for monolayer alignment). The WSe₂ and MoSe₂ monolayers are exfoliated from flux-grown single crystals, each with a defect density <10¹¹ cm⁻², two orders of magnitude lower than in commonly used commercial crystals (16). This is critical for suppressing the defect-mediated nonradiative recombination previously seen to dominate TMDC heterobilayers (6) and for sustaining high excitation densities in the charge-separated e/h plasmas. All measurements are carried out with the samples at 4 K in a liquid helium cryostat.
The spectroscopic measurements include steady-state PL with CW excitation (hν = 2.33 eV), TRPL with pulsed excitation (hν = 2.33 eV; pulse width, 150 fs), and transient reflectance spectroscopy with pulsed excitation (hν = 1.82 eV; pulse duration, 150 fs) (see fig. S4 for the experimental setup). At both excitation photon energies, we calculate the absorptance (percentage of incident light absorbed; see fig. S6) to be 8% in the low excitation density limit based on the reported dielectric functions of WSe₂ and MoSe₂ monolayers (17). We carefully calibrate the experimental electron/hole density, n_eh, by including the saturation of absorptance from self-consistent Maxwell semiconductor Bloch equation calculations (see figs. S8 and S9 and table S1). Under the experimental conditions used here, we find the measurements completely reproducible, i.e., there is no sample damage due to laser excitation. However, damage to other heterobilayer samples has been observed for laser excitation exceeding the upper limit shown here. Figure 1A shows the CW PL spectra from the WSe₂/MoSe₂ heterobilayer with n_eh spanning over four orders of magnitude (1.6 × 10¹⁰ to 3.2 × 10¹⁴ cm⁻²), achieved by varying the excitation power density from ρ = 0.5 W/cm² to 1.5 × 10⁵ W/cm². We quantitatively calibrate the equilibrium excitation density based on n_eh = F · σ · τ₀, where F is the incident photon flux, σ is the absorptance, and τ₀ is the population decay time constant determined in TRPL; both σ and τ₀ are numerical functions of n_eh (see below), determined systematically through our computations and measurements, respectively. A complete set of spectra with normalized peak intensities is shown for the 1.31 to 1.41 eV region in Fig. 1B. Also shown in Fig. 1A are PL spectra of MoSe₂ (blue) and WSe₂ (green) monolayers.
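In the low-density limit, the steady-state calibration n_eh = F · σ · τ₀ can be checked by hand. A minimal sketch, using the low-density values quoted in the text (σ ≈ 8%, τ₀ ≈ 200 ns), whereas in the full calibration both quantities depend on n_eh:

```python
# Steady-state carrier density under CW excitation: n_eh = F * sigma * tau0,
# with F the incident photon flux = power density / photon energy.
E_PHOTON_J = 2.33 * 1.602e-19   # hv = 2.33 eV in joules

def n_eh_steady_state(power_W_per_cm2, absorptance, tau0_s):
    flux = power_W_per_cm2 / E_PHOTON_J      # photons / (cm^2 s)
    return flux * absorptance * tau0_s       # carriers / cm^2

# Low-density-limit values from the text; order-of-magnitude check only,
# since sigma and tau0 are density dependent in the actual calibration.
n = n_eh_steady_state(0.5, 0.08, 200e-9)
print(f"n_eh ~ {n:.1e} cm^-2")
```

This lands at ~2 × 10¹⁰ cm⁻², consistent in magnitude with the 1.6 × 10¹⁰ cm⁻² quoted for ρ = 0.5 W/cm² once the density-dependent σ and τ₀ are used.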
The former is characterized by the neutral exciton (X_M) and the trion, while the latter consists of a series of peaks assigned to the exciton (X_W), trion, biexciton, etc., in agreement with previous reports (18-21). At n_eh ≤ 1 × 10¹³ cm⁻² in the heterobilayer, PL from intralayer excitons is completely quenched, while interlayer exciton (IX) emission with E_IX = 1.3566 ± 0.0005 eV (at n_eh = 1.6 × 10¹⁰ cm⁻²) dominates (7, 22). The IX peak grows with n_eh and blueshifts by only ~8 meV over the entire excitation density range, as is known for coupled (23) and uncoupled (24) III-V quantum wells. To experimentally detect the Mott transition, we plot in Fig. 1C the n_eh dependences of the integrated intensity of the interlayer PL (solid black circles) and its spectral full width at half maximum (FWHM; open red triangles), along with the intralayer PL (open black squares) integrated over the 1.50 to 1.75 eV energy range. The interlayer emission peak broadens substantially when the theory-assigned n_Mott = 3 × 10¹² cm⁻² (vertical dashed line; see below) is crossed. The corresponding FWHM increases by as much as a factor of four, verifying that excitons (and the narrow linewidth they sustain) are absent above n_Mott. We also observe that intralayer PL, corresponding to broad emission from the MoSe₂ and/or WSe₂ monolayer(s), reappears and grows for n_eh > 1 × 10¹³ cm⁻². As the charge-separated e/h plasmas form at n_eh > n_Mott, the band offsets between the two TMDC monolayers are reduced due to both band renormalization and charge separation. The latter can be understood from a simple capacitive model (see "The capacitor model for charge separation across the WSe₂/MoSe₂ heterobilayer" section in the Supplementary Materials), which predicts, from the e/h charge separation, a voltage buildup ΔV_C. This ΔV_C can cancel out the initial ~300 meV band offset (14), leading to the repopulation of the conduction (valence) band of WSe₂ (MoSe₂) and to intralayer radiative recombination.

Fig. 1. Excitation density-dependent PL and Mott transition in the WSe₂/MoSe₂ heterobilayer. PL spectra (A) and intensity-normalized PL spectra (B) from a BN-encapsulated WSe₂/MoSe₂ heterobilayer with θ = 4° ± 2° angular alignment between the two monolayers. a.u., arbitrary units. The spectra were obtained with CW excitation at hν = 2.33 eV and calibrated excitation densities (n_eh) between 1.6 × 10¹⁰ and 3.2 × 10¹⁴ cm⁻² at 4 K. The spectral region (hν ≥ 1.51 eV) corresponding to PL emission from monolayer WSe₂ and MoSe₂ is multiplied by a factor of 30. Also shown in (A) is PL from monolayer WSe₂ (green) and monolayer MoSe₂ (blue). Shown on the 2D pseudocolor plot (normalized intensity, I/I_P, where I_P is the peak intensity) in (B) are contours of 50% (solid curve) and 25%.

This interpretation is supported theoretically (Fig. 1D), which shows the computed source of the PL emission, i.e., the probability of simultaneously finding electrons and holes in the K valleys of WSe₂ (green), of MoSe₂ (blue), and between the two monolayers (black). The experimental onset of intralayer PL matches perfectly with the rise in the computed spontaneous emission source for MoSe₂, while PL from WSe₂ remains suppressed. Further support for this interpretation comes from PL measurements on the control sample, a WSe₂/MoSe₂ heterobilayer with θ = 13° ± 2° alignment. The large momentum mismatch between the K (or K′) valleys across the interface means that the interlayer excitons are nonradiative (10). We observe no measurable IX emission, but only intralayer PL at n_eh >> n_Mott (solid gray squares in Fig. 1C; see fig. S10 for the PL spectra). We determine the lifetimes of the interlayer exciton emission using TRPL under pulsed excitation (hν = 2.33 eV; see fig. S5 for the instrument response function, which gives a time resolution of ~40 ps).
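The capacitive estimate of the band-offset cancellation described above can be sketched with a parallel-plate model, ΔV_C = e·n_eh·d/(ε₀ε_r). The interlayer spacing d and effective permittivity ε_r below are illustrative assumptions (the quantitative model is in the Supplementary Materials), chosen only to show that densities of order 10¹³ cm⁻² plausibly offset a few hundred meV:

```python
# Parallel-plate estimate of the voltage built up by charge separation
# across the heterobilayer: dV = e * n * d / (eps0 * eps_r).
E_CHARGE = 1.602e-19   # elementary charge, C
EPS0 = 8.854e-12       # vacuum permittivity, F/m

def delta_v(n_eh_per_cm2, d_m, eps_r):
    """Voltage (V) across a plate separation d_m at sheet density n_eh_per_cm2."""
    n_per_m2 = n_eh_per_cm2 * 1e4
    return E_CHARGE * n_per_m2 * d_m / (EPS0 * eps_r)

# Assumed illustrative parameters: ~0.6 nm interlayer spacing, eps_r ~ 7.
for n in (1e12, 1e13, 3e13):
    print(f"n_eh = {n:.0e} cm^-2  ->  dV_C ~ {delta_v(n, 0.6e-9, 7.0) * 1e3:.0f} mV")
```

With these assumed parameters, ΔV_C grows linearly with n_eh and crosses the ~300 meV scale in the low-10¹³ cm⁻² range, consistent with the observed reappearance of intralayer PL above n_eh ≈ 1 × 10¹³ cm⁻².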
Figure 2A shows TRPL data over the broad initial excitation density range n₀ = 1.1 × 10¹⁰ to 6.0 × 10¹³ cm⁻². The corresponding time-integrated PL spectra (Fig. 2B) are similar to the CW PL spectra in Fig. 1A (see fig. S11 for direct comparisons). The PL decays at low excitation densities (10¹⁰ to 10¹¹ cm⁻²) are close to single exponentials, with a decay time constant of τ₀ = 200 ± 40 ns. As n₀ increases, particularly above n_Mott, the PL decay becomes faster and deviates strongly from a single exponential. This behavior is expected for plasma luminescence, as demonstrated in various III-V quantum well systems (25). Above the Mott transition, luminescence from the e/h plasmas scales approximately with n_eh². In addition, the carrier density may decay nonradiatively, e.g., via Auger recombination, which scales approximately with n_eh³. As a result, the PL decays faster at higher carrier densities, but this is difficult to analyze quantitatively due to the varying Auger scattering cross sections resulting from the expected density-dependent Coulomb screening. Figure 2C plots the initial PL decay time constant as a function of n₀. Our PL lifetimes are one to two orders of magnitude longer than those of previous reports on WSe₂/MoSe₂ heterobilayers (7, 22, 26), suggesting the suppression of nonradiative recombination in the less defective TMDC samples used here. These long PL lifetimes are essential to reaching excitation densities well above the Mott threshold and to obtaining high steady-state n_eh under CW excitation, as n_eh is proportional to τ₀. To further explore the properties of the charge-separated e/h plasmas in the WSe₂/MoSe₂ heterobilayer, we apply transient reflectance spectroscopy (time resolution ~40 fs; see fig. S5), which has been used before to probe excitons and electron-hole (e-h) plasmas in TMDC monolayers (13) and charge separation in heterobilayers (5, 6).
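The qualitative statement that an n² radiative term plus an n³ Auger term makes the decay faster and non-exponential at high density can be illustrated with a toy rate equation, dn/dt = −n/τ₀ − Bn² − Cn³. The coefficients B and C below are arbitrary illustrative values, not fitted to the data:

```python
def decay(n0, tau0=200e-9, B=1e-6, C=1e-19, dt=1e-10, t_max=2e-7):
    """Integrate dn/dt = -n/tau0 - B*n^2 - C*n^3 (n in cm^-2) with simple
    forward-Euler steps; return the time at which n falls to n0/2."""
    n, t = n0, 0.0
    while n > n0 / 2 and t < t_max:
        n += dt * (-n / tau0 - B * n ** 2 - C * n ** 3)
        t += dt
    return t

t_low = decay(1e10)    # excitonic regime: ~exponential, slow (half-time ~ tau0*ln2)
t_high = decay(1e13)   # plasma regime: n^2 and n^3 terms dominate, much faster
print(f"half-decay: {t_low * 1e9:.1f} ns (low n0) vs {t_high * 1e9:.1f} ns (high n0)")
```

At low n₀ the linear term dominates and the half-decay approaches τ₀·ln 2; at high n₀ the nonlinear terms shorten it substantially, mimicking the trend in Fig. 2C without claiming the paper's quantitative values.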
We excite the samples with a 150-fs pulse at 1.82 eV and probe the change in reflectance using broadband white light (1.2 to 1.8 eV). We present the transient reflectance, ΔR/R₀, as a function of the pump-probe delay (Δt), where ΔR = R − R₀; R is the reflectance at Δt, and R₀ is the reflectance without the pump. In the 2D limit and at low excitation densities, ΔR/R₀ is proportional to the transient absorption (27). Figure 3 (A to D) shows pseudocolor plots of transient reflectance spectra over a broad range of excitation densities. At n₀ ≤ n_Mott (Fig. 3, A and B), each spectrum is dominated by two prominent photobleaching peaks at ~1.62 and 1.70 eV, attributed to the reduction in oscillator strength (6) of transitions in the WSe₂ and MoSe₂ monolayers, respectively. The induced absorption signal (red) on the sides of the main bleaching peaks can be attributed to shifts in the intralayer transition energies resulting from the competing effects of screening/Pauli blocking of the Coulomb interaction and band renormalization. Note that, at n₀ < n_Mott, ΔR/R₀ is negligible below 1.5 eV, including in the IX region. This is expected, as the oscillator strength of the interlayer exciton is two orders of magnitude lower than those of the intralayer excitons in each monolayer (28). The absence of a ΔR/R₀ signal below 1.5 eV is evident in horizontal cuts at selected Δt values, shown for n₀ = 1.0 × 10¹¹ cm⁻² in Fig. 3E. In agreement with the CW results in Fig. 1A, transient reflectance spectra under pulsed excitation reveal plasma formation above the Mott density. At n₀ = 5.6 × 10¹² or 3.4 × 10¹³ cm⁻² (Fig. 3, C and D), the spectra show, in addition to the bleaching of the intralayer exciton transitions, broad induced absorption extending to the low-energy end (~1.3 eV) of the probe window. These broad features are evident in horizontal cuts (spectra) at short pump-probe delays, as shown for n₀ = 3.4 × 10¹³ cm⁻² in Fig. 3F.
This broad absorption feature is the optical signature of a 2D plasma, which consists of broad induced absorption (positive) extending to the renormalized bandgap and gain (negative) just above the bandgap (13, 29). While the spectroscopic measurements presented here were obtained at 4 K, we have also carried out PL measurements as functions of both excitation density and temperature, up to 48 K (fig. S12). The broadening of the PL emission peak across the Mott density is similarly observed at temperatures >4 K. However, the decrease of the excitonic emission intensity with temperature, and the broadening likely due to exciton-phonon scattering, make a quantitative analysis of the Mott transition less reliable at higher temperatures. Note also that the current manuscript focuses on the transition from interlayer excitons to charge-separated e/h plasmas in the WSe₂/MoSe₂ heterobilayer; Mott transitions from intralayer excitons to e-h plasmas have also been observed in the transient reflectance spectra of the individual WSe₂ and MoSe₂ monolayers (figs. S13 and S14). In the latter case, the e-h plasma is not charge separated and is overall charge neutral, similar to the observation of Chernikov et al. (13) on WS₂ monolayers and bilayers.

The theoretical description is based on the semiconductor Bloch equations (SBE), solved with a weak external probe field E(t) incident perpendicular to the TMDC heterobilayer. The photoexcited electrons and holes generated by a strong pump field are described in quasi-equilibrium by Fermi distribution functions f_k^a. The linear susceptibility in the frequency domain is used in a second step to derive the reflectance and absorptance spectra, as detailed below. In the SBE, material properties enter via the band structures ε_k^a, screened Coulomb matrix elements W_q, and dipole matrix elements d_k.
Band structure renormalizations due to the photoexcited carriers are given by the screened-exchange-Coulomb-hole self-energy Σ^a_{k,SXCH}, while plasma screening is described by a dielectric function in the long-wavelength approximation via W^{ab}_{k,k′} = ε⁻¹_{k−k′,pl} V^{ab}_{k,k′} (31). The band structure of the unexcited MoSe₂/WSe₂ heterobilayer is modeled in an effective mass approximation for the relevant conduction and valence band valleys, as shown in fig. S8. The energetic ordering of the bands is inspired by first-principles calculations (14), while we adjust the band edges to match our experimental reflectance spectra. We assume that the effective masses are approximately given by the masses of the respective monolayers as provided in (32). For the Q and Γ valleys, we average over both materials. The band edges and masses are collected in table S1. The Coulomb interaction between carriers located in different TMD layers is significantly weaker than the intralayer Coulomb interaction due to the spatial separation of the carriers in the growth direction. To account for this effect, we use model Coulomb matrix elements V^{αβ} in a 2D layer basis |α⟩ = {|MoSe₂⟩, |WSe₂⟩}, where the contribution of a certain layer α to the band a is given by |c^a_α(k)|². We assign the layer contributions according to the first-principles results in (14), as given in table S1. The matrix elements V^{αβ}_q are modeled by a macroscopic dielectric function ε^{−1,αβ}_{q,b} and a form factor F^{αβ}_q. The dielectric function for each layer combination is obtained by solving Poisson's equation for the respective dielectric structure (33), as shown in fig. S9. The dielectric constants of the TMD materials are computed as the geometric mean of the values given in (34), where the layer widths are also provided. The dielectric constant of h-BN is taken from (35).
The layer-substrate distance h₁ = 0.5 nm was found to be an appropriate value in (33), while we assume that the two TMD layers are slightly closer to each other, using h₂ = 0.3 nm. The form factor accounts for the confinement of the carriers inside the atomically thin layers via the confinement functions ξ_α(z). For the confinement functions, we assume eigenfunctions of the infinitely deep potential well with two nodes, due to the mostly d-like character of the electronic orbitals. To describe the light-matter interaction, we assume a circularly polarized electric field selecting dipoles in the K valley between like-spin bands. The numerical values of the intralayer dipoles are computed using the simple lattice model from (36), where we neglect the momentum dependence. For the interlayer transition dipoles, we assume a value that is 10 times smaller than that in the MoSe₂ monolayer (28). The SBE contain a phenomenological damping factor γ, which corresponds to the HWHM of the lines in the optical spectra. Because of excitation-induced dephasing, γ depends on the actual excited carrier density. We fix the value of γ at different densities by matching the simulated and experimental reflectance spectra. For the intralayer MoSe₂ transition, this yields γ = 25 meV for a carrier density n = 1.3 × 10¹² cm⁻², γ = 30 meV for n = 1.9 × 10¹² cm⁻², γ = 35 meV for n = 5.3 × 10¹² cm⁻², and γ = 50 meV for n = 3.13 × 10¹³ cm⁻². For the intralayer WSe₂ transition, we use a γ that is 50% larger to account for the stronger dephasing, in accordance with the experimental reflectance spectra. Figure 4A shows simulated transient reflectance spectra at excitation densities n₀ = 6 × 10¹¹, 4 × 10¹², and 3 × 10¹³ cm⁻², obtained from the theoretical optical absorptance and the experimental sample geometry. Also shown for comparison are experimental transient reflectance spectra (Δt = 1 ps) at similar n₀ values (Fig. 4B).
The simulations and experimental spectra are in excellent agreement, including the main features of the bleaching of the intralayer excitonic transitions at all excitation densities, the broad induced absorption feature above the Mott density, and stimulated emission near the renormalized bandgap at ~1.3 eV. This agreement provides strong support for the conclusion of a Mott transition from interlayer excitons to charge-separated e/h plasmas and for the calibration of the carrier density in the CW measurement in Fig. 1. Figure 4C shows calculated absorptance spectra at selected n_eh values. By determining at which n_eh the excitonic absorption resonance becomes bleached, we find n_Mott = 3 × 10¹² cm⁻². This value is close to the n_Mott = 1.6 × 10¹² cm⁻² obtained from an analytical estimate (29) of a₀·n_Mott^{1/2} ≈ 0.25 and an interlayer exciton radius of ~2 nm (14). More specifically, we follow the excitonic absorption as the exciton features gradually fade through broadening, from a clear peak to transparency and eventually to gain (24, 37). Below n_Mott, the presence of excitons significantly reduces scattering. There is an accelerated broadening after excitons cease to exist above n_Mott (24), and this leaves a signature in the increased PL linewidth. Note that the observed increase of the PL peak width above the Mott density is much larger than what was observed before in coupled III-V quantum wells (11, 12). The interlayer excitons in the 2D TMDC heterobilayer (7-10) are much more strongly bound and less Coulomb screened than their counterparts in III-V coupled quantum wells (11, 12); as a result, the Mott transition has a much larger effect on reducing the Coulomb screening in the former. In addition to revealing the Mott threshold through the disappearance of sharp excitonic features, the theoretical absorption spectra show the decrease in oscillator strength with increasing n_eh, as expected from Pauli blocking and screening effects.
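The analytical estimate quoted above can be verified directly: with a₀·n_Mott^{1/2} ≈ 0.25 and an exciton radius a₀ ≈ 2 nm, rearranging gives n_Mott = (0.25/a₀)²:

```python
# Analytical Mott-density estimate: a0 * sqrt(n_Mott) ~ 0.25  =>  n_Mott = (0.25/a0)^2
a0_cm = 2e-7                      # interlayer exciton radius, 2 nm expressed in cm
n_mott = (0.25 / a0_cm) ** 2      # resulting Mott density, cm^-2
print(f"n_Mott ~ {n_mott:.2e} cm^-2")   # ~1.6e12 cm^-2, as quoted in the text
```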
Optical transparency is reached at n_eh ≈ 4 × 10¹⁴ cm⁻², above which stimulated emission dominates. On the basis of the calculated optical spectra, we obtain the n_eh-dependent relative absorptance (σ/σ₀, where σ₀ is the absorptance in the low-n_eh limit) shown in Fig. 4D for two photon energies. These calculated results are used in the calibration of the experimental excitation densities (see fig. S7).

Mechanisms of interlayer PL emission from the heterobilayer

We now turn to the mechanism of PL emission from the interlayer excitons and charge-separated e/h plasmas. A comparison of the TRPL in Fig. 2 and the transient reflectance in Fig. 3 reveals a major discrepancy in the time scales involved. The PL decays are characterized by time constants of order 10² ns, but the transient reflectance features time constants in the range of 10¹ to 10² ps. We show kinetic profiles (vertical cuts of the transient reflectance spectra) at two representative probe energies, hν = 1.351 and 1.624 eV, for induced absorption (Fig. 3G) and photobleaching (Fig. 3H), respectively. Figure 3G shows little induced absorption at hν = 1.351 eV for n₀ = 1.0 × 10¹¹ and 9.6 × 10¹¹ cm⁻², as expected from the absence of plasmas. When n₀ is increased above n_Mott, we observe both positive (induced absorption) and negative (stimulated emission) ΔR/R₀ signals, consistent with the transformation to the charge-separated plasma regime. For the intermediate density n₀ = 5.6 × 10¹² cm⁻², stimulated emission dominates. At the highest density of n₀ = 3.4 × 10¹³ cm⁻², induced absorption dominates at Δt < 60 ps and stimulated emission at Δt > 60 ps. The kinetic profiles at hν = 1.624 eV (Fig. 3H) reveal the short-time nature of the photobleaching.
At n_0 = 1.0 × 10^11, 9.6 × 10^11, and 5.6 × 10^12 cm^-2, photobleaching (−ΔR/R_0) grows with a time constant of τ1 = 140 ± 30 fs, attributed to the ultrafast dissociation of intralayer excitons in each TMDC monolayer to form charge-separated states that increase the Pauli blocking effect. The photobleaching intensity peaks on a subpicosecond time scale and decays over longer time scales. At n_0 ≤ n_Mott (1.0 × 10^11 and 9.6 × 10^11 cm^-2), the bleaching intensity decays with a time constant of τ2 = 30 ± 10 ps. This time constant increases above n_Mott to τ2 = 90 ± 30 ps and τ2 = 290 ± 60 ps at n_0 = 5.6 × 10^12 and 3.4 × 10^13 cm^-2, respectively. There is a three-order-of-magnitude difference between the time constants for PL decay (τ_PL) and those of photobleaching recovery (τ2). The fast recovery in photobleaching therefore cannot result from the loss of photoexcited charge carriers to recombination, but rather from the scattering of these carriers away from the K valley. Computational studies on the WSe2/MoSe2 heterobilayer have shown that the conduction band is lower in energy at the Q point than at the K point, while the valence band energy at the Γ point is close to that at the K point (14). Following charge separation, intervalley scattering transfers carrier populations from the K valleys to the Q and Γ valleys. This process reduces Pauli blocking of optical transitions in the K valleys and accounts for the τ2 = 30 to 290 ps decay time constants. Efficient intervalley carrier scattering involves optical phonons, and its rate is decreased by screening as the excitation density is increased, thus accounting for the longer τ2 at higher n_0 above n_Mott. The Q and Γ valleys serve as carrier reservoirs; their momentum-indirect nature prohibits radiative recombination of electrons and holes in these valleys. Instead, scattering of electrons and holes back to the K valleys likely occurs before radiative recombination happens. 
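Time constants like τ1 and τ2 above are typically extracted by fitting a rise-and-decay model to the kinetic traces. Below is a minimal, hypothetical sketch with scipy on a synthetic trace generated from the quoted constants (τ1 = 0.14 ps, τ2 = 30 ps) — not the measured data or the authors' actual fitting code:

```python
import numpy as np
from scipy.optimize import curve_fit

def rise_decay(t, A, tau1, tau2):
    """Bleach that grows with time constant tau1 and recovers with tau2."""
    return A * (1.0 - np.exp(-t / tau1)) * np.exp(-t / tau2)

# Synthetic illustrative trace (delay axis in ps), densely sampled at early
# times so the 140-fs rise is resolved; small Gaussian noise added.
t = np.concatenate([np.linspace(0.01, 1.0, 200), np.linspace(1.1, 150.0, 400)])
rng = np.random.default_rng(0)
trace = rise_decay(t, 1.0, 0.14, 30.0) + rng.normal(0.0, 0.002, t.size)

popt, _ = curve_fit(rise_decay, t, trace, p0=(1.0, 0.2, 20.0))
print(f"tau1 = {popt[1] * 1e3:.0f} fs, tau2 = {popt[2]:.1f} ps")
```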
This explains the long PL lifetimes on the 10^2 ns time scale. In a similar proposal, dark traps have been suggested as exciton reservoirs for slow PL emission in monolayer MoS2 (38). DISCUSSION The results presented here establish photoinduced charge separation at van der Waals interfaces as an effective means to control 2D charge carrier densities. Using the WSe2/MoSe2 heterobilayer, we show the spectroscopic signature of the Mott transition from interlayer excitons to charge-separated e/h plasmas, in excellent agreement with calculations based on a fully microscopic quantum theory. We point out that the spectroscopy measurements probe the combined responses of the electron and hole plasmas across the heterobilayer interface. Resolving the individual response of the electron or hole plasma is challenging but possible with time- and angle-resolved photoemission spectroscopy, which is underway in our laboratory (39). The combined PL and transient reflectance measurements also reveal the participation of intervalley scattering and dark exciton/carrier reservoirs in radiative recombination dynamics. Photoinduced charge separation under CW conditions allows us to reach charge carrier densities as high as ~4 × 10^14 cm^-2, which is two orders of magnitude above the Mott density and at the same level demonstrated previously for gate-doped superconductivity in TMDCs (1-4). These findings suggest that photoinduced charge separation at van der Waals interfaces is an effective means to realize complex electronic phases in 2D materials, particularly photoinduced superconductivity under CW conditions. MATERIALS AND METHODS Preparation of 2D WSe2/MoSe2 heterobilayer samples Monolayers of WSe2 and MoSe2 were mechanically exfoliated from bulk crystals grown by the self-flux method. These monolayers had low defect densities (<10^11 cm^-2) (16). h-BN flakes 5 to 35 nm thick with flat surfaces were also obtained by mechanical exfoliation. 
The flakes (WSe2, MoSe2, and h-BN) were characterized by atomic force microscopy and Raman spectroscopy. The crystal orientations of the WSe2 and MoSe2 monolayers were determined by second harmonic generation (SHG) measurements on an inverted optical microscope (Olympus IX73). Linearly polarized femtosecond laser light (Coherent Mira 900, 80 MHz, 800 nm, 100 fs) was focused onto a monolayer with a 100×, numerical aperture (NA) 0.80 objective (Olympus LMPLFLN100X). The reflected SHG signal at 400 nm was collected by the same objective; filtered by a short-pass dichroic mirror, short-pass and band-pass filters, and a Glan-Taylor linear polarizer; detected by a photomultiplier tube (R4220P, Hamamatsu); and recorded by a photon counter (SR400, Stanford Research Systems). We obtained the azimuthal angular (θ) distribution of the SHG signal by rotating either the sample (40) or the laser polarization (41) (via a half waveplate) with fixed polarization detection. Because of the D3h symmetry, the nonvanishing tensor elements of the second-order susceptibility of WSe2 and MoSe2 monolayers are χ^(2)_yyy = −χ^(2)_yxx = −χ^(2)_xxy = −χ^(2)_xyx, where the x axis is defined as the zigzag direction. When we rotated the sample, the SHG intensity showed sixfold symmetry: I_⊥ ∝ cos^2(3θ) and I_∥ ∝ sin^2(3θ), where θ is the angle between the laser polarization and the zigzag direction. When we rotated the laser polarization, the SHG intensity showed fourfold symmetry: I_y ∝ cos^2(2θ) and I_x ∝ sin^2(2θ). We used triangular flakes of monolayer WS2 (6Carbon) or MoS2 (2DLayer), both grown by chemical vapor deposition and with zigzag directions along the crystal edges, to calibrate the SHG setup. The 2D WSe2/MoSe2 heterobilayer was prepared by the polymer-free van der Waals assembly technique (42). A transparent polydimethylsiloxane stamp coated with a thin layer of polypropylene carbonate (PPC) was used to pick up a thin layer of exfoliated h-BN. This h-BN was then used to pick up the first TMDC monolayer. 
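The quoted angular dependences imply a sixfold pattern (period 60°) when rotating the sample and a fourfold pattern (period 90°) when rotating the polarization. A quick numerical sketch confirming the symmetry of the two patterns:

```python
import numpy as np

theta = np.radians(np.arange(0.0, 360.0, 1.0))  # azimuthal angle, 1-degree steps

I_perp = np.cos(3.0 * theta) ** 2   # sample rotation: I_perp ~ cos^2(3*theta)
I_y = np.cos(2.0 * theta) ** 2      # polarization rotation: I_y ~ cos^2(2*theta)

def count_maxima(intensity):
    """Count local maxima over one full turn (circular boundary)."""
    left = np.roll(intensity, 1)
    right = np.roll(intensity, -1)
    return int(np.sum((intensity > left) & (intensity > right)))

print(count_maxima(I_perp), count_maxima(I_y))  # sixfold vs fourfold lobes
```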
The second TMDC monolayer was aligned to and picked up by the first monolayer on a high-precision rotation stage. The heterostructure was finally stamped onto a thicker layer of h-BN and detached from the PPC at elevated temperatures (90° to 120°C). The residual PPC was washed away with acetone to give a clean h-BN/MoSe2/WSe2/h-BN heterostructure on the Si/SiO2 substrate. Figure S1 shows optical microscope images of the two BN/WSe2/MoSe2/BN heterobilayer samples used in the spectroscopy measurements shown in the main text. Figures S2 and S3 show the SHG polarization data used to determine the two alignment angles, θ = 4° ± 2° and 13° ± 2°, respectively. Steady-state and time-resolved PL measurements All spectroscopic measurements were performed on a home-built reflection microscope system based on a liquid-helium recirculating optical cryostat (Montana Instruments Fusion/X-Plane) with a 100×, NA 0.75 objective (Zeiss LD EC Epiplan-Neofluar 100×/0.75 HD DIC M27). The temperature of the sample stage could be varied between 3 and 350 K. In all experiments presented in this study, the TMDC heterobilayer and monolayer samples were at 4 K in a vacuum (<10^-6 torr) environment, unless otherwise noted. In steady-state PL measurements, a CW laser (532 nm) was focused by the objective to a diffraction-limited spot on the sample. The excitation power was measured by a calibrated power meter (OPHIR Star-Lite) with a broad dynamic range. The PL light was collected by the same objective, spectrally filtered, dispersed by a grating, and detected by an InGaAs photodiode array (PyLoN-IR, Princeton Instruments). The wavelength was calibrated with neon-argon and mercury atomic emission sources (IntelliCal, Princeton Instruments). 
The intensity was calibrated by three independent NIST-traceable light sources: a 400 to 1050-nm tungsten halogen lamp (StellarNet SL1-CAL), a 250 to 2400-nm quartz tungsten halogen lamp (Oriel 63355), and a 425 to 1000-nm LED (light-emitting diode) (IntelliCal, Princeton Instruments). In TRPL measurements, the pulsed excitation light (hν = 1.82 eV; pulse duration, 150 fs) was from the wavelength-tunable output of a visible optical parametric amplifier (Coherent OPA 9450) pumped by a Ti:sapphire regenerative amplifier (Coherent RegA 9050, 250 kHz, 800 nm, 100 fs). The interlayer PL emission in the 900 to 1000-nm region was selected and focused onto a single-photon avalanche photodiode (IDQ ID100-50). The TRPL trace was collected with a time-correlated single-photon counting module (Becker & Hickl GmbH SPC-130). The instrument response function, determined by collecting scattered laser light, has an FWHM of 100 ps (fig. S5). The time resolution of TRPL was estimated at ~20% of the FWHM, i.e., ~20 ps. Reflectance and transient reflectance measurements In reflectance measurements, broadband white light was directed to the sample through the objective, reflected, collected by the same objective, and detected by an InGaAs photodiode array (PyLoN-IR, Princeton Instruments). For the reflectance at the low-density limit, spectrally filtered and collimated white light from a 3200 K halogen lamp (KLS EKE/AL) was used. Reflectance was also taken for the white-light probe in the same geometry as transient reflectance to confirm that it was in the linear regime. A 150-nm gold film deposited by electron beam evaporation on the same Si/SiO2 substrate was used as a reflectance standard. 
In transient reflectance measurements, femtosecond laser pulses from the Ti:sapphire regenerative amplifier (Coherent RegA 9050, 250 kHz, 800 nm, 100 fs) were split into two beams: one was used to pump the visible optical parametric amplifier (Coherent OPA 9450) to generate tunable pump light, and the other was focused onto a sapphire crystal to generate the white-light continuum probe. The pump was then chirp-compensated by a prism pair, delayed by a motorized translation stage, modulated by an optical chopper, combined with the probe, and directed collinearly to the sample through the objective. To achieve homogeneous excitation, average over a sufficient area, and reduce nonlinear effects of the probe, both beams were focused onto the back focal plane of the objective to obtain a large beam diameter at the sample plane, unless otherwise specified. The reflected probe light was then collected by the same objective, spectrally filtered to remove pump light, and recorded with the InGaAs photodiode array (PyLoN-IR, Princeton Instruments). This detector was synchronized with the optical chopper through a home-made frequency doubler. At each pump-probe delay, the reflected probe spectra with and without the pump were recorded, and the transient reflectance (ΔR/R) was calculated. We determined the sign of the transient reflectance signal by recording the chopper output with a data acquisition board (National Instruments) triggered by the InGaAs detector. The chopper modulation frequency was selected to maximize the signal-to-noise ratio of the transient reflectance signal.
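The differential signal described above amounts to simple per-pixel bookkeeping on pump-on/pump-off spectral pairs. A minimal sketch of that calculation (array names are illustrative; this is not the actual acquisition code):

```python
import numpy as np

def transient_reflectance(r_pump_on, r_pump_off):
    """Per-pixel differential reflectance: dR/R = (R_on - R_off) / R_off."""
    r_on = np.asarray(r_pump_on, dtype=float)
    r_off = np.asarray(r_pump_off, dtype=float)
    return (r_on - r_off) / r_off

# Illustrative spectra (arbitrary units): a uniform +1% pump-induced change.
r_off = np.full(5, 0.200)
r_on = r_off * 1.01
dr_over_r = transient_reflectance(r_on, r_off)
print(dr_over_r)   # each pixel is ~0.01
```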
Immune thrombocytopenic purpura in ulcerative colitis: a case report and systematic review Over 100 extraintestinal manifestations have been reported in ulcerative colitis (UC). A commonly reported hematological manifestation is autoimmune hemolytic anemia. On rare occasions, immune thrombocytopenic purpura (ITP) has been reported with UC. The presence of thrombocytopenia can complicate the clinical scenario, as the number of bloody bowel movements is an important indicator of disease activity in UC. A proposed theory for this association is antigenic mimicry between a platelet surface antigen and a bacterial glycoprotein. We report a case of UC and associated ITP managed successfully with anti-TNF therapy. We also performed a systematic review of case reports and case series reporting this association. A 26-year-old African American female with a history of genital herpes developed hematochezia 6 weeks prior to hospital admission. She was seen at an urgent care center and prescribed a 1-week course of oral antibiotics. The patient continued to have hematochezia, for which she was scheduled to have an outpatient colonoscopy. Her hematochezia worsened over the next 2 weeks and she was admitted to an outside hospital, where she was managed for a presumed flare of ulcerative colitis (UC) based on clinical diagnosis. The patient was treated with oral steroids and intravenous antibiotics and discharged on a tapering course of steroids for 1 week. During that hospitalization, her platelet count was 236 × 10^3/mm^3. She experienced partial improvement in her symptoms with steroid use, but shortly after stopping steroids her symptoms worsened. She presented to our hospital emergency department with 2 days of 8-9 bloody bowel movements per day and mild left-lower-quadrant abdominal pain, with no fever or chills, but reported a 20-pound weight loss. On admission, her white cell count was 10,370/mm^3, hemoglobin 11.4 g/dL, and platelet count 76 × 10^3/mm^3. 
Intravenous methylprednisolone was administered. Her medications were reviewed for possible drug-induced thrombocytopenia, and there was no reported history of alcohol use. HIV and H. pylori serologies and blood cultures were negative. Colonoscopy showed pancolitis with histopathology consistent with UC, without any viral inclusions. On day 5 of hospitalization, her platelet count decreased to 5 × 10^3/mm^3, the frequency of her bloody bowel movements increased, and she became febrile and tachycardic. The patient was started on intravenous ciprofloxacin plus metronidazole, and intravenous steroids were continued. The peripheral smear showed a few large platelets, and the patient did not have splenomegaly. These findings suggested the possibility of immune-mediated peripheral destruction of platelets. Intravenous immunoglobulin (IVIG) was administered and one unit of platelets transfused. On days 6 and 7, platelet counts improved to 24 × 10^3/mm^3 and 36 × 10^3/mm^3, respectively, but the patient continued having bloody bowel movements. Positive platelet-associated antibodies confirmed the diagnosis of immune thrombocytopenic purpura (ITP). Because of the lack of response to steroids, an infliximab infusion of 5 mg/kg was administered on day 7. On days 8 and 9, the frequency of bloody stools decreased significantly and the platelet count continued to improve, reaching 94 × 10^3/mm^3 on day 9. At that point, the patient was discharged on a tapering dose of steroids with scheduled infliximab infusions. On outpatient follow-up 10 days after discharge, her platelet count was normal at 151 × 10^3/mm^3 and she was free of hematochezia. Systematic review Search and data compilation A comprehensive search of two major databases of biomedical publications was performed during the last week of August 2013. No age or language restrictions were applied. A summary of our search strategy is described in the Appendix. Titles and abstracts were reviewed to identify cases. 
The references of eligible articles were hand-searched to identify additional cases. All adult and pediatric case reports and series reporting UC associated with ITP were included. Data points were extracted based on the best information reported. Results Cases of ITP associated with UC were first reported in 1963 (1). Since then, a total of 40 cases (including the above-mentioned case) have been identified, seven of them in the pediatric age group. Table 1 summarizes patient demographics and management of UC and associated ITP. Fifty-six percent of cases were male, and the median age at presentation was 27 years (interquartile range 14-42 years). Median age at presentation was higher in females (41 vs. 22 years), but the difference was not statistically significant (p = 0.0718). As shown in Table 1, 52% of patients were white and 45% were Asian (mostly of Japanese origin). The current report documents the first case in an African American patient. In the majority of cases, ITP resolved with treatment of the UC flare. IVIG or anti-D antibodies were used in 15 cases; the response was adequate and lasting in 11 of them. Among the remaining four patients, one responded dramatically to 5-ASA; in two cases, ITP was resistant to both IVIG and splenectomy and required a colectomy; and in one case, colectomy and splenectomy were performed together, which improved the ITP. Ten patients underwent a colectomy; one of them had had a colectomy some years prior to the development of ITP (2). Of the remaining nine cases, eight responded well, but one patient continued to have recurrent ITP despite colectomy (3). In one case, ITP resolved with H. pylori eradication (4). Discussion The development of ITP adds complexity to the clinical course of a UC flare, as the number of bloody bowel movements is one of the important criteria used to assess disease severity. Since 1963, a total of 40 cases have reported an association of ITP with UC. 
The rarity of this occurrence limits methodologically sound studies that could establish a causal relationship between the two disorders. In disease epidemiology, Sir Austin Bradford Hill proposed criteria for causation, also known as Hill's criteria (5). The results of this systematic review elicit multiple interesting observations that generate a hypothesis of a causal relationship between UC and ITP. 1) In most cases, UC preceded ITP, which demonstrates a temporal relationship between UC (exposure) and ITP (effect). Only three cases have been reported in which ITP preceded UC (6,7). In two of the cases, ITP preceded UC by just 18 months. This could be a result of delayed diagnosis or subclinical disease, which is not uncommon in UC. In another case, ITP preceded UC by 18 years, which appears to be a random concurrence of the two disorders. 2) The platelet count was lowest during the flare of UC, demonstrating a biological gradient. 3) In most cases, treatment of UC resolved the ITP, analogous to removal of the exposure leading to reversal of the effect. The biological plausibility of ITP development in patients with UC is hypothesized to lie in antigenic mimicry between platelet surface antigens and luminal antigens, including bacterial surface antigens. Increased exposure to luminal antigens is thought to be the result of mucosal injury. This is also postulated for the association of ITP with Crohn's disease (8). The results of our systematic review suggest that the peak age for this association is in the third decade of life, with a trend towards earlier occurrence in males. Cases have been reported both in pediatric patients and in adults above 60 years of age. About half of the cases were reported in the Japanese population. We report the first case in the African American population. In the management of this coexistence, treatment of UC is the cornerstone. In severe cases of thrombocytopenia, IVIG or anti-D antibodies in combination with 5-ASA and/or steroids are effective in most cases. 
These patients should also be screened for H. pylori, and the infection should be eradicated if present. Refractory cases respond to colectomy and splenectomy, but these are rarely necessary (4). We report the second case in which a colectomy was avoided by using anti-tumor necrosis factor therapy (9). In conclusion, ITP appears to be an extraintestinal manifestation of UC. The proposed pathogenesis is antigenic mimicry between luminal antigens and platelet surface antigens. Treatment of the underlying UC flare is the cornerstone of managing the condition, and in severe cases of ITP, IVIG is effective. Though colectomy has been proposed as a definitive treatment option, the use of biological agents is an acceptable alternative in a steroid-resistant case of UC associated with ITP.
Relationship between Diabetic Retinopathy and Diabetic Nephropathy Diabetic nephropathy is accountable for nearly a third of the world's cases of end-stage renal disease; it has become a major public health problem with social and economic burden. The aim of this study was to assess the relationship between Diabetic Retinopathy and Diabetic Nephropathy in Type II Diabetes Mellitus patients. The present study was a cross-sectional study conducted in the Department of Ophthalmology at BIRDEM General Hospital, Dhaka, over a period of 12 months from March 2018 to February 2019, in which patients were assessed for the relationship between retinopathy and nephropathy. All Type II Diabetes Mellitus patients with Diabetic Retinopathy and Diabetic Nephropathy were included in the study. The majority (64.0%) of patients had diabetic nephropathy and 36 (36.0%) did not. Diabetic retinopathy was found in almost three-fourths (73.4%) of patients with diabetic nephropathy and in 27 (54.0%) of those without. The difference between the two groups was statistically significant (p<0.05). This study suggests that Diabetic Nephropathy has a significant association with the presence of Diabetic Retinopathy in persons with Type II DM. INTRODUCTION Diabetes mellitus is one of the most common metabolic disorders, of several etiologies. The multisystem effects of diabetes, such as nephropathy, retinopathy, neuropathy and cardiovascular diseases, have a significant impact on working-age individuals in our country. 1 Diabetic nephropathy is accountable for almost a third of the world's cases of end-stage renal disease; it is a foremost public health problem that also carries social and financial burden. 2 Diabetes is a multisystem disorder that can affect both the eyes and the kidneys. Glomerular filtration rate (GFR) and microalbuminuria are clinically important markers for the assessment of renal function. 3 Diabetic nephropathy is defined as a GFR less than 60 mL/min in the presence of proteinuria. 
4 Duration of disease is the most important risk factor; type 1 DM patients express diabetic retinopathic changes after a typical period of 3-5 years from the onset of systemic disease. In type 2 DM patients, the time of onset, and therefore the duration, is more complicated to determine accurately, so newly diagnosed type 2 DM patients may present with retinopathy as the initial sign of DM. METHODOLOGY The study was a cross-sectional study conducted in the Department of Ophthalmology at BIRDEM General Hospital, Dhaka, over a period of 12 months from March 2018 to February 2019, in which patients were evaluated for the association between retinopathy and nephropathy. Inclusion criteria: all Type II Diabetes Mellitus patients with Diabetic Retinopathy and Diabetic Nephropathy. Exclusion criteria: patients with Type 1 Diabetes Mellitus, retinopathy due to other causes, and nephropathy due to other causes. A total of 100 cases were studied over 3 years. Relevant assessments, including slit-lamp biomicroscopy, visual acuity, fundoscopy by direct and indirect ophthalmoscope, blood parameters, urine albumin, FFA, 24-hour urinary protein and renal biopsy, were done. In this study, sixty-nine (69.0%) patients had hypertension and 21 (21.0%) were smokers. Mean BMI was 26.0±3.0 kg/m^2, FBS was 7.5±2.8 mmol/L, 2HABS was 11.7±4.8 mmol/L, HbA1c was 7.4±1.8 percent, systolic blood pressure was 135.8±21.7 mmHg, diastolic blood pressure was 81.9±11.9 mmHg, triglycerides were 180.9±97.2 mg/dL, total cholesterol was 192.1±31.6 mg/dL, LDL was 104.7±34.3 mg/dL, eGFR was 42.2±38.3 mL/min/1.73 m^2, and serum creatinine was 1.8±0.9 mg/dL. Lee et al. 5 reported that 73.0% of patients had hypertension and 18.70% were smokers. 
FBS was 144.8±43.6 mg/dL, HbA1c was 7.56±1.50 percent, systolic blood pressure was 132.7±17.8 mmHg, diastolic blood pressure was 76.3±13.2 mmHg, triglycerides were 180.3±127.9 mg/dL, total cholesterol was 186.3±37.8 mg/dL, LDL was 105.2±33.9 mg/dL, eGFR was 83.36±22.70 mL/min/1.73 m^2, and serum creatinine was 0.93±0.45 mg/dL. Chen et al. 11 examined the ability of microalbuminuria and moderately reduced GFR to predict the development of retinopathy among 487 type 2 diabetic patients. During a mean follow-up of 6.6 years, they found that patients with microalbuminuria and estimated GFR >60 mL/min/1.73 m^2 had a threefold increase in risk compared with those with normoalbuminuria and estimated GFR 30-59.9 mL/min/1.73 m^2. Reddy et al. 1 In the present study, diabetic retinopathy was found in almost three-fourths (73.4%) of patients with diabetic nephropathy and in 27 (54.0%) of those without; the difference was statistically significant (p<0.05) between the two groups. Ahmed et al. 2 reported that the frequency of nephropathy among individuals with retinopathy was 35.6%; their regression model analysis showed a significant association between nephropathy and the development of retinopathy. Lee et al. 5 found that the association between DR (both DR itself and PDR) and DN (both microalbuminuria and overt nephropathy) was significant in the univariate χ^2 test. A number of studies provide evidence that DR may be independently associated with the development of microalbuminuria and hence be a powerful predictor of the progression of renal damage in DM patients. [12][13][14][15] Multivariate logistic regression showed that patients with DR were 4.37 times as likely to have DN as those without DR. Schmechel and Heinrich 16 indicated that patients with DR exhibited proteinuria more commonly than those without DR. Villar et al. 
13 also demonstrated that DR was one of the most important risk factors for the development of incipient nephropathy in normoalbuminuric, normotensive patients with either type 1 or type 2 DM. Different studies have shown that the prevalence of PDR, rather than DR itself, is a risk factor for DN (microalbuminuria 8,17,18 and overt nephropathy 8,18 ). Chen et al. 19 reported that a microalbuminuria threshold of 10.7 mg/24 h, which is within the conventional 'normal range', can predict an increased risk of diabetic retinopathy development. CONCLUSIONS This study found that Diabetic Nephropathy has a significant association with the occurrence of Diabetic Retinopathy in persons with Type II DM.
Nrf2 pathway activation contributes to anti-fibrosis effects of ginsenoside Rg1 in a rat model of alcohol- and CCl4-induced hepatic fibrosis AIM To investigate the anti-fibrosis effects of ginsenoside Rg1 on alcohol- and CCl4-induced hepatic fibrosis in rats and to explore the mechanisms of the effects. METHODS Rats were given 6% alcohol in water and injected with CCl4 (2 mL/kg, sc) twice a week for 8 weeks. Rg1 (10, 20 and 40 mg/kg per day, po) was administered in the last 2 weeks. Hepatic fibrosis was determined by measuring serum biochemical parameters, HE staining, Masson's trichrome staining, and hydroxyproline and α-SMA immunohistochemical staining of liver tissues. The activities of antioxidant enzymes, lipid peroxidation, and Nrf2 signaling pathway-related proteins (Nrf2, HO-1 and NQO1) in liver tissues were analyzed. Cultured hepatic stellate cells (HSCs) of rats were prepared for in vitro studies. RESULTS In the alcohol- and CCl4-treated rats, Rg1 administration dose-dependently suppressed the marked increases of serum ALT, AST, LDH and ALP levels, inhibited liver inflammation and HSC activation, and reduced liver fibrosis scores. Rg1 significantly increased the activities of antioxidant enzymes (SOD, GSH-Px and CAT) and reduced MDA levels in liver tissues. Furthermore, Rg1 significantly increased the expression and nuclear translocation of Nrf2, which regulates the expression of many antioxidant enzymes. Treatment of the cultured HSCs with Rg1 (1 μmol/L) induced Nrf2 translocation, suppressed CCl4-induced cell proliferation, and reversed CCl4-induced changes in MDA, GSH-Px, PCIII and HA contents in the supernatant fluid and in α-SMA expression in the cells. Knockdown of the Nrf2 gene diminished these actions of Rg1 in CCl4-treated HSCs in vitro. CONCLUSION Rg1 exerts protective effects in a rat model of alcohol- and CCl4-induced hepatic fibrosis via promoting the nuclear translocation of Nrf2 and the expression of antioxidant enzymes. 
Introduction Hepatic fibrosis, a pathological outcome of the wound-healing response of the liver to repeated injury, and its end stage, cirrhosis, which is associated with an increased risk of liver failure, portal hypertension and liver cancer [1], are of great concern worldwide because of the associated high morbidity and mortality [2][3][4]. Hepatic fibrosis results from an imbalance between extracellular matrix (ECM) synthesis and degradation, which causes accumulation of ECM deposition. The activation of hepatic stellate cells (HSCs) plays a crucial role in the pathogenesis of hepatic fibrosis [5]. Activated HSCs can transform into myofibroblast-like cells, expressing α-smooth muscle actin (α-SMA) and secreting ECM composed of various proteoglycans and proteins. Reactive oxygen species (ROS) play an important role in the activation of HSCs, and a review emphasized that ROS contribute to both the onset and the progression of liver fibrosis [6]. CCl4 is a widely used hepatotoxin for animal models of liver fibrosis. It can be activated to the trichloromethyl radical (·CCl3) and trichloromethyl peroxy radical (·OOCCl3) by CYP450 in the liver and stimulates Kupffer cells to produce ROS, which damage the liver [7,8]. In addition to inducing oxidative stress, alcohol may upregulate the activity of P450 2E1 (CYP2E1), which catalyzes the conversion of CCl4 into ·CCl3 [9,10]. The combination of the two hepatotoxins accelerates the ROS damage. Antioxidant enzymes such as superoxide dismutase (SOD), glutathione peroxidase (GSH-Px) and catalase (CAT) have an important role in the elimination of ROS. The upregulation of many antioxidant enzymes in the liver is mediated by Nrf2. Several studies have shown that Nrf2 plays a protective role in CCl4-induced liver fibrosis by regulating antioxidant enzyme activity and the expression of downstream genes [11][12][13]. 
Ginseng, the root of Panax ginseng C.A. Meyer, has been a key component of traditional Chinese medicine for more than 1000 years [14]. The traditional beneficial effects of ginseng are replenishment of vital energy, longevity and mood elevation [15]. The molecular constituents responsible for the action of ginseng are ginsenosides, among which Rg1 is the most abundant and active ingredient of P. ginseng [14,16]. Several studies have shown that Rg1 or ginseng extract has a liver-protective effect [16][17][18][19][20][21][22]. These findings strongly indicate that Rg1 is a potent antifibrotic agent. The aim of the present study was to investigate the antifibrotic effects of Rg1 on alcohol- and CCl4-induced hepatic fibrosis in rats. We demonstrate that Rg1 inhibits hepatic inflammation and HSC activation in the pathogenesis of hepatic fibrosis. Rg1 effectively protects against alcohol- and CCl4-induced hepatic fibrosis, and the mechanism is related to activation of the Nrf2 pathway. Materials and methods Chemicals and reagents Rg1 (HPLC 98%) was obtained from Jilin University (Jilin, China). Bicyclol tablets were obtained from Beijing Union Pharmaceutical Factory (Beijing, China). CCl4 was purchased from Beijing Chemical Works (Beijing, China); olive oil was obtained from WILMAR Edible Oils BV (Shenzhen, China). Animals and experimental design Seven-week-old male Wistar rats weighing 200-220 g (SPF Experimental Animal's Science and Technology Co, Ltd, Beijing, China) were housed under standard environmental conditions and allowed free access to a commercial standard rodent diet and water ad libitum. The rats were maintained under constant conditions (23±2 °C, 55%±5% humidity and a 12-h light-dark cycle) in an air-conditioned room. 
All animals were handled in accordance with the standards established in the Guide for the Care and Use of Laboratory Animals published by the Institute of Laboratory Animal Resources of the National Research Council (United States) and approved by the Animal Care Committee of the Peking Union Medical College and the Chinese Academy of Medical Sciences. Animals were randomly divided into three main groups: group A, group B and group C. Group A had six sub-groups with six to ten rats per group as follows: group 1, control; group 2, CCl4 group; group 3, CCl4 plus Rg1 10 mg/kg; group 4, CCl4 plus Rg1 20 mg/kg; group 5, CCl4 plus Rg1 40 mg/kg; and group 6, CCl4 plus Bicyclol 200 mg/kg. Rg1 and Bicyclol were prepared in a vehicle (double-distilled water, DDW), and 50% CCl4 was dissolved in olive oil. Group 1 was treated subcutaneously with olive oil, and groups 2-6 were treated subcutaneously with 50% CCl4 at a dose of 2 mL/kg of body weight twice a week for 8 weeks. Six weeks after treatment, group 1 was administered the vehicle, and groups 2-6 were administered Rg1 or Bicyclol every day for 14 d. Group 1 was given normal water, and groups 2-6 were given 6% (v/v) alcohol in water during the entire experiment. The setup of group B was the same as that of group A except for the CCl4 treatment. Group B had six sub-groups with 10 rats per group as follows: group 1, control; group 2, alcohol water group; group 3, Rg1 10 mg/kg; group 4, Rg1 20 mg/kg; group 5, Rg1 40 mg/kg; and group 6, Bicyclol 200 mg/kg. Six weeks after treatment with 6% (v/v) alcohol water, group 1 was administered vehicle, and groups 2-6 were administered Rg1 or Bicyclol every day for 14 d. Group 1 was given normal water, and groups 2-6 were given 6% (v/v) alcohol water during the entire experiment. Group C had two sub-groups with 10 rats per group as follows: group 1, control; group 2, Rg1 40 mg/kg group. Group 1 was treated with DDW and group 2 with Rg1 orally once daily for 14 d.
All groups were given normal water.
Liver function evaluation
Blood samples were obtained 72 h after the last CCl4 injection from the postcaval vein after the animals had been anaesthetized with ether. Serum alanine aminotransferase (ALT), aspartate aminotransferase (AST), lactate dehydrogenase (LDH) and alkaline phosphatase (ALP) were measured with an automatic biochemistry analyzer (TBA-40FR, TOSHIBA, Japan).
Histopathological examination and immunohistochemistry
Excised livers were fixed in 4% buffered paraformaldehyde for 24 h, embedded in paraffin, and cut into 4-μm-thick sections, which were stained with hematoxylin-eosin (H&E) or Masson's trichrome, or processed for immunohistochemistry (α-SMA). To evaluate the histopathological changes, the stained tissue samples were examined under a light microscope. An arbitrary score was given to each microscopic field viewed at magnifications of ×40-200. At least 10 fields were scored per liver section to obtain the mean value. Chronic CCl4-induced changes in hepatic inflammation, ballooning degeneration and fibrosis were evaluated according to the scoring system proposed by Thompson [23]. Hepatic inflammation was scored as follows: Score 0: absent; Score 1: small number of cells present at the junction of the necrotic zone; Score 2: normal number of cells present; Score 3: predominantly neutrophils present; Score 4: predominantly mononuclear cells present. Fibrosis extent was graded as:
www.chinaphar.com Li JP et al Acta Pharmacologica Sinica npg
Score 0: absent; Score 1: thin septa present; Score 2: thin septa present linking hepatic veins; Score 3: broad/well-developed septa; Score 4: cirrhosis. All specimens were scored by three pathologists, who were blinded to the scoring of the other pathologists. The results were analyzed with SPSS 13.0 using a nonparametric ranking analysis (Kruskal-Wallis test).
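The nonparametric ranking analysis described above can be reproduced with SciPy's Kruskal-Wallis test. A minimal sketch with hypothetical scores (the real data are the blinded pathologists' scores; the group values below are illustrative only):

```python
from scipy.stats import kruskal

# Hypothetical inflammation scores (0-4) for three groups of rats.
control = [0, 0, 1, 0]
model   = [3, 3, 4, 3]
rg1_40  = [2, 1, 2, 2]

# Kruskal-Wallis H test: nonparametric one-way analysis on ranks.
stat, p = kruskal(control, model, rg1_40)
print(f"H = {stat:.2f}, p = {p:.4f}")  # groups differ if p < 0.05
```

The same call generalizes to any number of groups, matching the multi-group comparison reported in the Results.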
Alcohol dehydrogenase (ADH) activity in liver tissues
A 10% liver homogenate was used for the determination of ADH activity in liver tissues. ADH activity was detected colorimetrically with a UV-visible spectrophotometer using commercial kits (Nanjing Jian Cheng Bioengineering Institute, Nanjing, China) according to the manufacturer's protocol.
Assay of hydroxyproline in liver tissues
Hepatic hydroxyproline content was measured using a commercial kit (Nanjing Jian Cheng Bioengineering Institute, Nanjing, China) according to the manufacturer's instructions. Briefly, 100 mg of liver was hydrolyzed by alkali at 95 °C for 20 min. After cooling in water, the pH was adjusted to 6.0-6.8. A total of 10 mL of double-distilled water was added to each tube, mixed with powdered activated carbon, and centrifuged at 1100×g for 10 min. After combining with the designated reagents and incubating at 60 °C for 15 min, the mixture was centrifuged at 3500 r/min for 10 min. The absorbance of the supernatant was read at 550 nm. The hydroxyproline content in each sample was determined from a standard curve and was expressed as micrograms per gram of wet weight (μg/g).
Lipid peroxidation assessment
A 10% liver homogenate [tissue weight (mg): saline (μL) = 1:9] was used for the determination of the levels of MDA according to the protocols of a commercially available kit (Nanjing Jian Cheng Bioengineering Institute, Nanjing, China). After centrifugation at 3000×g for 15 min at 4 °C (2-16PK, SIGMA, Germany), the supernatants were collected to detect the content of MDA. This assay is based on the reaction of MDA with thiobarbituric acid (TBA). The procedure was performed as previously described [24].
Antioxidant enzyme activities in liver tissues
A 10% liver homogenate was used for the determination of the levels of SOD, CAT, and GPx activities in liver tissues.
The levels were detected colorimetrically with a UV-visible spectrophotometer using commercial kits (Nanjing Jian Cheng Bioengineering Institute, Nanjing, China) according to the manufacturer's protocol, as previously described [24].
Preparation of the nuclear protein fraction
The nuclear protein fraction was prepared from rat liver tissue using a nuclear-cytosol extraction kit (Applygen Technologies Inc, Beijing, China) according to the manufacturer's instructions. Minced tissue was washed with PBS twice and treated with cell lysis buffer. Tissue was homogenized for 20-40 strokes and nuclei were visualized under a microscope. After centrifugation, the nuclear pellet was washed and lysed with nuclear lysis reagent. After centrifugation, the cleared supernatant was used as the nuclear protein extract. The protein concentrations of the nuclear lysates were measured and adjusted to be the same. After mixing with 4× sample buffer containing β-mercaptoethanol, the samples were heat-denatured at 90 °C for 10 min.
Western blotting analysis
Total protein samples were prepared according to a standard protocol. The total protein was used for the detection of α-SMA, SOD, HO-1, and NQO1. The protein levels were determined using a BCA assay kit (Applygen Technologies Inc, Beijing, China). Proteins were separated by SDS-polyacrylamide gel electrophoresis, transferred to a PVDF membrane (Millipore, MA, USA), and blocked with 5% BSA in Tris-buffered saline containing 0.5% Tween 20. Target proteins were detected with the corresponding primary antibodies, followed by horseradish peroxidase-conjugated secondary antibodies. Protein bands were visualized using a chemiluminescence reagent (Applygen Technologies Inc, Beijing, China). Equivalent loading was confirmed using an antibody against β-actin or Histone H3 (for nuclear Nrf2 protein).
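Band densities are normalized to the loading control confirmed above before quantification. A minimal sketch with hypothetical densitometry values (the function name and numbers are my own, for illustration):

```python
def fold_change(target, loading, control_idx=0):
    """Band densities normalized to the loading control (e.g., beta-actin),
    expressed as fold change relative to the control lane."""
    ratios = [t / l for t, l in zip(target, loading)]
    return [r / ratios[control_idx] for r in ratios]

# Hypothetical densitometry values; control lane first.
folds = fold_change([100.0, 300.0], [50.0, 60.0])
print(folds)  # -> [1.0, 2.5]
```

The control lane is 1.0 by construction; treated lanes express relative up- or downregulation.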
The variation in the density of bands was expressed as fold changes compared to the control in the blot after normalization to β-actin or Histone H3.
MTT assay
HSCs were isolated from male Wistar rats (normal rats). The isolation and culture of HSCs were carried out as previously described [21]. HSCs were seeded in a 96-well plate at an initial density of 2×10⁵ cells/mL. After 24 h, the cells were treated with control (0.1% DMSO) or CCl4 (10 mmol/L) and Rg1 (10⁻⁸, 10⁻⁷, and 10⁻⁶ mol/L) solutions for 24 h. Following treatment, the cells were incubated with 5 mg/mL MTT tetrazolium (10 μL/well) for 4 h at 37 °C. The reaction was terminated by the addition of 100 μL DMSO. The absorbance of the dissolved formazan crystals within the cells was measured at 570 nm using a microplate reader (SpectraMax-190, Molecular Devices, USA).
Statistical analysis
Results are expressed as the mean±SD. SPSS 13.0 statistical software was used for statistical analysis. Statistical evaluation was performed using one-way analysis of variance (ANOVA) followed by a Tukey test. The results of the histopathological scoring system were analyzed with SPSS 13.0 using a nonparametric ranking analysis (Kruskal-Wallis test). Statistical significance was set at P<0.05.
Results
Rg1 alters serum biochemical parameters
Increases in serum ALT, AST, LDH, and ALP indicate damage to the liver [25]. To evaluate the degree of liver damage, the serum levels of several functional liver enzymes were determined. Compared with the control group, the serum levels of ALT, AST, LDH, and ALP in the HF rats were profoundly increased (each P<0.01, Table 1).
Rg1 ameliorates liver pathology
To determine the protective effects of Rg1 against CCl4-induced injury, we conducted histological examination of the extent of hepatic injury. According to microscopic examination, the administration of Rg1 alleviated the severe hepatic lesions induced by alcohol and CCl4.
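The MTT readout and the one-way ANOVA described in the statistical analysis section can be sketched as follows (hypothetical A570 values; SciPy's `f_oneway` implements the ANOVA, while recent SciPy versions also provide `scipy.stats.tukey_hsd` for the post hoc comparison):

```python
from scipy.stats import f_oneway

def viability_percent(a570_treated, a570_control_mean):
    """Formazan absorbance at 570 nm expressed relative to the control mean."""
    return [100.0 * a / a570_control_mean for a in a570_treated]

# Hypothetical A570 replicates per group (illustrative, not measured data).
control = [1.00, 0.98, 1.02, 1.01]
ccl4    = [0.55, 0.58, 0.52, 0.56]
rg1     = [0.80, 0.83, 0.78, 0.81]

f, p = f_oneway(control, ccl4, rg1)
print(f"F = {f:.1f}, p = {p:.2e}")  # significant difference if p < 0.05
```

A significant omnibus P value is then followed by the pairwise Tukey test, as in the study's protocol.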
Control rats had no pathological changes in either the lateral or median lobes of the liver (Figure 1A). Rats in the untreated fibrosis group had degenerative changes in the liver: centrilobular necrosis including ballooning of hepatocytes, deposition of lipid droplets in hepatocytes and infiltration of inflammatory cells, as well as collagen deposition [17]. In the Rg1 10 mg/kg group, severe hepatocyte necrosis and ballooning degeneration were observed, as well as numerous inflammatory cells around the necrotic tissue (Figure 1D). Moderate hydropic degeneration of hepatocytes was seen in the Rg1 20 mg/kg group. In the Rg1 40 mg/kg group, hepatocyte necrosis nearly disappeared (Figure 1F), showing a significant reduction in necrosis and hydropic degeneration. The inflammation score of each group was evaluated. As shown in Figure 1G, the inflammation score was 0.18±0.01 in the control group, while in the CCl4 group it was markedly increased (3.17±0.32, P<0.01). In contrast, compared with the untreated fibrosis group, treatment with Rg1 (10, 20, and 40 mg/kg) significantly reduced the inflammation scores (2.55±0.33, 2.05±0.19, and 1.62±0.50; P<0.05, P<0.01, and P<0.01, respectively). The ballooning degeneration score (Figure 1H) was also significantly reduced by Rg1 (20 mg/kg, 1.95±0.20 and 40 mg/kg, 1.45±0.18; both P<0.01) compared with rats in the untreated fibrosis group (2.53±0.19).
Rg1 ameliorates hepatic fibrosis
Hepatic fibrosis was evaluated by Masson's trichrome staining. According to microscopic examinations, obvious bridging fibrosis was observed in the livers of untreated rats: masses of collagen deposition surrounded the portal area, and interlacing fibrous septa divided the parenchyma into a large number of false lobules (Figure 2B). Rg1 significantly alleviated the extent of hepatic fibrosis (Figure 2D-2F).
In the Rg1 20 mg/kg group (Figure 2E), liver fibrosis was mildly diminished, and it nearly disappeared in the 40 mg/kg group (Figure 2F). Further quantitative analysis confirmed this trend (Figure 2G). The level of hydroxyproline was 70.2±19.5 μg/g liver in the control group. In the CCl4 group, the hydroxyproline content was markedly increased, to 4.7 times that of the control group (P<0.01). In contrast, treatment with Rg1 significantly reduced the increase in hydroxyproline content at 10, 20, and 40 mg/kg (224.5±47.2, 170.0±59.4, and 118.4±36.3 μg/g liver, each P<0.01; Figure 2H).
Rg1 attenuates HSC activation
The protective effect of Rg1 in hepatic fibrosis is associated with the inhibition of HSC activation, which was determined by the inhibition of the α-SMA+ myofibroblast transition. Almost no expression of α-SMA was observed in the control rat livers (Figure 3A). Many α-SMA-positive cells appeared in untreated fibrotic liver, and the positive areas were connected, dividing the liver into many annular lobules (Figure 3B). Rg1 reduced the α-SMA-positive area in a dose-dependent manner (Figure 3D-3F). Much of the α-SMA expression remained evident in the Rg1 10 mg/kg group, spreading from the portal area and linking positive areas together, occasionally forming false lobules (Figure 3D). Most of the α-SMA-positive areas in the Rg1 20 mg/kg group were concentrated around the portal area, with rare false lobule structures. In the Rg1 40 mg/kg group, almost all of the α-SMA-positive areas were concentrated around the portal area, with a shorter distance and a smaller area, and no false lobule structures were observed. Analysis of the α-SMA-positive area showed that rats administered 20 and 40 mg/kg Rg1 had significantly less α-SMA+ cell accumulation (6.33%±2.08% and 4.21%±1.97%; P<0.05, P<0.01) than the untreated fibrosis group (9.26%±2.28%, P<0.01 vs control group; Figure 3G).
Western blot analysis showed a similar trend to the α-SMA-positive area analysis, revealing that α-SMA levels were 50 times greater in CCl4-treated rats than in control rats. CCl4-treated rats receiving Rg1 (10, 20, and 40 mg/kg) had significantly lower α-SMA levels (Figure 3H).
Effect of Rg1 on CYP2E1 mRNA and ADH activity in the liver
CYP2E1 is the most important enzyme of the liver cytochrome P450 system related to alcohol metabolism. ADH activity directly reflects the ability to eliminate alcohol. In the present study, we measured the CYP2E1 mRNA level and ADH activity in liver tissues. The average daily ethanol intake of the rats was 3.8-5.6 g/kg during the modeling period. There was no significant difference among the ethanol water groups (Figure 4A). Compared with the control group, the levels of CYP2E1 mRNA in the alcohol-only group and the Rg1 groups showed no significant difference (Figure 4B). There was also no distinct difference in ADH activity among these groups (Figure 4C).
Rg1 decreased lipid peroxidation and modified antioxidant enzyme activity
The levels of MDA were monitored to evaluate the effect of Rg1 treatment on alcohol- and CCl4-induced liver lipid peroxidation. A significant increase in MDA showed that oxidative damage was induced in the untreated fibrosis group (8.61±0.94 nmol/mg protein, P<0.001, Figure 5A). Treatment with Rg1 (10, 20, and 40 mg/kg) significantly decreased lipid peroxidation compared with the model group (7.14±0.65, 6.61±0.41, and 6.30±0.35 nmol/mg protein; P<0.05, P<0.01 and P<0.001, respectively; Figure 5A). SOD, GSH-Px, and CAT play pivotal roles in the scavenging of free radicals and the prevention of liver damage caused by ROS. The activities of these antioxidant enzymes were measured to evaluate the relationship between the antifibrotic and antioxidant effects of Rg1. As shown in Figure 5B and 5C, the activities of GSH-Px and CAT were lower in the untreated fibrosis group and were restored by Rg1 treatment.
Interestingly, the SOD activity of the model group was higher than that of the control group (293±46 U/g tissue, P<0.05; Figure 5D), while those of the Rg1-treated groups were even higher than the untreated fibrosis group (20 mg/kg, 461±40 U/g tissue, P<0.05; 40 mg/kg, 473±26 U/g tissue, P<0.01; Figure 5D). This result was confirmed by Western blot detection of SOD in the liver (Figure 5E).
Rg1 activates the Nrf2 pathway
Nrf2 plays an important role in the activation of antioxidant enzymes by regulating their transcription [26]. As shown in Figure 6A, the nuclear Nrf2 protein of the untreated fibrosis rats showed slightly higher expression compared to the control group, but without statistical significance. Compared with the control rats, the rats given Rg1 (10, 20, and 40 mg/kg) had markedly increased levels of Nrf2 (each P<0.01; Figure 6A). When compared to the untreated fibrosis rats, the Rg1 groups (10, 20, and 40 mg/kg) had significantly increased levels of Nrf2 (P<0.05, P<0.01, and P<0.01, respectively; Figure 6A). This result implies that Rg1 promotes the nuclear translocation of the Nrf2 protein, which leads to recovery from the alcohol-mediated acceleration of CCl4-induced liver fibrosis. The expression of Ho-1 and Nqo1 showed the same tendency: Ho-1 was dramatically increased in the three Rg1 groups (P<0.01, Figure 6A), as was Nqo1 (P<0.05, Figure 6A), in comparison to the expression in the control group. We also measured the level of nuclear Nrf2 protein in rats treated with Rg1 alone (40 mg/kg). Interestingly, we found that Nrf2 protein was significantly increased in the Rg1 administration group compared to the saline group (P<0.05, Figure 6B).
Nrf2 plays a key role in the anti-fibrotic mechanism of Rg1
CCl4-induced HSC proliferation was inhibited by Rg1 (10⁻⁷ and 10⁻⁶ mol/L) compared with that of the CCl4-only group (P<0.05 and P<0.01, respectively; Figure 7A).
As 10⁻⁶ mol/L Rg1 showed a better and more stable effect, this concentration was used in the subsequent experiments. To determine the role of Rg1 in Nrf2 activation and to verify the inhibition efficiency of the Nrf2 siRNA, the Nrf2 protein was measured by Western blot. The results indicated that Nrf2 siRNA significantly diminished Rg1-induced Nrf2 translocation (Figure 7B). The content of MDA and GPx in the supernatant fluid of cultured HSCs was evaluated. Rg1 significantly decreased the MDA level (P<0.01) and increased the GPx level (P<0.01) compared with the CCl4-only group. This protective effect could be partially reversed by Nrf2 siRNA (Figure 7C, 7D). PCIII and HA are important markers of hepatic fibrosis, and α-SMA is a marker of HSC activation. The levels of PCIII and HA in the supernatant fluid of cultured HSCs were both significantly decreased in the Rg1-treated group (P<0.01 and P<0.01, respectively) compared with the CCl4-only group. In the Nrf2 siRNA group, there was no significant difference in the levels of the fibrosis markers (PCIII and HA) compared to the CCl4-only group (Figure 7E, 7F). The expression of α-SMA protein showed a similar tendency to PCIII and HA (Figure 7G). This indicates that Rg1 can inhibit the CCl4-induced activation of HSCs, and that Nrf2 siRNA can partially reverse this effect.
Discussion
Hepatic fibrosis has received global attention because of the high morbidity, severe economic burden and psychological pressure that it causes. However, a lack of effective drugs and therapeutic strategies renders it difficult to treat. Therefore, it is imperative to find new drugs to treat hepatic fibrosis. As the main collagen-producing cells in chronic liver injury [27], hepatic stellate cells are considered the key therapeutic target for hepatic fibrosis [5]. Oxidative stress represents a direct or indirect pro-fibrogenic stimulus for HSC activation [28-30].
In the search for therapeutic strategies for hepatic fibrosis, many antioxidants have shown promising results [24, 30-36], including ginseng extracts [22, 37]. However, there are few reports that have evaluated the effects of Rg1 on hepatic fibrosis through the inhibition of oxidative stress [21]. Thus, the aim of our present investigation was to explore the antifibrotic capacity of Rg1 in attenuating oxidative stress and the underlying mechanisms. Alcohol-related hepatic fibrosis is clinically common, but experimental models of alcoholic hepatitis that effectively mimic the human pathological findings require further exploration [38]. Existing animal models are limited because of the long modeling period and low success rate. CCl4 has been widely used to induce chronic liver damage, especially in models of hepatic fibrosis and primary hepatic cirrhosis [39]. ROS play an important role in the activation of HSCs and collagen accumulation in alcohol-related hepatic fibrosis [38, 40-44]. It is now generally accepted that liver fibrosis produced by CCl4 is induced by oxidative stress. Chronic alcohol intake markedly upregulates the activity of CYP2E1, which is a major isozyme involved in catalyzing CCl4 into the trichloromethyl free radical (·CCl3) [9, 10]. In the present study, a shortened modeling period with a high success rate was achieved by the combined modeling method. All of the animals in the model group developed severe liver fibrosis and few animals died. The pathological outcomes were similar to those of a recent study [45], but our method is shorter in duration. Ginseng extracts have been reported to exert protective effects, in in vitro studies as well as in various animal and clinical models of liver injury induced by a variety of hepatotoxins, including CCl4 and alcohol [17, 19, 22, 37, 46]. However, research on the effect of Rg1 on liver fibrosis is limited [21].
Increases in serum AST, ALT, LDH, and ALP levels have been used as biomarkers of damaged structural integrity of the liver. In Geng's study [21], the increased levels of serum AST, ALT, and ALP induced by thioacetamide were reduced by Rg1. In our study, treatment with Rg1 inhibited hepatotoxin-induced liver damage in a dose-dependent manner, as evidenced by decreased AST, ALT, ALP, and LDH levels. In our study, the level of CYP2E1 mRNA in the untreated fibrosis group showed no difference compared with the control group. This finding is consistent with a previous study [47] showing that mRNA expression levels were unchanged in a 2-month ethanol intragastric feeding model. There are also studies reporting that moderate ethanol intake may not induce CYP2E1 [48, 49]. These studies [48, 49] concluded that moderate ethanol intake exacerbated liver fibrosis, but did not affect the hepatotoxicity of CCl4. Taken together, these data suggest that the 6% alcohol water in our study may not induce CYP2E1 mRNA expression, while it may exacerbate liver fibrosis in combination with CCl4. The ADH activity in the liver did not differ among these groups, suggesting that Rg1 has no effect on ADH. Activated HSCs are the major source of ECM [50-52]. Increased expression of α-SMA, a marker of transdifferentiation, is one of the major phenotypic changes that activated HSCs display [53-55]. In the present study, compared to the CCl4 group, Rg1 decreased the number of α-SMA-positive cells in the liver. Additionally, this result was confirmed by Western blot analysis of α-SMA protein. The results indicated that Rg1 inhibited the transactivation of HSCs in the injured liver. Together with the results of Masson's trichrome staining, these results imply that the antifibrotic effects of Rg1 may result from the inhibition of transdifferentiation of HSCs. The antioxidant defense systems include SOD-, CAT-, and GSH-related enzymes (GPx, GR, and GST) [56].
A previous study [18] found that Rg1 is capable of buffering excessive free radicals and attenuating oxidative damage in the liver in an exhaustive-exercise model. In our study, alcohol and CCl4 induced significant modifications of these antioxidant enzymes. We found that the activities of CAT and GSH-Px were lower in the model group than in the control group, and Rg1 was capable of elevating the activities of these antioxidant enzymes. Interestingly, in contrast with some previous reports, the SOD activity in the model group was higher than in the control group. It appears that SOD was spontaneously activated to protect the liver from oxidative damage and that Rg1 could further enhance SOD activity. As a downstream effector of the Nrf2 pathway, SOD activity is closely related to the level of Nrf2 in the nucleus. Given that the nuclear Nrf2 level increased in the model livers (though without significant differences), the elevation of SOD activity is expected. This finding is consistent with a study [57] that investigated the neuroprotective effect of melatonin in a rotenone model of Parkinson's disease. It appears that Rg1 relieves liver injuries by upregulating the activities of CAT, SOD, and GSH-Px to scavenge the free radicals induced by alcohol and CCl4. The upregulation of many antioxidant enzymes, and the inhibition of lipid peroxidation, in the liver is mediated by Nrf2. Upon exposure to oxidative or electrophilic stress, Nrf2 dissociates from Keap1 and translocates to the nucleus, where it binds to the ARE and drives the transcription of an array of cytoprotective proteins [58, 59], including Ho-1, Nqo1, CAT, SOD, and GSH-Px [60-64]. In our study, Western blot analysis showed that the enhanced nuclear translocation of Nrf2 induced by Rg1 is consistent with the increased activities of antioxidant enzymes (Figures 5 and 6A). This result demonstrates that the Nrf2 pathway contributes to the anti-fibrotic effect of Rg1.
To our surprise, the rats treated with Rg1 alone also showed increased expression of nuclear Nrf2 (Figure 6B). To validate the key role of Nrf2 in the anti-fibrotic mechanism of Rg1, siRNA-induced knockdown of Nrf2 in HSCs was conducted. Not surprisingly, the protective effect of Rg1 in CCl4-treated HSCs was reversed in the Nrf2 siRNA group. Taken together, these results suggest that Rg1 is an Nrf2 activator and that Nrf2 plays a key role in the anti-fibrotic mechanism of Rg1. Some Nrf2 activators have progressed to clinical trials for the treatment of conditions such as skin cancer, multiple sclerosis and chronic kidney disease [65]. Nrf2 is considered to be involved in protection against ethanol-induced oxidative stress [65-67]. Evidence from studies in which Nrf2 has been knocked down in cells, and from studies that used Nrf2-null mice [66], shows that Nrf2 plays a protective role against ethanol-induced liver damage. As a potential activator of Nrf2, Rg1 increases the activity of antioxidant enzymes such as Ho-1 and CAT. Therefore, Rg1-induced activation of Nrf2 renders the liver more resistant to the oxidative stress induced by alcohol. Nrf2 activation is generally considered to have a beneficial effect [68], especially in liver disease [65, 69]. In Nrf2 knockout mice, increased death and delayed proliferation of hepatocytes were observed [70]. After long-term CCl4 treatment, liver fibrosis was strongly aggravated in Nrf2 knockout mice, and inflammation was enhanced [13]. A recent publication reported that there was no beneficial effect of Nrf2 activation on CCl4-induced liver injury and fibrosis in caNrf2-transgenic mice [71]. This result is interesting and very different from the conclusions of many other publications. Most studies, whether using pharmacological activation or Nrf2 gene knockout models, have shown that Nrf2 activation was beneficial in liver injury and fibrosis.
However, in that study, Nrf2 was genetically activated only in hepatocytes, so inflammatory cells were not protected as the hepatocytes were, which may have contributed to the observations. Pharmacological activation targets not only hepatocytes but also all of the inflammatory cells involved in mediating CCl4-induced damage. Thus, cells other than hepatocytes could also be involved in mediating a protective effect. Additionally, pharmacological activation of Nrf2 also targets many different pathways, such as NF-κB inhibition, possibly contributing to a protective effect in cells other than hepatocytes. Similarly, the Nrf2 knockout mouse studies used mice with a global Nrf2 knockout, meaning that the effect could also be mediated by inflammatory cells. Both alcohol and CCl4 contributed to the oxidative stress in the liver that is relevant to HSC activation in the present study. Augmented antioxidant defense systems, including Ho-1, Nqo1, CAT, SOD, and GSH-Px, restrain HSC activation and therefore suppress liver fibrosis. The levels of these antioxidant enzymes are consistent with the nuclear Nrf2 levels in the Rg1 groups. It is clear that further research is required to determine the exact role of Rg1 in the activation of Nrf2. In summary, the present study demonstrated that Rg1 has a protective effect on the ethanol-mediated acceleration of CCl4-induced liver fibrosis in rats. Treatment with Rg1 blocked the expression of α-SMA. Rg1 is capable of upregulating antioxidant enzyme activities and downregulating lipid peroxidation. We demonstrated that Rg1 may be an Nrf2 activator. The protective effect of Rg1 may be related to its ability to promote Nrf2 nuclear translocation and enhance the expression of Nrf2 target genes.
Two Heads are Better than One? Verification of Ensemble Effect in Neural Machine Translation
In the field of natural language processing, ensembles are broadly known to be effective in improving performance. This paper analyzes how ensembles of neural machine translation (NMT) models affect performance improvement by designing various experimental setups (i.e., intra-ensemble, inter-ensemble, and non-convergence ensemble). For an in-depth examination, we analyze each ensemble method with respect to several aspects, such as different attention models and vocab strategies. Experimental results show that ensembling does not always result in performance increases, and we report noteworthy negative findings.
Introduction
Ensemble is a technique for obtaining accurate predictions by combining the predictions of several models. In neural machine translation (NMT), ensembles are most closely related to vocabulary (vocab). In particular, by aggregating the prediction results of multiple models, the ensemble averages the probability values over the vocab of the softmax layer (Garmash and Monz, 2016; Tan et al., 2020). Most existing studies on ensembling for NMT focus on improving the performance of shared tasks. For example, in WMT's shared tasks, almost every participating team applied the ensemble technique to improve performance (Fonseca et al., 2019; Chatterjee et al., 2019; Specia et al., 2020). However, in most cases, only experimental results that improved performance by applying the ensemble technique are introduced; in-depth comparative analysis is rarely conducted (Wei et al., 2020; Park et al., 2020a; Lee et al., 2020). In this study, we attempt to investigate three main aspects of ensembles for machine translation. First, we investigate the ensemble effect when using various vocab strategies and different attention models.
For the vocab, which plays the most important role in the machine translation ensemble, three different experimental conditions (independent vocab, share vocab, and share embedding) are applied to two different attention networks (Vaswani et al., 2017). Second, we investigate which of intra-ensemble and inter-ensemble is more effective for performance improvement. Notably, intra-ensemble is an ensemble of identical models, while inter-ensemble represents an ensemble of models that follow different network structures. Third, we analyze the effect of non-converging models on ensemble performance. Most existing studies create an ensemble using only those models that have been fitted. However, we perform in-depth comparative analysis experiments, raising the question of whether non-converging models have only negative effects.
Ensemble in NMT
Ensemble prediction is a representative method for improving the translation performance of NMT systems. A commonly reported method involves aggregating predictions by training different models of the same architecture in parallel. Then, during decoding, we average the probabilities over the output layers of the target vocab at each time step. In this study, we follow the above method for ensembles using the same model architecture (i.e., intra-ensemble). Because the target vocabs are the same, ensembles of components with different model structures (i.e., inter-ensemble) also follow the same method. We conduct experiments on intra- and inter-ensemble effects with LSTM-Attention and Transformer (Vaswani et al., 2017) networks, combined with various vocab strategies. A detailed description of the vocab strategies is provided in the next section.
Vocab Strategies
Independent vocab means learning separate weights in each encoder and decoder without any connection or communication between the source and target languages. Most NMT research follows this methodology (Vaswani et al., 2017; Park et al., 2021b).
Share vocab means that the model uses a common vocab for the combination of the source and target languages (Lakew et al., 2018). That is, the encoder and decoder interact within the same vocab and can refer to each other's vocabs, making the model more robust. Share embedding goes a step beyond sharing the source-target vocabs, and shares the vocab embedding matrix of the encoder and decoder (Liu et al., 2019). It enables the sharing of vocab from various languages through one integrated embedding space. Consequently, it has been widely used in recent multilingual NMT (Aharoni et al., 2019). Design of Intra- and Inter-ensemble Intra-ensemble is an ensemble of identical models. We use the LSTM-Attention and Transformer networks with three differently initialized weights, averaging the ensemble probabilities over the resulting combinations. Inter-ensemble represents an ensemble of models that follow different network structures. We experiment with different combinations of the two attention-based models and vocab strategies. In this experiment, we aim to suggest directions for creating a better ensemble technique by analyzing the effect of intra- and inter-ensemble combined with the vocab strategy and the size of the vocab. Moreover, all experiments compare vocab sizes (i.e., 32k and 64k) to account for performance differences with respect to vocab capacity. Design of Non-convergence Ensemble In general, ensembles comprise well-fitted models; however, we conduct experiments to examine how models with less convergence affect the ensemble. Non-converging models are trained using ¼ of the iterations needed for convergent models. Consequently, we can determine whether non-converging models cause only negative effects on the ensemble. Table 1: Performance of intra-ensembles (combinations of vocab sizes and attention networks). The baseline score is the average of the three models that have different weights. Note that the bold numbers indicate the best score in each case.
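The three vocab strategies differ mainly in which embedding parameters are tied. A minimal sketch, with toy sizes and random arrays standing in for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 8, 4  # toy vocab size and embedding dimension

# Independent vocab: separate vocabularies, separate embedding tables.
enc_emb_ind = rng.normal(size=(V, d))
dec_emb_ind = rng.normal(size=(V, d))

# Share vocab: one joint source+target vocabulary, but each side still
# learns its own table over that joint vocab.
enc_emb_sv = rng.normal(size=(V, d))
dec_emb_sv = rng.normal(size=(V, d))

# Share embedding: encoder and decoder reference the *same* table, so a
# parameter update made through either side moves the shared weights.
shared = rng.normal(size=(V, d))
enc_emb_se = shared
dec_emb_se = shared

enc_emb_se[3] += 1.0  # an "update" applied from the encoder side...
# ...is immediately visible from the decoder side as well:
update_visible = bool(np.allclose(enc_emb_se[3], dec_emb_se[3]))
```

In a real model the shared table is a single trainable parameter tensor; the tying shown here is the aliasing that makes one embedding space serve both sides.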
Experimental Setup In this study, we use the Korean-English parallel corpus released on AI Hub 1 as the training data (Park and Lim, 2020). Several studies (Park et al., 2020b, 2021a) have adopted this corpus for Korean-language NMT research. The total number of sentence pairs is 1.6M. We randomly extract 5k sentence pairs twice from the training data, and use these for the validation and test sets. We employ sentencepiece (Kudo and Richardson, 2018) for subword tokenization. The performance of all translation results is evaluated with the BLEU score, using the multi-bleu.perl script provided by Moses. Results Our negative findings and the corresponding insights are denoted by NF and Insight, respectively. The performance results of the baseline models (which serve as the components of the ensembles) are shown in Tables 1 to 4. Comparison of Intra-ensemble Effect We show the results of applying the vocab strategies to two different models, namely LSTM-Attention and Transformer, with three different weights (i.e., w1, w2, and w3) for intra-ensemble in Table 1. Additionally, we compare the combinations of those weights to investigate the apparent intra-ensemble effect. Table 1 shows significant variation in the ensemble effect according to the vocab strategies. The Transformer and LSTM-Attention models exhibit the highest performance in the order of independent vocab (ind), share embedding (se), and share vocab (sv) at both vocab sizes (32k and 64k). NF1: Although Lakew et al. (2018) and Park et al. (2021a) found that share vocab (sv) is effective when subword tokenization is applied as a pre-tokenization step, in our setting it has a negative effect on model training. We find that sharing the vocab can improve performance and that sharing the embedding space is more helpful; nevertheless, training with the independent vocab strategy shows the highest performance, without interference.
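Evaluation in this study uses Moses' multi-bleu.perl. Purely as an illustration of the quantity that script computes, here is a simplified single-reference BLEU sketch (no smoothing, so very short toy hypotheses can score 0; this is not the Moses implementation):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified single-reference BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        if overlap == 0:
            return 0.0
        log_prec += math.log(overlap / sum(cand.values())) / max_n
    bp = min(1.0, math.exp(1.0 - len(reference) / len(candidate)))
    return bp * math.exp(log_prec)

reference = "the cat sat on the mat".split()
perfect = bleu(reference, reference)              # identical hypothesis
partial = bleu("the cat sat on mat".split(), reference)
```

The real script additionally supports multiple references and operates at corpus level (pooling n-gram counts over all sentences before taking precisions).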
For an in-depth examination, we analyze the intra-ensemble performance with respect to four aspects: i) different attention models, ii) vocab strategy, iii) vocab size, and iv) the number of models in the ensemble. i) Different attention models We investigate the influence of the different attention networks on an ensemble. Self-attention-based networks improve under all vocab strategies; however, with the Bahdanau attention-based networks, there are more cases without performance improvement than with it. That is, NF2: specifically, with the Bahdanau attention network, there are cases in which an ensemble produces a negative result. This result is interpreted as a difference in the robustness (i.e., minimal performance degradation) and capacity (i.e., parallelism) of the models, as the following interpretations show. The Bahdanau attention network is exposed to problems with long-term dependencies (Bengio et al., 1993), resulting in weak processing of long sequences and requiring more data than self-attention. Furthermore, the Bahdanau attention network is well known for not being context-aware, leading to variance in model predictions (Gao et al., 2021). Thus, Insight: the Bahdanau attention network lacks capacity and robustness, from which it can be inferred that this network has a negative influence on the ensemble effect. ii) Vocab strategy We observe that there is performance variation among the vocab strategies. Our finding is in line with the aforementioned result: the ordering of the ensemble effect for LSTM-Attention is the same, namely ind, se and sv. This is reasonable given the previous result; however, NF3: mixing the vocab (i.e., sv) has a negative effect on the ensemble performance. Table 2: Performance of inter-ensembles (combinations of vocab sizes and attention networks).
Here, the column "Intra" records the highest score among the two different models, according to each vocab strategy in Table 1. iii) Vocab size As illustrated in Table 1, the performance of intra-ensemble models differs widely depending on vocab size. We confirm that a vocab size of 64k is more effective than one of 32k; consequently, we theorize that vocab size is closely related to the ensemble effect. In the Transformer ensemble with independent vocab (i.e., Transformer_ind), the BLEU score improves over the baseline model by 0.73 at 32k; in contrast, it improves by 1.52 at 64k, which is more than twice as much. In other words, NF4: even a slight alteration of vocab size significantly affects the ensemble performance, and a broader capacity leads to better performance when predicting over the vocab with softmax. iv) Number of ensemble models We explore the number of models in the ensemble, and further validate the performance using the model combinations. NF5: Contrary to the expectation that the number of ensemble models and performance would show a positive correlation, this was not the case. As shown in Table 1, only six cases, i.e., 50% of the 12 cases, achieve the best score with the three-model ensemble ({w1, w2, w3}). The remaining six cases achieve their best score with two models ({w1, w3} or {w2, w3}). This result supports NF5. Intra-ensemble or Inter-ensemble? Inter-ensemble is feasible if the same vocab is used across the two models. Therefore, an ensemble of the Transformer and LSTM-Attention models with the corresponding vocab strategy can be created; the performance of the intra-ensembles used for comparison is presented in Table 1. The results for inter-ensembles are shown in Table 2. These results show that the baseline (i.e., Intra) exhibits better performance than the inter-ensembles. Notably, inter-ensembles show a negative effect.
Table 3: Performance of combinations of intra-ensembles using non-convergence models (w_nc) with vocab sizes and attention networks. ∆% represents the average relative rate (i.e., the difference) of {w_nc, w1} to {w_nc, w1, w2, w3} over "Best Intra." Note that the bold numbers represent the best score in each case. Table 4: Performance of combinations of inter-ensembles with non-convergence (NC) and convergence (C) conditions along with vocab sizes and attention networks. ∆% represents the average relative rate (i.e., the differences), from the first to the third columns, of inter-ensembles over "Best Inter." Note that the bold numbers indicate the best score in each case. That is, NF6: inter-ensemble exhibits a negative effect on performance, resulting in performance degradation in all cases. It seems that the heterogeneous model architectures of the two different models act as a hindrance to performance improvement. Does Non-convergence Ensemble Cause Negative Results? In this section, we investigate the effect of non-convergence on intra- and inter-ensembles. We choose the models with the best scores (intra- and inter-ensembles) from Table 1 and Table 2, respectively, as target models for comparison. The performance results of intra- and inter-ensembles with non-convergence models are illustrated in Table 3 and Table 4, respectively. Intra-ensemble In Table 3, intra-ensemble with a non-convergence model leads to negative results compared to the baseline model (i.e., Best Intra) for LSTM-Attention. Using the Transformer model as a baseline also generally leads to performance degradation; however, the decrease is relatively small. There are a few exceptions showing that non-converging models ensembled with the Transformer sometimes perform better. These results reveal that NF7: the Transformer model is more robust than the LSTM-Attention model and stronger under adverse conditions.
Additionally, it is inferred that the underfitted model plays a role in noise injection, boosting performance. Insight: This result is meaningful in that even a non-convergence model, which many researchers neglect, can help improve performance. Inter-ensemble As detailed in Table 4, the performance decreased in all cases, and NF8: non-converging models cause a highly negative result in inter-ensembles compared to intra-ensembles. In conclusion, inter-ensemble provides negative results in all cases for the experiments conducted in this study. Conclusion Most researchers consider it common sense that ensembles are better; however, few studies have conducted any close verification. In this study, we perform various tests based on three experimental designs related to the ensemble technique, and demonstrate its negative aspects. Thus, we provide insights into the positives and negatives of ensembling for machine translation. In the future, we plan to conduct expanded experiments based on different language pairs.
Host preference and eco-friendly management of cucurbit fruit fly under field conditions of Bangladesh: Experiments were carried out to investigate the host preference of the cucurbit fruit fly, Bactrocera cucurbitae, and its management on cucurbitaceous vegetables, namely bitter gourd, ridge gourd and snake gourd. Among the vegetables, bitter gourd was found to be the most preferred (up to 40.69% fruit infestation) and snake gourd the least preferred (18.64% fruit infestation). Among the management tactics (fruit bagging, neem oil, mahogany oil, allamanda leaf extract, pheromone trap and cider vinegar trap), bagging reduced fruit infestation the most (up to 75.51%) compared to control plots. Among the botanicals, neem oil was the most effective (21.64% reduction of fruit infestation), followed by allamanda leaf extract and mahogany oil. Both traps were found effective, and more insects were trapped in the pheromone trap. Introduction The cucurbitaceous vegetables form one of the largest groups in the vegetable kingdom, with wide adaptation from arid to humid tropic environments. Cucurbits such as cucumber, bitter gourd, sponge gourd, ridge gourd, bottle gourd, sweet gourd, snake gourd, ash gourd, pointed gourd, and pumpkins are among the major vegetables grown year-round across Bangladesh, but they become the principal vegetables during the summer season due to the scarcity of other vegetables in the market. Among other causes, enormous yield losses of cucurbitaceous vegetables are caused by insect pest attacks every year. The cucurbit fruit fly, Bactrocera cucurbitae (Coquillett), is one of the most devastating pests of cucurbits in many parts of the world and may cause more than 60% yield loss (Kapoor, 1993). The female fruit flies insert their eggs into developing fruits; as a result, fruit juice oozes out from the injury and ultimately transforms into a resinous brown deposit. The eggs hatch
inside the fruit into maggots, which feed on the flesh (pulp) and make tunnels. The infested fruits may rot, dry up, shed prematurely and sometimes become deformed, which ultimately reduces their market value. Infestations vary with the environmental conditions and the crop species. Depending on the variation of environmental parameters and host crops, the extent of losses varies between 30 and 100% (Gupta and Verma, 1992; Dhillon et al., 2005a,b,c; Shooker et al., 2006). Conventionally, farmers apply synthetic chemical insecticides to control the cucurbit fruit fly. However, inappropriate use of broad-spectrum synthetic insecticides is responsible for the development of insecticide resistance, pest resurgence, outbreaks of secondary pests, destruction of non-target organisms (Hagen and Franz, 1973), environmental hazards (Devi et al., 1986; Fishwick, 1988) and human health risks. On the other hand, knowledge of host preference and non-chemical management approaches can be effective tools in the eco-friendly and sustainable management of the cucurbit fruit fly. Considering the aforesaid issues, the present study was conducted as a comparative study of the host preference of the cucurbit fruit fly and its management with non-chemical management practices.
Materials and Methods The experiments on host preference and management of the cucurbit fruit fly (Bactrocera cucurbitae) were carried out in the field laboratory of the Department of Entomology, Bangladesh Agricultural University, during the period of April to July 2013. The land was ploughed and cross-ploughed several times with a power tiller to obtain good tilth. All ploughing operations were followed by laddering to break up the clods. All weeds and stubble were removed from the field, and it was then divided into 24 equal plots of 4 feet by 4 feet. Finally, the unit plots were prepared as 10 cm raised beds with basal doses of recommended fertilizers, maintaining a single pit in each for the experiments. Among the plots, 9 were used for the host preference tests, in which three selected cucurbitaceous vegetables, bitter gourd (BARI Karola-1), ridge gourd (BARI Jhinga-1) and snake gourd (local Chichinga), collected from the Bangladesh Agricultural Research Institute (BARI) and the local market of Mymensingh town, were sown with three replications each. Management experiments were performed in the remaining 15 plots, where seeds of bitter gourd were sown. Before sowing, seeds were soaked overnight for proper germination. Three seeds were sown in each pit, and one healthy seedling per pit was maintained through thinning at 7 days after germination. Each plant was supported by a bamboo platform (bamboo macha) for easy creeping and prevention of lodging. Proper growth and development of each plant were maintained with all recommended horticultural practices. The total number of fruits and infested fruits were recorded at 5-day intervals, starting 10 days after first flowering. The relative preference of the cucurbit fruit fly among the vegetables was then compared using the percentage of infested fruits generated from the following formula.
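The percentage referred to here is the standard infestation rate. A hedged sketch of that computation, together with the percent reduction over control reported later in the Results (the fruit counts below are illustrative, not the actual field data):

```python
def percent_infestation(infested_fruits, total_fruits):
    """Percent fruit infestation = (infested fruits / total fruits) x 100."""
    return 100.0 * infested_fruits / total_fruits

def percent_reduction_over_control(control_pct, treated_pct):
    """Reduction of infestation in a treated plot relative to the control."""
    return 100.0 * (control_pct - treated_pct) / control_pct

# Illustrative counts chosen to roughly reproduce the reported figures:
# about 39.3% infestation in control plots vs about 9.7% under bagging.
control = percent_infestation(59, 150)
bagging = percent_infestation(14, 145)
reduction = percent_reduction_over_control(control, bagging)
```

With these illustrative counts the reduction comes out near the roughly 75% figure reported for bagging in the Results.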
The comparative efficacy of the management tactics (bagging with paper bags; three botanical insecticides, viz. neem oil, mahogany oil and allamanda leaf extract; and two traps, viz. pheromone trap and cider vinegar trap) was evaluated based on the percent infested fruits using the aforesaid formula and compared with untreated control plots. Each management practice and the control plot were replicated three times. For bagging, tender fruits were covered with a paper bag to avoid fruit fly contact. Fruits were observed 5 days after bagging, and the total number of fruits and infested fruits were counted. The infested fruits were removed after each count. Botanical insecticides were sprayed at 5-day intervals at 5 ml L-1 of water. After 5 days of each spraying, the total number of fruits and the total number of infested fruits per plot were recorded. After completion of the management experiments with bagging and botanicals, pheromone traps (designed by BARI, with cue-lure and soapy water) and cider vinegar traps were hung under the bamboo platforms in randomly selected plots. The numbers of fruit flies trapped were recorded on alternate days. Total fruits and infested fruits were also recorded for the selected plots. The old cider vinegar and soapy water were replaced at 7-day intervals. Data were analyzed with the MSTAT-C and SPSS programs, and DMRT was performed when necessary. Results and Discussion a.
Host preference of fruit fly The percent fruit infestation by the cucurbit fruit fly was significantly (P<0.01) different among bitter gourd, ridge gourd and snake gourd (Table 1). The highest percentages of fruit infestation at different times were found on bitter gourd (40.13%, 40.95%, and 36.98% at three consecutive countings). On the other hand, the lowest fruit infestations were found on snake gourd (18.64%, 21.55%, and 19.91% at the 1st, 2nd and 3rd countings, respectively), almost half the infestation of bitter gourd. Ridge gourd was found to be moderately preferred by the cucurbit fruit fly, with infestation levels between those of bitter gourd and snake gourd at the different countings. Therefore, among the three selected cucurbitaceous vegetables, bitter gourd was the most preferred and snake gourd the least preferred host of the cucurbit fruit fly. Based on the observations, it is most likely that fruit flies lay eggs in all three vegetables but do not choose them equally for egg laying and infestation. The present study is in full agreement with the observations of Hollingsworth et al. (1997) and Singh et al. (2000), who stated that bitter gourd was the most susceptible vegetable to the cucurbit fruit fly among three hosts, namely bitter gourd, snake gourd, and pumpkin. Likewise, Krishna et al. (2006) reported that maximum fruit fly infestation occurred in bitter gourd and the lowest infestation occurred in cucumber, followed by ridge gourd. Moreover, Doharey (1983) described bitter gourd (Momordica charantia), musk melon (Cucumis melo), snap melon (Cucumis melo var. momordica) and snake gourd (Trichosanthes anguina and T. cucumeria) as the more preferred hosts among the 70 vegetables he evaluated. However, the present result contradicts Humayra et al. (2010), who stated that snake gourd was a more preferred host than bitter gourd and cucumber. Again, Gaine et al. (2013) found more or less similar infestations of the fruit fly in bitter gourd and ridge gourd. b.
Management of cucurbit fruit fly i. Comparative efficacy of bagging and botanicals The efficacies of bagging and the botanical insecticides were evaluated on the basis of percent fruit infestation on bitter gourd. Percent fruit infestation varied significantly (p<0.01) among treatments. The highest fruit infestations occurred in the control plots, which were significantly different from all treated plots (39.32%, 40.13%, 40.95% and 36.98% at 10 days after first flowering as pretreatment data, 5 days after treatment application, 5 days after the 1st counting and 5 days after the 2nd counting, respectively) (Table 2). Among the treatments, the lowest fruit infestations were recorded for bagging (13.43%, 7.50% and 9.26% at three consecutive countings, respectively), almost one-third of the infestation under the botanicals. Up to 76% reduction of fruit infestation over the control was found for bagging, the largest reduction among all treatments used in the experiments. This result is in line with Akhtaruzzaman et al.
(2000), who stated that bagging cucumber fruits at 3 days after anthesis for 5 days can reduce fruit fly infestation effectively and safely. Among the three botanicals, the lowest fruit infestations were recorded in the allamanda leaf extract treated plots (28.62%, 30.16% and 30.64% at three successive countings, respectively), but the percent reduction of fruit infestation over the pretreatment data was highest in the neem oil treated plots (21.64%, 15.04% and 13.93% at three successive countings, respectively). On the other hand, mahogany oil was the least effective against the fruit fly, because the highest percentages of fruit infestation (36.18%, 37.95% and 33.48% at three consecutive countings) and the smallest reductions were observed on mahogany oil treated plants. Therefore, neem oil was the most efficient in reducing fruit fly infestation, followed by allamanda leaf extract and mahogany oil. These observations are supported by Singh (2003), who stated that neem extract can be used effectively as an excellent alternative to synthetic insecticides. Similarly, Khalid (2009) reported that neem oil and neem seed extract can reduce fruit fly infestation significantly. ii. Comparative efficacy of fruit fly traps The mean numbers of fruit flies trapped in the pheromone trap and the cider vinegar trap were compared to ascertain their efficacy (Figure 1). According to the graph, more fruit flies were trapped in the pheromone traps than in the cider vinegar trap at each of the three successive countings. Initially, the cider vinegar trap caught almost half the number of flies caught by the pheromone trap, a proportion that gradually increased to about two-thirds. Therefore, the pheromone trap was more effective than the cider vinegar trap for trapping the cucurbit fruit fly. The present study agrees with the observation of Nasiruddin et al.
(2002), who stated that the pheromone trap performed more effectively than the other traps used. Only small numbers of fruit flies were trapped during this experimental period. This circumstance might be due to variation in the agro-ecological system as well as the cropping pattern of the experimental location, species diversity and richness. It might also be due to the large-scale cultivation of rice surrounding the experimental field. iii. Fruit infestations after setting pheromone and cider vinegar traps Data on fruit fly infested fruits and the total number of fruits were counted at 5-day intervals from the plots where the pheromone and cider vinegar traps were set. The data were then calculated as percentages and are presented in Table 3. Both the pheromone and cider vinegar traps were statistically alike but significantly efficient in reducing fruit infestation compared to untreated plots (up to 39.17% infestation). The pheromone trap was the more effective, with lower fruit infestation recorded at every successive counting. Comparatively more fruit infestation (up to 32.73%) was found in the plots with cider vinegar traps. Conclusions Bitter gourd was the most preferred and snake gourd the least preferred host, with ridge gourd in between, for the cucurbit fruit fly. Bagging was the most effective measure among the management tactics used in the experiment, and neem oil was the most effective of the botanical insecticides evaluated, followed by allamanda leaf extract and mahogany oil. Both traps were effective, but the pheromone trap was comparatively better than the cider vinegar trap for controlling the fruit fly. Therefore, the host preference ranking among the selected vegetables was bitter gourd > ridge gourd > snake gourd for the cucurbit fruit fly, and the ranking of control measures based on efficacy was bagging of fruits > neem oil > allamanda leaf extract > mahogany oil > pheromone trap > cider vinegar trap for the management of the cucurbit fruit fly.
Figure 1. Efficacy of pheromone trap and cider vinegar trap. Table 2. Fruit infestation (%) at different countings under various treatments. Different letters at the same counting are significantly different. Figures in parentheses with -/+ represent percent reduction/increase of fruit infestation. Table 3. Efficacy of pheromone and cider vinegar traps in reducing fruit infestation caused by fruit flies. Different letters at the same counting are significantly different.
Functional coherence metrics in protein families Background Biological sequences, such as proteins, have been provided with annotations that assign functional information. These functional annotations are associations of proteins (or other biological sequences) with descriptors characterizing their biological roles. However, not all proteins are fully (or even at all) annotated. This annotation incompleteness limits our ability to make sound assertions about the functional coherence within sets of proteins. Annotation incompleteness is a problematic issue when measuring the semantic functional similarity of biological sequences, since such measures can only capture a limited amount of all the semantic aspects the sequences may encompass. Methods Instead of relying solely on single (reductive) metrics, this work proposes a comprehensive approach for assessing functional coherence within protein sets. The approach entails using visualization and term enrichment techniques anchored in specific domain knowledge, such as a protein family. For that purpose we evaluate two novel functional coherence metrics, mUI and mGIC, that combine aspects of semantic similarity measures and term enrichment. Results These metrics were used to effectively capture and measure the local similarity cores within protein sets. Hence, these metrics, coupled with visualization tools, allow an improved grasp of three important functional annotation aspects: completeness, agreement and coherence. Conclusions Measuring the functional similarity between proteins based on their annotations is a non-trivial task. Several metrics exist, but due both to characteristics intrinsic to the nature of graphs and to extrinsic factors related to the annotation process, each measure can only capture certain functional annotation aspects of proteins. Hence, when trying to measure the functional coherence of a set of proteins, a single metric is too reductive.
Therefore, it is valuable to be aware of how each employed similarity metric works and what similarity aspects it can best capture. Here we test the behaviour and resilience of several similarity metrics. Electronic supplementary material The online version of this article (doi:10.1186/s13326-016-0076-y) contains supplementary material, which is available to authorized users. Background Over the last two decades, functional annotation systems have been providing annotations for numerous proteins as well as other gene products. One of the most common steps used in functional annotation is the use of sequence alignment algorithms to compare sequences and find homologies from which functions can be extrapolated. Usually, lists of proteins (or other gene products) result from the output of many high-throughput technologies. Therefore, it is important not only to identify common functions in those sets of proteins but also to quantify how functionally related the proteins are, in order to increase understanding of the involvement of biological systems [1,2]. (Correspondence: fcouto@di.fc.ul.pt; LaSIGE, Faculdade de Ciências, Universidade de Lisboa, Lisboa, Portugal.) The Gene Ontology (GO) project aims to provide generically consistent descriptions for the molecular phenomena in which gene products are involved [3]. For over a decade, the increasing popularity and consequent growth of GO has led to its adoption and prevalent use in annotation projects. Consequently, this pervasiveness has enabled and motivated the development of several semantic similarity metrics [4-6]. Semantic similarity can be defined as the quantity that reflects the closeness in meaning of two concepts in an ontology. However, the semantic similarity between two proteins, each of which can be annotated with several GO terms, is commonly called "functional similarity", since it is the functional annotation terms that are being compared.
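The mUI and mGIC metrics evaluated in this work build on the classical groupwise measures simUI and simGIC, which compare the ancestor-extended GO annotation sets of two proteins. A minimal sketch over a toy is_a DAG (the terms, IC values and set contents are illustrative assumptions, not the paper's implementation of the m-variants):

```python
def ancestors(term, parents):
    """A term together with all of its ancestors in the is_a DAG."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(parents.get(t, ()))
    return seen

def extend(annots, parents):
    """Ancestor-extended annotation set of a protein."""
    out = set()
    for t in annots:
        out |= ancestors(t, parents)
    return out

def sim_ui(a, b, parents):
    """simUI: Jaccard overlap of the extended annotation sets."""
    A, B = extend(a, parents), extend(b, parents)
    return len(A & B) / len(A | B)

def sim_gic(a, b, parents, ic):
    """simGIC: IC-weighted Jaccard, down-weighting shallow shared terms."""
    A, B = extend(a, parents), extend(b, parents)
    return sum(ic[t] for t in A & B) / sum(ic[t] for t in A | B)

# Toy GO fragment: root -> m1, m2; m1 -> s1; m2 -> s2 (is_a edges).
parents = {"m1": ["root"], "m2": ["root"], "s1": ["m1"], "s2": ["m2"]}
ic = {"root": 0.0, "m1": 1.0, "m2": 1.0, "s1": 2.0, "s2": 2.0}

ui = sim_ui({"s1"}, {"s2"}, parents)        # share only the root
gic = sim_gic({"s1"}, {"s2"}, parents, ic)  # root carries no IC
```

The example shows why the IC weighting matters: two proteins that overlap only in a shallow, uninformative term still get a positive simUI but a simGIC of zero.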
More recently, several metrics focusing specifically on measuring the functional cohesiveness of a set of proteins (or gene products) through their annotations have been developed. These metrics for the assessment of functional coherence using annotations are commonly based on previously developed groupwise semantic similarity approaches. One of those metrics, GS2 [7], uses a set-based approach and was developed with computational efficiency in mind, to measure gene set functional similarity based on GO terms. The GS2 algorithm ranks annotation terms using a simple gene counting method and then compares each gene with the remaining genes with respect to the distribution of functional annotations. This simple measure can only capture similarity trends within gene sets and cannot precisely assess similarity. Despite that, GS2 has performed well when compared with the pairwise semantic similarity measure of [8]. On the other hand, a set of three different metrics (average seed degree, total length and relative seed degree) was developed by [9] for the assessment of functional coherence in gene sets based on the topological properties of GO-derived graphs. The procedure leading to these metrics relies on building GO subgraphs that subsume each gene set's annotations (for each GO aspect), where each node is a GO term and each edge is an is_a relationship between terms. Subsequently, those graphs are further enriched by adding genes, as a new type of node, associated with the original GO nodes, and additional new edges are created between GO terms whenever these share gene annotations. The original term-to-term edges are weighted using the Information Content [10] difference between the two terms, while the new edges created after the addition of the gene nodes to the graph are statistically weighted based on the total number of edges in the graph and the number of supporting genes for each particular edge.
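The Information Content used for the edge weighting above is the standard corpus-based quantity IC(t) = -log p(t). A minimal sketch, assuming annotation sets that already include ancestor propagation (so the root is the most frequent term and anchors p(t)):

```python
import math
from collections import Counter

def information_content(annotation_corpus):
    """IC(t) = -log2(freq(t) / freq(root)). The annotation sets are assumed
    to be ancestor-propagated, so the root term has the maximum frequency
    and receives IC 0; rarer (more specific) terms receive higher IC."""
    counts = Counter()
    for annots in annotation_corpus:
        counts.update(annots)
    root_freq = max(counts.values())
    return {t: -math.log2(c / root_freq) for t, c in counts.items()}

# Four toy gene products annotated over the DAG root -> m1 -> s1, root -> m2:
corpus = [{"root", "m1", "s1"}, {"root", "m1"}, {"root", "m2"}, {"root"}]
ic = information_content(corpus)
```

An edge between a term and its parent can then be weighted by the IC difference of its endpoints, as in the graph-based metrics described above.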
Hence, this approach handles the issue at hand both from an annotation enrichment perspective and from an annotation relationship perspective. Steiner trees are then extracted from the graphs, and the sum of all edge lengths is minimized over all possible subgraphs. The aforementioned three metrics are then applied to these trees. The average seed degree averages, over a full tree, the counts of the number of genes associated with the seed terms, thus reflecting a global measure of enrichment. On the other hand, the total length metric reflects the overall relatedness of functions by summing the lengths of all edges in a tree. The relative seed degree metric combines the aspects described above as a ratio. The methodology performs well but, like other GO-based evaluation methodologies, its metrics are dependent on the gene annotation state. The GO-based functional dissimilarity (GFD) metric [11] approaches the problem of functional coherence in gene sets by considering that each gene can encode several proteins with different functions. In this metric, for each gene set, only the most common and specific function is chosen as being the most globally cohesive function. In this approach, genes are represented as sets, to which a simple edge-based counting ratio is applied that aims to balance both gene relatedness and specificity. The actual GFD is then the minimum dissimilarity possible over all representations of a given set of genes. Like the previous metrics, this one also depends on the completeness of the annotations used in order to provide accurate measurements. Furthermore, by considering only the most common and specific function in a gene set, the authors effectively discard potentially unrelated functions that would cause noise, but at the cost of disregarding multi-functional associations in gene sets.
Furthermore, and despite not being exactly a system for measuring functional coherence in gene sets, RuleGO [12] provides a service that statistically compares and characterizes two disjoint gene sets. Underneath, it runs a rule-based system that incrementally iterates over the list of GO terms annotating the two input gene sets and verifies at each step whether a new co-occurrence rule can be created. Much like typical gene enrichment systems, this system also performs over-representation tests on the rules created, and only rules with a p-value below a given statistical significance threshold (after multiple testing correction) are considered. This process results in multi-attribute rules containing annotation terms and respective support indexes and evaluation parameters that can be used in the characterization of the disjoint gene sets. In this methodology, rules are evaluated by length (number of genes in a rule premise) representing support, by depth (normalized sum of the GO graph levels where terms in the rule appear) representing specificity, and by an additional quality measure. A different approach is taken by [13], where functional coherence in gene sets is assessed with the help of the biological literature. Here, term-by-gene matrices are constructed with entries derived from weighted frequencies of the terms across a collection of abstracts (biological literature). The genes are then represented as vectors and the similarity between them is calculated as the cosine of the vector angles. Thus, a pair of genes would have a cosine score of 1.0 if they shared the exact same abstracts in the collection. Gene sets in this method were deemed functionally coherent when cosine values above a given threshold (0.6) were often found, with significance measured by a statistical test (Fisher's exact test). This threshold was chosen based on the distribution of cosine similarity scores in 1,000 random gene sets.
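The cosine comparison described above can be sketched as follows. Genes are represented as weighted term vectors and compared by the cosine of the vector angle; the weights and gene names here are illustrative placeholders, whereas the original method derived them from weighted term frequencies over abstracts.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term vectors (dicts term -> weight)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

gene_a = {"kinase": 2.0, "phosphorylation": 1.0}
gene_b = {"kinase": 2.0, "phosphorylation": 1.0}  # same weighted terms -> cosine 1.0
gene_c = {"lyase": 3.0}                           # no shared terms -> cosine 0.0
print(cosine(gene_a, gene_b))
print(cosine(gene_a, gene_c))
```

A gene pair sharing exactly the same abstracts yields a cosine of 1.0, as the text notes; the 0.6 coherence threshold would then be applied to values of this kind.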
Hence, functional coherence here is derived essentially from the supporting literature, making the method sensitive to the quality of the document corpus used. Regardless, the method was used to obtain results similar to those produced by another literature-based functional coherence assessment method [14]. Since functional annotation quality is paramount, [15] developed a system to provide an annotation confidence score for genome annotations. The system operates on the basis of a genome comparison approach whereby annotations in a target genome are scored against a reference genome. The gene alignments across genomes are made via the BLAST tool, with adjustments for the expected number of genes (different organisms have different gene counts) and phylogenetic distance (closer genomes typically share more genes than distant ones). The actual annotation similarity, however, is derived from free-text annotations, which are converted into word vectors that enable the calculation of a simple cosine similarity measure. Both sequence similarity and annotation similarity are combined into a single metric by applying statistical techniques. Despite the existence of these types of metrics, the protein annotation landscape is often very heterogeneous in terms of quality, specificity and completeness. Annotation quality is related to the annotation method and source used, e.g. as reflected by the different GO evidence codes associated with each annotation. Annotation specificity relates to how specific or general an annotation term is, and when in a protein set there is a clear disproportion between general term annotations and specific annotations, that set can be said to suffer from annotation incompleteness. In this work we concern ourselves mostly with the aspects of annotation completeness and specificity.
Given that functional similarity is derived from semantic similarity approaches over the annotation terms, it is also relevant to define the concept of annotation agreement as a measure of annotation homogeneity for a given set of proteins. This metric naively measures the coherence of a given set based on the fraction of annotation terms shared between all proteins in the set, and is thus highly susceptible to the lack of annotation completeness. We use this measure as a baseline, while we introduce other metrics to characterize the state of known functional similarity of a given set and gauge the potential state of annotation incompleteness. Hence, in this work functional coherence is defined as a measure of functional closeness (similarity) among all proteins in a set given the current functional annotations within that protein set.

Methods

A functional annotation is defined as a pairing between a gene product (protein) identifier and a term providing some functional description. In this study, only the molecular function term annotations from GO were considered because the aim of this work lies closer to studying one-dimensional annotation (as proposed by [16]) at the molecular function level in enzymes. Ideally, the functional annotations over a given protein set should allow us to infer biological relationships within the set. In order to achieve that, it is convenient to have metrics that enable us to compare how similar (or dissimilar) annotations are within a given protein set. However, considering the GO DAG structure, it becomes apparent that measuring functional relatedness via annotation is not a trivial matter. Therefore, in order to help make such assertions regarding functional relatedness, three main annotation aspects were considered: completeness, agreement and coherence.
Completeness

Any set of functionally related proteins in which not all proteins are annotated to the same specificity level can be considered to incur a form of annotation incompleteness. Figure 1a) illustrates such a situation. For a hypothetical set of one hundred proteins, only one of the hypothetical annotation terms (besides the root) annotates all the proteins in that set. As the nodes get further away from the root term, it can be seen that the number of annotations dwindles until it reaches the leaf terms. And while any given protein does not need to have its most specific function represented by a leaf term, it is unlikely that a very generic term (such as a direct child of the root term) is a full descriptor of its activity. However, it is not trivial to determine this kind of incompleteness, and only after determining or predicting new functional activities can we definitively say that any given protein (or set) was incompletely annotated. Thus, in this work we limit the definition of completeness to the minimal set of annotations, evenly distributed among the proteins in a set, that characterizes the unifying functions of that set. This kind of annotation incompleteness can derive from the fact that different protein annotation methods are used, which provide different degrees of annotation confidence. Therefore, annotation heterogeneity is created according to the annotation confidence level given by each annotation method. For instance, the majority of automatic annotation methods typically create more generic annotations. On the other hand, manual curation is more likely to lead to highly specific annotations. Additionally, the inherent research bias towards more intensively studied model organisms and biological processes can further this state of incompleteness.

Agreement

Annotation agreement can be defined as the fraction of annotations that are shared in a set of proteins.
Hence,

Agreement = (1/t) Σ_{i=1..t} (x_i / N)

with x_i as the number of annotations for a term i, N as the total number of proteins annotated and t the total number of distinct annotation terms. Therefore, the greater the amount of shared annotations, the greater the annotation agreement. Figure 1b) illustrates a hypothetical full annotation agreement situation. In this situation, each of the one hundred proteins is annotated with the same exact annotation term set, and thus that hypothetical set achieves maximum or total annotation agreement. However, this is a naive metric that is also overly sensitive to annotation incompleteness and even small amounts of noise.

Coherence

Naturally, a set of proteins having total annotation agreement is also functionally similar, to the extent of its most specific annotation terms. On the other hand, functional similarity need not be so strictly defined. Additionally, due to the above-mentioned incompleteness issue and the multi-functional nature of proteins, when measuring functional similarity through annotation it may be useful to consider just some of the annotations as being functionally characteristic of a given protein set. Therefore, for the purposes of this work, the concept of annotation coherence is further refined and defined as the fraction of shared annotations that defines the core of functional activities that is common and most relevant, and is thus able to characterize a given protein set as a functionally cohesive group. Figure 1c) illustrates a hypothetical full annotation coherence situation, where the grey shaded nodes represent the functionally more relevant terms, or the central functional cohesiveness of that set. However, a single metric is too reductive in assessing these (and other) different aspects of annotation that can dictate the functional coherence of the annotation space in protein sets.
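The Agreement baseline can be sketched as follows. The formula is reconstructed from the textual definition (average, over the t distinct terms, of the fraction of the N proteins carrying each term) and is therefore an assumption; the function and demo data are illustrative.

```python
# Agreement sketch: (1/t) * sum over terms of (proteins carrying the term / N).
# Formula reconstructed from the textual definition above (an assumption).
from collections import Counter

def agreement(annotations):
    """annotations: dict mapping protein -> set of annotation terms."""
    n = len(annotations)
    counts = Counter(t for terms in annotations.values() for t in terms)
    t = len(counts)
    if n == 0 or t == 0:
        return 0.0
    return sum(x / n for x in counts.values()) / t

full = {"p1": {"GO:a", "GO:b"}, "p2": {"GO:a", "GO:b"}}  # identical term sets
print(agreement(full))   # 1.0: total annotation agreement
noisy = {"p1": {"GO:a"}, "p2": {"GO:b"}}                 # no shared terms
print(agreement(noisy))  # 0.5
```

The second example shows the metric's sensitivity: a single protein with unshared terms immediately pulls the score down, which is the noise sensitivity discussed in the text.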
Therefore, in this work, we use a set of metrics and respective interpretation strategies relating to the three aspects of annotation described above in order to explore protein (enzyme) annotation spaces.

Methodology

When it comes to capturing the relationship between functional and sequence similarity, the different semantic similarity metrics often present a similar behaviour, with the main distinction among them being their resolution. A comparison of several GO-based semantic similarity metrics [17] found the graph-based measure simGIC to consistently show a high resolution (providing about 19-44 % increased resolution over the metric it derives from, the simUI metric). In the work presented here, both the simUI [18] and the simGIC [19] metrics are used for assessing functional coherence and establishing similarity baselines. Both metrics are pairwise, and as such calculate the similarity between protein pairs through their extended sets of annotations (direct annotations and ancestral terms). Therefore, for a pair of proteins A and B with their extended GO term annotation sets GO_A and GO_B, simUI is the fraction of shared terms, |GO_A ∩ GO_B| / |GO_A ∪ GO_B|, while simGIC weights each term by its information content, Σ_{t ∈ GO_A ∩ GO_B} IC(t) / Σ_{t ∈ GO_A ∪ GO_B} IC(t). As previously mentioned, in the [11] methodology only the most common and specific function of a set is chosen as the most globally cohesive function. In this work it is also assumed that not all functional annotations in any given protein set (family) should characterize that set. On the other hand, considering the frequent multi-functional nature of proteins, in this work a set of annotation terms is selected in each protein set or family as being its functional characteristic core. Therefore, the strategy employed in this work to isolate the functional characteristic cores in protein families was to resort to term enrichment analysis. In particular, a Python implementation of the ubiquitous term-for-term enrichment approach was developed.
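Following the standard definitions of the two pairwise measures, simUI and simGIC can be sketched as follows. The IC values in the demo are illustrative placeholders; in practice they would be computed from annotation corpus frequencies.

```python
# simUI: Jaccard index over extended annotation sets.
# simGIC: information-content-weighted Jaccard index.

def sim_ui(go_a, go_b):
    union = go_a | go_b
    return len(go_a & go_b) / len(union) if union else 0.0

def sim_gic(go_a, go_b, ic):
    """ic: dict mapping term -> information content (placeholder values here)."""
    union = go_a | go_b
    denom = sum(ic[t] for t in union)
    return sum(ic[t] for t in go_a & go_b) / denom if denom else 0.0

a = {"GO:root", "GO:x", "GO:y"}
b = {"GO:root", "GO:x", "GO:z"}
ic = {"GO:root": 0.0, "GO:x": 2.0, "GO:y": 3.0, "GO:z": 3.0}
print(sim_ui(a, b))       # 0.5  (2 shared terms out of 4 in the union)
print(sim_gic(a, b, ic))  # 0.25 (shared IC 2.0 over union IC 8.0)
```

Since the root (and other generic terms) carry near-zero information content, shared generic annotations contribute little to simGIC, which is one way to read its higher resolution relative to simUI.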
Since most of the study sets used here are relatively small, with several terms having low expected frequencies (up to five expected observations), the Fisher exact test was used to determine enrichment. Hence, for each annotation term t in a given protein set, a 2x2 contingency table was generated according to Table 1, with N being the number of proteins in the set, M the number of proteins in all considered sets, nt the number of proteins annotated with term t in the set and mt the number of proteins annotated with term t in all considered sets. The statistical evidence of enrichment was then postulated on the basis of the p-values obtained from the Fisher exact test being smaller than the chosen statistical significance level (alpha). It should be noted that in the term-for-term approach the graph nature of GO will lead to a statistical dependency issue. That is, for a given term annotating a certain number of proteins, at least that same number of proteins or more will also be annotated by the parental terms. Among the several strategies available to mitigate this issue, here the Topology-based Elimination (Elim) strategy [20] was used. The strategy consists of targeting significant leaves in an annotation graph and iteratively subtracting the proteins annotated there from parent terms up to the root term. After all terms are processed, new p-values are computed for each term. This mitigates the statistical dependencies between nodes by downplaying the statistical significance (and thus importance) of ancestor nodes. This is a desired effect, since (for a similar level of annotation quality) a more specific annotation is preferable to a general annotation. Therefore, the Elim method favours leaf terms found to be significant and at the same time removes proteins annotated to significant child terms from the parent terms' annotation counts, which in turn attenuates the children's influence on the parental terms.
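The per-term test on the 2x2 table can be sketched as follows. The one-sided (over-representation) Fisher exact p-value equals the hypergeometric upper tail P(X >= nt) for X ~ Hypergeometric(M, mt, N), which can be computed directly with the standard library; the example counts are illustrative.

```python
# One-sided Fisher exact p-value for the Table 1 contingency layout,
# computed as the hypergeometric upper tail P(X >= nt).
from math import comb

def term_enrichment_p(nt, N, mt, M):
    """nt of the N set proteins carry the term; mt of all M proteins do."""
    return sum(
        comb(mt, k) * comb(M - mt, N - k)
        for k in range(nt, min(mt, N) + 1)
    ) / comb(M, N)

# e.g. 8 of 10 proteins in the set carry the term, versus 12 of 100 overall
p = term_enrichment_p(nt=8, N=10, mt=12, M=100)
print(p < 1e-6)  # True: the term is strongly over-represented in the set
```

A term would then be flagged as enriched whenever this p-value falls below the chosen alpha, before the Elim adjustment described above is applied.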
Additionally, it should be noted that the computed p-values for the GO terms under this strategy are conditioned on their children terms, and thus not independent. Therefore, direct application of multiple testing theory is not possible. It is then preferable to interpret the returned p-values as corrected or not affected by multiple testing [20].

Coherence resilience assays

In order to test our approach, the Polysaccharide Lyase (PL) families of the CAZy [21] database were used as a study case. The protein (module) families in this database are organized into 5 different classes: Glycoside Hydrolases (GH), GlycosylTransferases (GT), Carbohydrate Esterases (CE), Polysaccharide Lyases (PL) and Carbohydrate Binding-Modules (CBM). The CAZy database version (c7-2011) that we used in our analysis has about 138,000 distinct UniProt identifiers distributed through the families in these classes, as shown in Table 2. The performed assessments were limited to the UniProt identifiers in those families because of their direct mapping to GO term annotations. Thus, for the annotation mapping we used the GOA [22] annotation files (version 2013-03). Within the CAZy database, the PL class is the one best characterized by the Glycobiology community, in part due to its more tractable size (Table 3). For this reason we also elected to perform our assays using this class of families. We ran the coherence resilience assays that we describe below only for families PL1 to PL12, PL16, PL17 and PL22, because these are the only ones that met the minimal size requirement for our assaying. In order to perform our assays, we subjected each of these families (sets) to a degeneration procedure, as illustrated by Fig. 2, where x % of the proteins in an original protein set are replaced by random proteins. In our assay these protein replacements were obtained randomly from the remainder of the CAZy families.
The degeneration procedure was applied in discrete levels of 10 % protein replacement (from 0 % up to 100 % protein replacement) to each of the sets. Hence, each original protein family (0 % replacement) would gradually turn into a completely random set (100 % replacement). Consequently, for each family the functional similarity is expected to degrade progressively as the percentage of random replacement (noise) rises. Subsequently, we used these gradual degeneration sets to assay the behaviour of each of the Agreement, simUI [18], simGIC [4] and GS2 [7] metrics, along with two novel hybrid metrics, mUI and mGIC, that we introduce further ahead. For each of the discrete levels of degeneration (noise), one hundred iterations were run per family. During each iteration, both the original family and the noise source were randomly sampled for the proteins to keep and the replacement proteins, respectively. The similarity was computed at the end of each iteration for each of the assayed metrics and then averaged over the total one hundred iterations. For all the assayed metrics (simUI, simGIC, mUI and mGIC), the global set results were obtained by averaging all the pairwise results within each protein set. The resulting average scores are shown in Fig. 3 as plots of similarity (functional coherence) as a function of the percentage of randomly replaced proteins.

Hybrid metrics

For this work, two novel functional coherence metrics, mUI and mGIC, were developed. They are based on the combination of the semantic similarity metrics simUI and simGIC with a term-for-term enrichment analysis, as described by the following algorithm: The annotation graph for a protein set (family) being measured is generated (line 1). For each term in the annotation graph (line 2), enrichment analysis using a term-for-term (with Elim adjustment) strategy is performed as previously described. If a term is found to be statistically enriched (line 4), it is added to a derived annotation graph (line 5).
When both annotation graphs are processed (line 6), the simUI and simGIC are applied to the shadow graph (annotationGraph'), resulting in the values for the mUI and mGIC metrics, respectively (lines 7 and 8).

Results and discussion

From the analysis of Fig. 3 it can be seen, as expected, that the similarity reported by each metric generally decreases as noise (in the form of random proteins) is increasingly added (replacing the original proteins) in each of the tested PL families. In this study, each of the considered metrics is scaled on a [0, 1] theoretical range. The aim of our protein family degeneration assays is to observe two main aspects for each of the metrics: noise resilience and resolution. With noise resilience we check by how much the reported values can vary given the same amount of noise. As for resolution, we register the difference between the maximum and minimum values a metric actually reports during our assays. The Agreement metric is the least noise-resilient metric, as can be seen from both the generally low values it reports and the steep declines after adding small amounts of noise to family sets with previously high agreement. This property is most evident in mono-functional families like PL5, PL16 and PL17, and also PL12, where the introduction of 10 % random proteins produces a sharp decline in the reported values. This occurs because this naive metric only equates the average of annotation term frequencies in each protein family (or set). This metric was chosen and used as the overall baseline. The simUI and its derivative simGIC, as expected, have a similar behaviour, because simGIC is an IC-weighted version of simUI. Furthermore, in the obtained results (Additional file 1) it is noticeable that simGIC presents a greater resolution than simUI (an average range of 0.57 against a range of 0.46, respectively, as can be computed from Table 4), a behaviour that was also previously reported by [17] in their assessment of semantic similarity metrics.
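The hybrid mUI/mGIC procedure outlined above can be sketched as follows. This is a reduced illustration: the `is_enriched` predicate stands in for the full term-for-term + Elim enrichment step, and the semantic similarity over the reduced ("shadow") annotation sets is simplified to an unweighted Jaccard average (the simUI side of the pair); names and data are illustrative.

```python
# mUI-style sketch: keep only enriched terms, then average pairwise simUI
# over the reduced annotation sets.
from itertools import combinations

def m_ui(annotations, is_enriched):
    """annotations: protein -> set of terms; is_enriched: term -> bool."""
    reduced = {p: {t for t in ts if is_enriched(t)} for p, ts in annotations.items()}
    scores = []
    for a, b in combinations(reduced, 2):
        union = reduced[a] | reduced[b]
        scores.append(len(reduced[a] & reduced[b]) / len(union) if union else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

anns = {"p1": {"core", "noise1"}, "p2": {"core", "noise2"}, "p3": {"core"}}
print(m_ui(anns, {"core"}.__contains__))  # 1.0: measured on the enriched core only
print(m_ui(anns, lambda t: True))         # lower: the noise terms dilute the score
```

This makes the noise-resilience behaviour discussed above concrete: once the enrichment step filters the set down to its statistically supported core terms, unrelated annotations no longer dilute the score.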
In contrast, the GS2 metric has the smallest resolution (for the tested sets) of all the tested metrics, showing an average range of 0.18. In addition to offering a smaller range of values (and thus a lower resolution), it is important to notice that the reported values for this metric fall within the 0.75-1.0 range of similarity. Given that protein (enzyme) families are expected to contain functionally similar proteins, it would also be expected (and optimal) that these families would display higher coherence. However, when the unadulterated families are considered, some of them do not provide the necessary annotations to support such high global set functional coherence values, especially when considering the values produced from the 100 % randomized sets. The mUI and mGIC (like the metrics they are derived from) also display, as expected, similar behaviours to each other. Their results measure the enrichment contribution relative to the original semantic similarity metrics. In fact, for most of the tested PL families and their respective degenerate sets, the reported values are very similar. However, unlike the other tested metrics, mUI and mGIC are resilient to noise (replacement with random proteins). That is evident from the gradual curves in Fig. 3, which in most families plateau until higher levels of randomization and typically only fall abruptly after the addition of 90 % random proteins. This resilience to noise is conferred by the term enrichment step, which pre-selects only the subset of proteins that are annotated with the terms found to be statistically significant by the enrichment procedure. Thus, this is an important factor to consider when analysing the results provided by these two metrics. As they were engineered to capture local (subset) functional coherence, for a comprehensive evaluation they should only be used in an analysis that also simultaneously considers the annotation coverage within the analysed set.
This also explains the observed peaks at high noise levels in some of the families (PL2, PL6, PL9, PL11), where a small number of terms annotates a small subset of proteins and thus creates a local similarity effect. That is, at high levels of random protein replacement the original families are greatly degenerated, because they lose the proteins that were characteristic of the identity of that family while randomly gaining less related proteins. Hence, if a couple of randomly introduced proteins happen to be very similar in terms of annotations, and those terms are also found to be statistically enriched, then a new similarity core is introduced, which results in the appearance of those peaks of high similarity. However, for this work this behaviour is advantageous, because the underlying assumption is that each protein family shares core annotations that define the group role of that set of proteins. Thus, by using a term enrichment technique, the purpose is to target and select these core annotation terms. The proteins annotated by these identified core annotation terms can then, for instance, be used for annotation extension within that set, as previously proposed [23]. Thus, according to that proposal, for a hypothetical partially annotated protein set (with an expected degree of functional relatedness), the mUI/mGIC metrics can be used to identify the functional core of that set while reporting its functional similarity. If that core reports a high similarity value and also provides enough statistical power (number of associated protein sequences), it can be used to create, for instance, a Hidden Markov Model profile. Subsequently, that model can potentially be used as a classifier in order to extend annotations from the core to the sub-annotated sequences in the original measured protein set.
Defining a completeness state and quantitatively measuring it is a challenging task, considering the complexity of generalizing the rules needed to detect it. Instead, we approach it only qualitatively, by analysing each different protein set case by case, relying on domain knowledge (confirmed and expected functional associations) and then making empirical assertions about the state of annotation completeness of each protein set. For that purpose we use GRYFUN [24], a web application that we have previously developed. This application allows for annotation visualization coupled with statistical assessment (term-for-term enrichment) and is used to produce annotation graphs like the one shown in Fig. 4 (a GO molecular function ontology annotation graph generated by the GRYFUN web application for a set of proteins from the PL10 family). The graph portrayed in Fig. 4 subsumes all the GO terms (from the molecular function ontology branch) annotating a set of PL10 family proteins. Unlike the typical GO graph, where the edges point towards their parent terms, here the edges point towards their children and have widths proportional to the number of proteins annotated to each successive child term. The purpose is to convey the "annotation flow" and to easily identify "annotation bottlenecks", or terms where annotation might have stopped despite the expectation that more proteins in that set could have been annotated to children terms of these "bottlenecks". For the PL10 family set portrayed in Fig. 4, the annotation bottleneck is at the term "lyase activity". Domain knowledge indicates that this term should annotate every protein in this family (e.g. the PL10 family is part of the Polysaccharide Lyases). However, this annotation term is relatively generic, and considering the proportion of proteins not annotated with children of this term (as can easily be seen from the graph), it is fair to assume substantial annotation incompleteness. Additionally, considering the plot in Fig. 3 that represents the degeneration of the PL10 set, it can be seen that the values for mUI and mGIC actually increase along with the degeneration of the set. As previously explained, the enrichment process of the mUI/mGIC algorithm considers only a protein subset of the target set being measured. Hence, it is important to consider other metrics (for instance the parent metrics simUI/simGIC) in tandem with these novel metrics for a global assessment of functional coherence in a set. Nevertheless, these novel metrics allow the identification of core activities which can potentially be extended to more sequences within the original set.

Conclusions

Measuring the functional similarity between proteins based on their annotations is a non-trivial task. Several metrics exist, but due both to characteristics intrinsic to the nature of graphs and to extrinsic factors related to the annotation process, each measure can only capture certain functional annotation aspects of proteins. Hence, when trying to measure the functional coherence of a set, a single metric is too reductive. Therefore, it is valuable to be aware of how each employed similarity metric works and what similarity aspects it can best capture. Here we test the behaviour and resilience of some similarity metrics. Additionally, we propose a comprehensive approach to determining functional coherence in protein sets (families) based not only on metrics but also on statistics (term enrichment) and visualization coupled with domain knowledge-based empirical assessments. Furthermore, we propose two novel metrics, mUI and mGIC, that combine two of the above-mentioned approaches: semantic similarity metrics and term enrichment.
The goal is to capture protein subsets within families (or other functionally related sets) that characterize that family (or set), and which can subsequently be used for annotation extension for potentially sub-annotated proteins within the same family (or set). The proposed approach is modular and can be integrated with other annotation methodologies, mostly as a pre-processing step. In the future, we will implement both the mUI and mGIC metrics (along with others) in our web application GRYFUN. This will make it easier to capture the annotation functional cores in protein sets and pipeline them into a custom annotation extension module based on HMM profiles that we are currently developing.

Additional file

Additional file 1: Average similarity as measured by six different metrics for each of the discrete levels of noise. (XLS 50 kb)
An Analysis of the Characters of Chinese Calligraphy Art Based on Mathematical Elements

Mathematics is an effective way to measure or describe things objectively and accurately. Accurately mastering stroke trend, font structure and component position is of great significance to improving the level of calligraphy art. This paper tries to use mathematical elements to analyze the art of calligraphy. It concludes that, through the mathematical knowledge of angles, the golden section, structure and curves, the rules governing stroke trend, font structure, component position and work shape, as well as the overall layout of characters, can be identified.

Introduction

Chinese characters belong to the few remaining ideographic scripts in the world, and they are also among the few scripts that can be used as artworks. Chinese calligraphy is an ancient art, and its development can be divided into several periods: oracle bone inscriptions, bronze inscriptions, great seal script, small seal script, and official script. It was around the Eastern Han dynasty and the Wei and Jin dynasties that Chinese characters took shape in cursive script, regular script and running script, which generally differ in character shapes and structures. As an art, traditional Chinese calligraphy has been widely loved and studied since its formation. The skills of calligraphy art can be roughly divided into stroke, font structure, font shape, distribution of blank space, text layout and other types, which are considered the basic elements of calligraphy art. The rules for the artistic treatment of these elements constitute the skills of calligraphy art. A large number of calligraphy works are included in the various calligraphy copybooks and reference books published at present, along with calligraphers' professional discussions and comments on writing skills. But these books contain many professional terms, making them hard for most practitioners to understand and use.
Thus, calligraphy practitioners can only repeat their practice to build up their own experience in producing better calligraphy works, gradually working out the rules of calligraphy writing. This method of practice is time-consuming and inefficient. Karl Marx held that a science reaches perfection only when it successfully employs mathematics. As mathematics is a basic discipline that lays a solid foundation for scientific research, this paper attempts to dissect the art of calligraphy through a mathematical lens. The aim of this research is to demonstrate the regular skills in calligraphy with relevant mathematical elements and to make these skills easier to understand, master and apply for the public, especially for calligraphy learners. Ultimately, this research is based on the relevant theories in the field of calligraphy, explores the effect of mathematical knowledge on discovering the beauty of calligraphy works, and tries to fulfil the purpose of improving the artistic level of calligraphy with mathematical knowledge. This kind of artistic analysis method based on mathematical thinking is of practical significance, not only in helping practitioners to master skills quickly, but also in providing new ideas for deeper analysis of the nature of calligraphy art. It also proves again that mathematics, as a fundamental discipline, plays an important role in all fields of research.

Related Works

To make the study more extensively applicable, the paper analyzes various elements of the calligraphy art, including strokes, points (formed by crossing strokes), components (made up of strokes), the different carriers of Chinese characters and the different spatial layouts of calligraphy works. It also analyzes different types of calligraphy elements and summarizes the rules based on the ideas of point-to-line, line-to-surface and plane-to-space in mathematics.
According to analysis of calligraphy copybooks and related reading, combined with mathematical elements like perspective, proportion, center and curve functions, the paper employs simple graphics and measurable data to present the obscure expressions and abstract concepts relevant to Chinese calligraphy. Thus, practitioners can rely on the simplest mathematical knowledge to quickly master the skills of calligraphy, understand the important role of mathematics in artistic aesthetics, and take their understanding of calligraphy art to the next level.

The Progress of Research

In the early stage of human civilization, people carved images on tortoise shells and animal bones; these simple strokes [1], some of which are bent so significantly that they can hardly be identified even now, have since undergone a fairly long period of development. These images form the earliest strokes of Chinese characters that we can see nowadays. The basic strokes of Chinese characters are mostly geometric curves in the mathematical sense [2]. Based on relevant theories in mathematics, the academic community generally holds that the bending positions of Chinese characters are based on some special angles such as 30˚, 45˚, 60˚, 90˚, 120˚ and 150˚, and the curved positions of Chinese characters are accordingly based on arc lines. The flexible use of those angles further contributes to the formation of the general structure of Chinese characters.

Horizontal "Flat" and Obtuse Angle

The horizontal is a main component of Chinese characters and the most basic stroke in Chinese calligraphy; Figure 1 shows the trace of pen movement. When characters are written horizontally, their strokes should be made as smooth as possible, and the end of a stroke should be like a horse's head with the reins tightened [3]. In this paper, the horizontal is viewed as a straight angle of 180˚, in order to avoid shaking at the edge of the pen.
At the same time, when holding the pen to write the relevant characters, it needs to be adjusted to form a 150˚ angle, and the short edge on the right side of the angle should then be used to create a supporting effect and make the strokes more stable, because symmetry and fullness are the most basic aesthetic requirements [4], as shown in Figure 2.

The Turning and Included Angle of Horizontal and Vertical Strokes

With the progress of society, there has been increasingly rich content to be recorded in Chinese characters. This demand has greatly boosted the growth of both the types and the number of Chinese characters, and the strokes that make up Chinese characters have thus also become more complex. In order to write Chinese characters better, softer and finer brushes, as well as flat animal skins, bamboo slips, and silk cloth, began to emerge. This further provided the conditions for more angles to be derived, so that a number of angles can be produced even within a single stroke of a character. As Figure 3 shows, the horizontal line and the vertical line of the upper right part of the Chinese character "国" (meaning country or nation) together form an angle of 90˚ internally and an angle of 150˚ externally. As Figure 4 shows, the starting stroke and the middle stroke of the upper right part of the Chinese character "仍" (meaning still) form an angle of 60˚, while the middle stroke and the horizontal line form an angle of 75˚, with the tip first up to the right and then down to the left, neither horizontal nor vertical [5]. Those angles make the font look wide at the top and tight at the bottom, a shape that is generally recognized as squarely written and beautifully made.

Circles and Arcs in Strokes

The appearance of the curve marked a significant improvement in the art of calligraphy. The Greek Pythagorean school believed that the circle is the most beautiful geometric figure.
Take Figure 5 and Figure 6 for example, which show the Chinese characters "民" (meaning people) and "礼" (meaning rites or ritual) respectively: the harmonious beauty of circles and arcs in the curved strokes can easily be seen, with the greater radians and bends of the stroke lines.

Font Structure and the Golden Ratio

With the appearance of ideographic characters, the forms of Chinese characters cast off the shackles of the images of natural things. Such changes, on the one hand, enriched the connotations of the texts, while on the other hand they also improved their expressive power. Through generations of exploration, the golden section has been found effective in making Chinese characters look more beautifully written, and this concept, widely used to make pottery look more proportionate or to optimize the structure of the buildings we live in, has naturally been applied to the writing of Chinese characters as well. For convenience of analysis, this paper approximates the golden-section values of Chinese characters by rounding 0.618 to 0.6 and 0.382 to 0.4, and explores the rules behind each golden section in terms of simple ratios such as 3:2 or 2:3.

Golden Section in Font Contour

By comparing Figure 7 with Figure 8, it can be seen that the outline of most Chinese characters resembles Da Vinci's Vitruvian Man: like a standing man, with a frame close to the golden ratio. Study on Calligraphy by Qi Gong is the most representative work on this topic. It has been described in many languages and can be simply summarized as follows.

The Golden Section in the Strokes

In Figure 9, the connecting point of the stroke falls exactly on the golden-section point of the center line. On the whole, a stroke placed at the golden-section point gives the whole glyph the basic outline of an equilateral triangle, which looks more stable and beautiful. There are two golden-section points in Figure 10.
It can be seen from the schematic diagram that the two intersections are at 0.6 and 0.4 of the whole stroke respectively, with only a slight deviation overall from the exact golden-section standard. Even strokes of the same category are written with different lengths, and these too basically follow the golden ratio. For example, the horizontal line in Figure 11 has an upper-to-lower ratio of 3:2.
Open Journal of Applied Sciences
The ratio of the same stroke in Figure 12 is also 3:2. The most obvious example is Figure 13. Many copybooks mention that "the middle stroke should be a little shorter", but there is little specific description of how much shorter. According to the golden-section standard, a ratio of 1:0.61 with the last stroke gives the best visual effect.

The Changes in Calligraphy Form and Center Position

Writing surfaces have changed from simple ones such as stone walls, clay tablets, and clay pots to rich carriers such as silk cloth, silk, and paper. With more and more space available for the writing of calligraphy, the most obvious changes have been in calligraphy forms. Changes in form are usually realized by displacing the center, which improves the visual effect [6].

The Changes in the Center of the Glyph

There are few strokes in Figure 14 and Figure 15. Hence, the paper pays attention only to the compact structure and stable center of gravity when the characters ("九", nine; "米", rice) are written. The character "九" (nine) in Figure 14 moves the center of the glyph from point A to point B by raising the strokes from left to right, forming an upright image, while Figure 15 shows that in the character "米" (rice), with its center position moved to the right, the central symmetry of the glyph is broken, forming a more flexible glyph and avoiding a rigid appearance of the character as a whole.
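The golden-section arithmetic above (0.618 rounded to 0.6, 0.382 to 0.4, giving ratios such as 3:2) can be written out explicitly. A minimal Python sketch, not from the paper:

```python
def golden_points(length, ratio=0.618):
    """Return the two golden-section points of a stroke of the given length.

    The paper rounds 0.618 to 0.6 and 0.382 to 0.4 for convenience,
    which corresponds to simple ratios such as 3:2 and 2:3.
    """
    return (round(length * (1 - ratio), 3), round(length * ratio, 3))

# With the paper's rounded ratio, a stroke of length 10 is divided at 4 and 6,
# i.e. an upper-to-lower ratio of 3:2 as in Figure 11.
points_rounded = golden_points(10, ratio=0.6)

# With the exact ratio, a unit-length stroke is divided at 0.382 and 0.618.
points_exact = golden_points(1)
```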
Center between Components

The central position of Chinese characters plays a vital role in the overall beauty of the character form. Generally, the center of a Chinese character should be determined by its main component. As Figure 16 shows, the main component is the right half; therefore, the font shape should be adjusted through stroke changes so that the center point A of the two components is shifted toward the right component. Similarly, the main component in Figure 17 is the left half: by extending the strokes of the right half, the center point A is shifted toward the left component.

The Center of Stroke Formation

Take Figure 18 and Figure 19 for example: when a character has more than one stroke, there are connections between the strokes that make the structure of the character look more compact. In Figure 18, the intersection point of the bottom stroke trace lines is the center of the font, forming a relatively stable triangle that yields a regular font shape. The bottom stroke of the character "照" (meaning shine) in Figure 19 has a clearly aligned layout, drawn to the left from a horizontal position of the same height, which is very neat.

Calligraphy Blank Space and Geometric Figures

With the maturing of the writing of Chinese characters, more and more people came to regard calligraphy as an art rather than simply a tool. The recording function of calligraphy gradually weakened, and its function as a form of art became more prominent [7]. Therefore, calligraphy began to appear in daily life as a work of art. In order to better showcase the beauty of the art of calligraphy, people used geometric figures to design the shape and the blank space, and introduced the concept of "counting the white when designing the black", which again enriched the aesthetic elements of the art of calligraphy, searching for the laws of aesthetics beyond the characters themselves [8].
Rectangular Calligraphy Works

The rectangle is one of the most common forms of calligraphy. Figure 20 below is a horizontal rectangular calligraphy work whose width is one third of its overall length; it can be written in a single line of large characters or several lines of small characters. This form can be traced back to ancient scrolls and bamboo slips.

Round Calligraphy Works

In artistic aesthetics, people have a special preference for the round shape, which is full of harmonious beauty, and the same is true in the art of calligraphy. A calligraphy work with a circular outline is usually mounted in a picture frame and hung indoors for appreciation. When writing, the distance between the arc of the outer circle and the characters should be kept even. Take Figure 23 for example: circle A is the outer frame, and B, D, F, and H are the outermost points of each character in the body of the calligraphy. Using the radius through each of these points, a tangent line to the circle is drawn. It can be seen that the lengths of BC, DE, FG, and HI are roughly the same, which reflects "juzheng" (following the right way) and completeness in ancient Chinese philosophy [9]. The fan in Figure 24 is a calligraphy work with a certain curvature and a unique shape; it can be regarded as a fan-shaped geometric figure, or as a space formed by the superposition of two circles. Compared with the round format above, the application of mathematical knowledge to the fan is more complex; its mounting is a later development.

Calligraphy Aesthetics and Function Curves

Calligraphy is an important form through which ancient Chinese literati displayed their individuality and expressed their aspirations and interests. It has been said that "the style of one's characters shows who one is." Therefore, calligraphy works are greatly influenced by the social environment and the author's personal experience.
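The equal-margin check described for the round format in Figure 23 (the roughly equal lengths BC, DE, FG, HI) amounts to verifying that each character's outermost point lies at about the same radial distance from the circular frame. A minimal Python sketch with hypothetical coordinates, not from the paper:

```python
import math

def radial_margins(center, radius, outer_points):
    """Distance from each character's outermost point to the circular frame,
    measured along the radius through that point (the segments BC, DE, ...)."""
    cx, cy = center
    return [radius - math.hypot(x - cx, y - cy) for x, y in outer_points]

def is_balanced(margins, tol=0.1):
    """The layout is balanced when all radial margins are roughly equal."""
    return max(margins) - min(margins) <= tol

# Hypothetical outermost points of four characters inside a unit-radius frame.
pts = [(0.8, 0.0), (0.0, 0.8), (-0.8, 0.0), (0.0, -0.8)]
margins = radial_margins((0.0, 0.0), 1.0, pts)
balanced = is_balanced(margins)
```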
Take Figure 25 for example: the work, named Orchid Pavilion and known as the "zenith of cursive script", was created in the Wei-Jin period [10]. The political environment at that time was full of darkness and cruelty, so a large number of scholars were forced to give up official careers and instead turned to the study of metaphysics, astronomy, science, and technology. Those factors exerted a subtle influence on the calligraphy art of the time, whose style presents flexibility rather than the characteristics of constraint. In mathematics especially, the research achievements of the mathematicians Liu Hui and Zu Chongzhi provided a new form of mathematical expression for the art of calligraphy. This kind of regular fluctuation readily evokes aesthetic experience and imagination in viewers [11]; thus, in this paper, function curves are used to analyze the calligraphy.

The Wave Changes in Columns

As shown in Figure 26, take the fifth column for example, which mainly illustrates the wave changes within a column: the axes of the 1st and 7th characters tilt to the right, the axes of the 4th and 10th characters tilt to the left, and the axis of the last character tilts to the right again. The axis of each character not only deviates from left to right but also changes its direction of tilt. In this paper, a sine curve is used to symbolically describe the size of the font. Take the ninth column in Figure 27 for example: the first character is a peak because of its larger font, the third character is a valley due to its smaller font, the sixth character is a peak, and the eighth character is a valley. By this analogy, the paper arrives at the function curve shown in Figure 28.
The Wave Changes of Interval Density

The study of character density involves analyzing the different locations of the characters. To facilitate this analysis, the paper sets up a coordinate system on the whole work. The origin is placed at the upper right corner, and each character is denoted X(m, n), where m and n are positive integers: m is the ordinal number of the column and n is the ordinal number of the character within the column. For example, the paper uses A(1, 1) to represent the first character in the first column, and B(7, 9) to represent the ninth character in the seventh column. With this coordinate method, a character in the work can be located quickly and accurately, and its exact position determined from its coordinates. Take the 6th column in Figure 29 for example: the distances from A(6, 4) to B(6, 3) and C(6, 5) look closer; the distances from D(6, 6) to C(6, 5) and E(6, 7) look farther; the distances from F(6, 8) to E(6, 7) and G(6, 9) look closer; and the distances from H(6, 10) to G(6, 9) and I(6, 11) look farther. If a closer distance symbolizes a peak and a farther distance a valley, then this periodic change, like a sine curve, can be described by the specific function shown in Figure 30.

The Wave Changes of Stroke Weight

Similar to the periodic changes of density, there is a rule in the stroke weight of each line of Orchid Pavilion. Take the 22nd column in Figure 31 for example: A(22, 4) is written thickly and looks heavier; this is a peak. In B(22, 5) the strokes are thinner and the font is small; this is a valley. C(22, 7) and D(22, 10) are written thickly again; they are both peaks. In E(22, 12) the font is small; this is the valley at the end of the column. The function curve is shown in Figure 32.
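The coordinate scheme and the sine-curve description above can be sketched in a few lines of Python. The period and phase below are illustrative choices for a hypothetical column, not values taken from the paper:

```python
import math

def coord(column, ordinal):
    """X(m, n): the n-th character in the m-th column, counted from the
    upper right corner of the work as described above."""
    return (column, ordinal)

# B(7, 9): the ninth character in the seventh column.
B = coord(7, 9)

def size_wave(n, amplitude=1.0, period=5, phase=math.pi / 10):
    """A sine curve symbolically describing how character size alternates
    between peaks (larger fonts) and valleys (smaller fonts) down a column.
    The period and phase are illustrative, not fitted to the work."""
    return amplitude * math.sin(2 * math.pi * n / period + phase)

# With this illustrative period, characters 1 and 6 land on peaks while
# characters 3 and 8 fall near valleys, echoing the pattern read off Figure 27.
wave = [round(size_wave(n), 2) for n in range(1, 9)]
```

The same construction applies to the interval-density and stroke-weight waves: map each observation (close/far, heavy/light) to a peak or valley and fit a periodic curve through the resulting sequence.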
Conclusion

Mathematics accurately expresses the rules and skills of things, and the skills and rules of Chinese characters and calligraphy can likewise be described by mathematical concepts. Using simple mathematical knowledge, this paper analyzes some representative Chinese characters and illustrates the skills of the art of calligraphy. It verifies that mathematical knowledge can explain the rules and skills of calligraphy well, and it allows beginners to quickly understand and apply these rules to improve their level of calligraphy.

Future Work

Chinese calligraphy is a special form of Chinese characters. It is an art that comes from nature and has been continuously refined and sublimated by countless Chinese people. Nowadays, more and more people enjoy and practice calligraphy. However, some aspects still need further improvement: 1) Select more objects of analysis and extract more general skills and rules. 2) Establish a better and more accurate coordinate system to make mathematical elements such as angles and functions more concrete. 3) Further explore the representation of mathematics in other calligraphy elements such as seals, signatures, and mounting positions. The paper hopes that further research and analysis of the above content and other related fields will make the major findings of this paper understood more accurately and thoroughly, and ultimately help the art of calligraphy gain more traction around the world, so that people can learn more about this art and apply the skills they acquire to make constant progress in it.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.
Performance analysis of pre-trained transfer learning models for the classification of the rolling bearing faults

Nowadays, artificial intelligence techniques are becoming popular in modern industry for diagnosing rolling bearing faults (RBFs). RBFs occur in rotating machinery, which is common in every manufacturing industry, and their diagnosis is highly needed to reduce financial and production losses. Therefore, various artificial intelligence techniques such as machine learning and deep learning have been developed to diagnose RBFs in rotating machines. However, the performance of these techniques suffers with the size of the dataset, because machine learning and deep learning based methods are suitable for small and large datasets, respectively. Deep learning methods are also limited by long training times. In this paper, the performance of different pre-trained models for RBF classification has been analysed, with the CWRU dataset used for the performance comparison.

Introduction

The fault monitoring of a machine is also called health monitoring. It has been found that most rotating machine failures in industry occur due to rolling bearing faults (RBFs) [1], because rolling bearings are an important part of rotating machines, which makes continuous monitoring of RBFs necessary [2]. In industry, different rotating machines operate continuously, and machine failures occur due to RBFs. Such failures disrupt the normal operation of a manufacturing plant and may lead to economic loss, accidents, and production loss [3]. Therefore, detecting the type and location of RBFs is highly needed to reduce economic and production losses and to avoid accidents.
For the detection of RBFs, both traditional and artificial intelligence (AI) techniques have been reported in the literature; in the era of AI, traditional methods are becoming obsolete and AI-based methods are becoming popular for the diagnosis of RBFs [4]. The AI-based methods are mainly classified as machine learning (ML) based [5], deep learning (DL) based [6], and transfer learning (TL) based methods [7]. ML-based methods need two important steps: feature extraction and fault classification. First, feature extraction is used to extract fault-related information from non-stationary or nonlinear vibration signals [9]. The most commonly used feature-extraction techniques are the fast Fourier transform (FFT) [10], empirical mode decomposition (EMD) [11], ensemble empirical mode decomposition (EEMD) [12], and the discrete wavelet transform (DWT) [13]. Second, for fault classification, techniques such as [10], the support vector machine (SVM) [14], and the extreme learning machine (ELM) [15] have been used. ML-based fault diagnosis methods need experience and prior knowledge to extract features from non-stationary vibration signals, and they are also limited to small datasets. Because ML-based methods learn only one or two layers of representation from the data, their overall performance is poor. Therefore, deep learning methods have been developed to solve the problems of machine learning based methods. DL extracts features from the raw data to train a deep network and does not require expertise [16]. DL has also been used efficiently in image processing for image classification [17], which is useful for identifying bearing faults with the help of time-frequency (TF) analysis methods. A TF analysis method can be used to create an image from a one-dimensional vibration signal of an RBF, and this approach has been applied successfully to RBF diagnosis [7].
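The ML pipeline described above extracts features (for example via the FFT) from vibration segments before classification. A minimal NumPy sketch of such FFT-based features is given below; the particular feature choices (peak frequency, spectral centroid, RMS) are illustrative, not the ones used by any specific reference:

```python
import numpy as np

def fft_features(segment, fs):
    """A few simple spectral features of one vibration segment,
    in the spirit of the FFT-based feature-extraction step above."""
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    peak_freq = freqs[np.argmax(spectrum)]            # dominant frequency
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)  # spectral centroid
    rms = np.sqrt(np.mean(np.square(segment)))        # overall vibration level
    return {"peak_freq": peak_freq, "centroid": centroid, "rms": rms}

# A synthetic 100 Hz tone sampled at 1 kHz: the dominant peak is recovered.
fs = 1000
t = np.arange(1024) / fs
feats = fft_features(np.sin(2 * np.pi * 100 * t), fs)
```

A vector of such features per segment would then be fed to a classifier such as an SVM or ELM, as described in the text.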
In this work, we have studied and analysed the performance of the continuous wavelet transform (CWT) and different pre-trained models at different batch sizes for the classification of 10 fault types. The remainder of the manuscript is organized as follows: Section 2 explains the pre-trained models and transfer learning; Section 3 presents the dataset and evaluation parameters; Section 4 gives the experimental results and performance analysis; Section 5 concludes the work.

Pre-trained models

Many pre-trained models are available at present; this analysis includes alexnet, googlenet, shufflenet, resnet18, and resnet50. All of these are pre-trained convolutional neural network models that have been trained on millions of images from the ImageNet dataset [17]. These pre-trained networks have been used to classify thousands of images of objects such as pencils, mice, keyboards, and several animals. The alexnet has one input layer, one softmax layer, one output layer, two norm layers, two dropout layers, three fully connected (FC) layers, five convolution layers, seven ReLU layers, and three pooling layers. The googlenet has 144 layers in total, which include 57 convolution layers, 57 ReLU layers, 9 depth concatenation layers, 13 max pooling layers, 2 cross-channel normalization layers, one average pooling layer, one dropout layer, one fully connected layer, one softmax layer, one input layer, and one output layer. The resnet18 has 72 layers in total, which include 20 convolutional layers, 17 ReLU layers, 8 addition layers, 20 batch normalization layers, one input layer, one output layer, one preprocessing layer, one max pooling layer, one average pooling layer, one fully connected layer, and one softmax layer.
The shufflenet has 173 layers in total, which include 48 grouped convolutional layers, 48 batch normalization layers, 33 ReLU layers, 16 channel shuffling layers, 3 depth concatenation layers, 13 addition layers, one input layer, one preprocessing layer, one convolution layer, one max pooling layer, one average pooling layer, one fully connected layer, one softmax layer, and one output layer. The resnet50 has 177 layers in total, which include 53 convolutional layers, 53 batch normalization layers, 49 ReLU layers, 16 addition layers, one input layer, one max pooling layer, one average pooling layer, one fully connected layer, one softmax layer, and one output layer.

Transfer learning

Deep learning requires a large number of samples to train a model to good accuracy, so for a small dataset transfer learning is the better choice. TL is commonly used to address the small-dataset limitation of DL: a small dataset is not sufficient to train a deep learning model from scratch. For TL, pre-trained models have been used, and the last three layers of each model have been fine-tuned to classify the 10 fault types. Transfer learning is an easier process for training a network than training a deep learning model from scratch. In the fine-tuning process, we modified the last layer to match the classes of our dataset and retained the layers of the network that we wanted to keep. A stepwise explanation of the proposed algorithm is given below:

Step1: Vibration signal. A vibration signal of 102400 sample points has been considered in the experiments for each fault type.

Step2: Divide and pre-process the vibration signal. The vibration signal for each fault type has been divided into 100 segments, each of 1024 sample points.

Step3: Apply the analytic Morlet wavelet based CWT filter bank to plot the scalogram image. A CWT filter bank (CWTFB) has been applied to convert each segment of the vibration signal into a time-scale scalogram plot.
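Steps 1-3 above can be sketched in Python. The paper itself used MATLAB's CWT filter bank; the Morlet-wavelet convolution below is a simplified stand-in, with illustrative scale choices:

```python
import numpy as np

def segment_signal(signal, seg_len=1024):
    """Step 2: split the vibration record into non-overlapping segments."""
    n_seg = len(signal) // seg_len
    return signal[: n_seg * seg_len].reshape(n_seg, seg_len)

def morlet_scalogram(segment, scales, w0=6.0):
    """Step 3: a minimal Morlet-wavelet scalogram |CWT(a, t)| of one segment.
    Each row is the magnitude of the correlation of the segment with a
    scaled complex Morlet wavelet."""
    t = np.arange(len(segment)) - len(segment) // 2
    rows = []
    for a in scales:
        wavelet = np.exp(1j * w0 * t / a) * np.exp(-0.5 * (t / a) ** 2)
        wavelet /= np.sqrt(a)
        rows.append(np.abs(np.convolve(segment, np.conj(wavelet)[::-1], mode="same")))
    return np.array(rows)

# A 102400-sample record yields 100 segments of 1024 points, as in Step 2.
record = np.random.default_rng(0).standard_normal(102400)
segments = segment_signal(record)
image = morlet_scalogram(segments[0], scales=np.arange(2, 34))
```

Each scalogram matrix would then be rendered as an RGB image and resized to the input size of the chosen pre-trained network (Steps 4-5 below).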
The amor (analytic Morlet) wavelet is used to compute the CWT because of its good time-frequency analysis ability.

Step4: Resize the scalogram images. The scalogram images have been resized according to the input image size of each pre-trained model, since each model accepts a different image size. Alexnet accepts images of 227*227*3, where the width is 227 pixels, the height is 227 pixels, and 3 represents the red, green, and blue channels. Googlenet, shufflenet, resnet18, and resnet50 accept images of 224*224*3, where the width is 224 pixels, the height is 224 pixels, and 3 represents the red, green, and blue channels.

Step5: Load the pre-trained model. The images created from the vibration signals for all faults at the respective load conditions have been applied as input to the pre-trained model. The training and testing samples are chosen randomly in the ratio 7:3.

Step6: Fine tuning. The last three layers have been fine-tuned according to the labelled vibration signal dataset for the ten faults.

Accuracy: Accuracy is the ratio of the sum of true positives (TPs) and true negatives (TNs) to the sum of the TPs, TNs, false positives (FPs), and false negatives (FNs). It has been calculated using equation (3).

Results and discussion

All the pre-trained models were retrained on the small dataset with the following system configuration. Operating system: Windows 10; GPU: NVIDIA GeForce MX230; Software: MATLAB R2019a. In this section, experiments have been conducted to analyse the performance of the CWT and the pre-trained transfer learning models for the classification of the 10 types of rolling bearing faults. In these experiments, a total of 4000 images covering all load conditions were created using the CWT. These images were divided in the ratio 70% to 30% into training and testing samples, respectively.

Conclusion

In this study, we have analysed the performance of the CWT and various pre-trained networks, including alexnet, googlenet, shufflenet, resnet18, and resnet50.
Based on the experimental results, it is concluded that the resnet50 model, with its large number of layers, performs well only at small batch sizes; at larger batch sizes it requires huge computational power. The results also show that the alexnet model, which has fewer layers, performs well at batch sizes of 20 and 32 compared with the other transfer learning models. The computational power required by pre-trained models with more layers at large batch sizes is very high, and the performance of the transfer learning models can be improved by increasing GPU power.
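The transfer-learning idea evaluated in this paper — keep a pre-trained network's early layers frozen and retrain only the final classification layers on the new, small dataset — can be illustrated framework-independently. In the NumPy sketch below the "frozen" feature extractor is a stand-in random projection and the data are hypothetical; this is a conceptual sketch, not the paper's MATLAB implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "feature extractor": a fixed projection standing in for the early
# layers of a pre-trained network, whose weights are not updated.
W_frozen = rng.standard_normal((64, 16))

def features(x):
    return np.tanh(x @ W_frozen)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, onehot):
    return -np.mean(np.sum(onehot * np.log(p + 1e-12), axis=1))

def train_head(x, y, n_classes=10, lr=0.1, epochs=300):
    """Retrain only the final classification layer (the 'fine-tuned' part)."""
    f = features(x)
    onehot = np.eye(n_classes)[y]
    W = np.zeros((f.shape[1], n_classes))
    for _ in range(epochs):
        p = softmax(f @ W)
        W -= lr * f.T @ (p - onehot) / len(x)   # gradient step on the head only
    return W

# Hypothetical 10-class dataset (400 samples), split 70:30 as in the paper.
means = 3.0 * rng.standard_normal((10, 64))
y = np.repeat(np.arange(10), 40)
x = means[y] + 0.5 * rng.standard_normal((400, 64))
idx = rng.permutation(400)
train, test = idx[:280], idx[280:]

W_head = train_head(x[train], y[train])
pred = softmax(features(x[test]) @ W_head).argmax(axis=1)
accuracy = (pred == y[test]).mean()
```

In the actual study the feature extractor would be the retained ImageNet-trained layers of alexnet, googlenet, shufflenet, resnet18, or resnet50, and the retrained head the last three layers.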
Accurate Electronic, Transport, and Bulk Properties of Gallium Arsenide (GaAs)

We report accurate, calculated electronic, transport, and bulk properties of zinc blende gallium arsenide (GaAs). Our ab-initio, non-relativistic, self-consistent calculations employed a local density approximation (LDA) potential and the linear combination of atomic orbitals (LCAO) formalism. We strictly followed the Bagayoko, Zhao, and Williams (BZW) method as enhanced by Ekuma and Franklin (BZW-EF). Our calculated, direct band gap of 1.429 eV, at an experimental lattice constant of 5.65325 Å, is in excellent agreement with the experimental values. The calculated, total density of states data reproduced several experimentally determined peaks. We have predicted an equilibrium lattice constant, a bulk modulus, and a low temperature band gap of 5.632 Å, 75.49 GPa, and 1.520 eV, respectively. The latter two are in excellent agreement with corresponding, experimental values of 75.5 GPa (74.7 GPa) and 1.519 eV, respectively. This work underscores the capability of the local density approximation (LDA) to describe and to predict accurately properties of semiconductors, provided the calculations adhere to the conditions of validity of DFT [AIP Advances, 4, 127104 (2014)].

Introduction

Gallium arsenide is an important electronic and opto-electronic material. 1 It is a prototypical binary semiconductor. It has a high electron mobility and a small dielectric constant; GaAs is extensively utilized in high temperature resistance, ultrahigh frequency, low-power devices and circuits. 2 Gallium arsenide crystallizes in the zinc-blende structure; many experimental and theoretical works have established that it has a direct band gap. Several experimental reports dealt with the room temperature band gap of the material. Room temperature band gaps as small as 1.2 eV 3 and as high as 1.7 eV 4 have been reported. Dong et al.
4 attributed the significant difference between these two values to tip-induced band bending in the semiconductor. Recent experimental values of the room temperature band gap of GaAs are 1.42 eV 5 , 1.425 eV 6 , and 1.43 eV. 7 The accepted value of the room temperature band gap is 1.42 eV 5 to 1.43 eV, 7 in basic agreement with 1.425 eV and 1.430 eV. In the bottom rows of Table I, we show over 10 different measurements of the band gap of GaAs. As per the content of this table, the consensus experimental band gap, at low temperature, is 1.519 eV. [8][9][10] Numerous theoretical results have been reported for the band gap of GaAs. Our focus on the band gap stems from its importance in describing several other properties of semiconductors [AIP Advances]; in particular, a wrong band gap precludes agreement between the peaks in the calculated densities of states, dielectric functions, and optical transition energies and their experimental counterparts. In contrast to the consensus reached for the room and low temperature experimental gaps of GaAs, the picture for theoretical results is far from satisfactory. Indeed, numerous theoretical values of the band gap, obtained from ab-initio calculations, disagree with each other and with experiment. Table I contains over 28 band gaps calculated with a local density approximation (LDA) potential. Some 16 of these results, from ab-initio calculations, range from 0.09 eV 11,12 to 0.98 eV. 13 Other results obtained with LDA potentials, as shown in Table I, are either underestimates or overestimates of the band gap of GaAs, except for some three that require comment. The linear muffin tin orbital (LMTO) calculation that obtained a gap of 1.46 eV 14 employed an additional potential besides the standard LDA. The ab-initio LDA calculation that obtained a band gap of 1.54 eV 15 employed a lattice constant of 5.45 Å, a value that is 3% smaller than the low temperature value in Table I.
As explained elsewhere, 16 the Tran and Blaha modified Becke and Johnson potential (TB-mBJ) 17 is not entirely a density functional one, given that it cannot be obtained from the functional derivative of an exchange correlation energy functional. 16,18 So, while two calculations with this potential led to gaps of 1.46 eV 19 and 1.56 eV, 20 in general agreement with experiment, these values do not resolve the woeful underestimation by most of the LDA and GGA calculations in Table I. As shown in Table I, 12 calculations employing a generalized gradient approximation (GGA) found band gap values varying from 0.206 eV 19 to 1.03 eV. 21 Only one GGA calculation found a gap of 1.419 eV, 22 in basic agreement with the above accepted, experimental gaps of 1.42 eV - 1.43 eV and 1.519 eV for room and low temperatures, respectively. The calculation that utilized a meta-GGA potential found a gap of 1.276 eV, 19 smaller than the experimental one. The Green function and dressed Coulomb (GW) approximation calculations led to mixed results. The non-self-consistent G0W0 calculation obtained a gap of 1.51 eV, 23 in agreement with the low temperature experimental value of 1.519 eV, while the self-consistent GW calculation produced 1.133 eV, 24 well below the low temperature value. Several other theoretical results are reported in Table I. Some utilized a hybrid functional potential, 23 while others employed the modified Becke and Johnson (mBJ) potential. 16 These potentials differ from the standard, ab-initio LDA or GGA potentials due to the utilization of one or more parameters in their construction. The results of calculations employing these potentials vary with those parameters. For this reason, these results, while very useful, do not resolve the fundamental question of the serious band gap underestimation.
With the use of several fitting parameters, the three empirical pseudopotential calculations shown in Table I understandably led to the correct, low temperature experimental band gap of GaAs. The above overview of the literature points to the need for our work. Indeed, numerous calculated values of the band gap disagree with the corresponding experimental ones, and the disagreement between sets of calculated band gaps, as evident above and in Table I, adds to our motivation for this work. At the onset, we have to answer the question as to why our LDA calculations can be expected to lead to an accurate description of the electronic and related properties of GaAs. Past, accurate descriptions 16 and predictions 16 of properties of semiconductors, using the distinctive feature of our calculations, portend the same for GaAs. This distinctive feature, the Bagayoko, Zhao, and Williams (BZW) method, as enhanced by Ekuma and Franklin (BZW-EF), strictly adheres to the conditions of validity of DFT and LDA potentials, as elucidated by Bagayoko. 16 We are aware of some explanations of the failures of many previous calculations to produce correct values of the band gaps of semiconductors or insulators. Prominent among them are the self-interaction (SI) 25 and the derivative discontinuity 26-28 of the exchange correlation energy. Bagayoko, 16 using strictly DFT theorems and the Rayleigh theorem for eigenvalues, demonstrated that self-consistent calculations that do not adhere to well-defined, intrinsic features of DFT cannot claim to produce eigenvalues and other quantities that possess the full physical content of DFT. Hence, disagreements between their results and experiment may arise mostly from the fact that their findings do not fully possess the physical content of DFT. Our perusal of the articles that reported the results in Table I did not lead to any publication that adhered totally to these features of DFT.
Specifically, we could not find any calculation that methodically searched for and attained the absolute minima of the occupied energies, using increasingly larger and embedded basis sets. 16 The point here is that, popular explanations of band gap underestimation by DFT calculations notwithstanding, our distinctive computational method is likely to describe GaAs accurately. The rest of this paper is organized as follows. This section, devoted to the introduction, is followed by a description of our computational method, in Section II. We subsequently present our results in Section III and discuss them in Section IV. Section V provides a short conclusion.

II. Computational Approach and the BZW-EF Method

Our calculations are similar to most of the previous ones discussed in Table I, as far as the choice of the potential and the use of the linear combination of atomic orbitals (LCAO) are concerned. We used the local density approximation (LDA) potential of Ceperley and Alder 52 as parameterized by Vosko, Wilk and Nusair. 53 We employed Gaussian functions in the radial parts of the atomic orbitals, resulting in the linear combination of Gaussian orbitals (LCGO). The distinctive feature of our calculations, as compared to the ones discussed above, stems from our implementation of the LCGO formalism following the Bagayoko, Zhao, and Williams (BZW) method, as enhanced by Ekuma and Franklin (BZW-EF). 16,54,55 The method searches for the absolute minima of the occupied energies, using successively augmented basis sets, and avoids the destruction of the physical content of the low, unoccupied energies once the referenced minima are attained. Typically, the implementation starts with a self-consistent calculation that employs a small basis set; this basis set is not to be smaller than the minimum basis set, the one that can just account for all the electrons in the system.
A second calculation follows, with a basis set consisting of the previous one plus one additional orbital. The dimension of the Hamiltonian matrix is consequently increased by 2, 6, 10, or 14 for s, p, d, and f orbitals, respectively. Upon the attainment of self-consistency, the occupied energies of Calculation II are compared to those of I, graphically and numerically. In general, upon setting the Fermi level to zero, some occupied energies from Calculation II are found to be lower than corresponding ones from Calculation I. This process of augmenting the basis set and of comparing the occupied energies of a calculation to those of the one immediately preceding it continues until three consecutive calculations lead to the same occupied energies. This criterion is a clear indication of the attainment of the absolute minima of the occupied energies. The first of these three calculations, with the smallest basis set, is the one that provides the DFT description of the material. The basis set for this calculation is the optimal basis set. While the second of these calculations generally leads to the same occupied and low, unoccupied energies up to 6-10 eV, depending on the material, the third of these calculations often lowers some low, unoccupied energies from their values obtained with the optimal basis set. We should note that the referenced three calculations lead to the same electronic charge density. As explained by Bagayoko, 16 the energy functional derived from the Hamiltonian is a unique functional of the ground state charge density. Hence, the occupied and unoccupied energies of the spectrum of this Hamiltonian, with the physical content of DFT, cannot change upon an increase of the basis set. Consequently, the unoccupied energies obtained with basis sets much larger than the optimal basis set, and that contain this set, do not represent DFT solutions if they differ from their corresponding values obtained with the optimal basis set. 
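The basis-set augmentation and comparison procedure described above can be sketched in Python. This is a hypothetical illustration, not part of any actual package: `run_scf` below is a toy stand-in for a self-consistent LCGO calculation returning the occupied eigenvalues (with the Fermi level set to zero).

```python
# Toy model of a self-consistent calculation: the occupied energies are
# lowered as orbitals are added, then saturate at their absolute minima.
def run_scf(basis):
    n = len(basis)
    return [-10.0 + max(0, 5 - n) * 0.1, -5.0]

def find_optimal_basis(minimal_basis, extra_orbitals, tol=1e-6):
    basis = list(minimal_basis)
    history = [run_scf(basis)]                 # Calculation I
    for orbital in extra_orbitals:
        basis.append(orbital)                  # augment by one orbital
        history.append(run_scf(basis))
        # stop when three consecutive calculations give the same occupied
        # energies: the absolute minima of the occupied energies are attained
        if len(history) >= 3 and all(
            max(abs(a - b) for a, b in zip(history[-1], h)) < tol
            for h in history[-3:-1]
        ):
            return basis[:-2]                  # first (smallest) of the three
    raise RuntimeError("occupied energies did not converge")

optimal = find_optimal_basis(["3s", "3p", "3d"], ["4s", "4p", "4d", "5s", "5p"])
```

The returned basis is the optimal one: the smallest of the three consecutive basis sets that yield identical occupied energies.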
Bagayoko 16 explained the unphysical nature of unoccupied energies, lowered from their values obtained with the optimal basis set, in terms of mathematical artifacts stemming from the Rayleigh theorem for eigenvalues. Upon the attainment of the absolute minima of the occupied energies, the above extra lowering of some unoccupied energies with increasing basis sets is a possible explanation not only of the underestimation of band gaps by calculations that do not search for and find the optimal basis set, but also of the discrepancies between several calculations that utilize the same potential and computational formalism, as shown in Table I. The following computational details are intended to facilitate the replication of our work. GaAs is a III-V semiconductor, with the zinc blende crystal structure under normal conditions of temperature and pressure. We used the experimental, room temperature lattice constant of 5.65325 Å. 56 Ab-initio calculations of the electronic structures of Ga+1 and As-1 produced the atomic orbitals employed in the solid state calculation. We utilized even-tempered Gaussian exponents, with 0.28 as the minimum and 0.55 × 10^5 as the maximum, in atomic units, for Ga+1. We used 18 Gaussian functions for the s and p orbitals and 16 for the d orbitals. Similarly, the Gaussian exponents for describing As-1 ranged from 0.2404 to 0.349 × 10^5. A mesh size of 60 k points in the irreducible Brillouin zone, with appropriate weights, was used in the iterations for self-consistency. The computational error for the valence charge was about 1.25 × 10^-3 per electron. The self-consistent potentials converged to a difference around 10^-5 between two consecutive iterations. With the LDA potential identified above and the computational details, we implemented the LCGO formalism following the BZW-EF method. Upon the attainment of the absolute minima of the occupied energies, the optimal basis set was employed to produce the band structure of GaAs.
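The even-tempered exponents described above form a geometric progression between the stated minimum and maximum; a minimal sketch (the 18-term count for the As-1 set is our assumption, made for illustration, since the text only quotes its endpoints):

```python
# Even-tempered Gaussian exponents: alpha_k = alpha_min * beta**k, with the
# ratio beta fixed by the endpoints and the number of functions.
def even_tempered(alpha_min, alpha_max, n):
    beta = (alpha_max / alpha_min) ** (1.0 / (n - 1))
    return [alpha_min * beta**k for k in range(n)]

ga_sp = even_tempered(0.28, 0.55e5, 18)       # Ga+1 s and p exponents (a.u.)
as_sp = even_tempered(0.2404, 0.349e5, 18)    # As-1 set (term count assumed)
```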
The resulting eigenvalues and corresponding wave functions were utilized to calculate the total (DOS) and partial (pDOS) densities of states, as well as electron and hole effective masses. From the curve of the calculated total energy versus the lattice constant, we obtained the equilibrium lattice constant and the bulk modulus. These results follow below, in Section III.

III. Results

We present below the successive calculations that led to the absolute minima of the occupied energies for GaAs. Then, we discuss the electronic energy bands resulting from the calculation with the optimal basis set. We subsequently show the total (DOS) and partial (pDOS) densities of states and effective masses derived from the energy bands. The last results to be discussed pertain to the total energy curve, the equilibrium lattice constant, and the bulk modulus. We show, in Table II below, the successive calculations with increasing basis sets, along with the applicable orbitals and calculated band gaps. The occupied energies obtained by Calculations III, IV, and V are identical. Hence, Calculation III provides the DFT description of GaAs.

Calc. | Orbitals for Ga1+              | Orbitals for As1-              | No. of wave functions | Band gap (eV)
I     | 3s2 3p6 3d10 4s2 4p0           | 3s2 3p6 3d10 4s2 4p4           | 52 | 1.380
II    | 3s2 3p6 3d10 4s2 4p0 4d0       | 3s2 3p6 3d10 4s2 4p4           | 62 | 1.368
III   | 3s2 3p6 3d10 4s2 4p0 4d0       | 3s2 3p6 3d10 4s2 4p4 4d0       | 72 | 1.429
IV    | 3s2 3p6 3d10 4s2 4p0 4d0 5s0   | 3s2 3p6 3d10 4s2 4p4 4d0       | 74 | 1.270
V     | 3s2 3p6 3d10 4s2 4p0 4d0 5s0   | 3s2 3p6 3d10 4s2 4p4 4d0 5s0   | 76 | 1.238

The calculated band structure of GaAs, from Calculation III, is shown in Figure 1.
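The extraction of the equilibrium lattice constant and bulk modulus from the total-energy-versus-lattice-constant curve mentioned above can be illustrated as follows. The E(a) samples below are synthetic placeholders, not our computed values, and a simple quadratic fit in the volume stands in for a full equation-of-state (e.g. Birch-Murnaghan) fit:

```python
import numpy as np

# Hypothetical total-energy curve: lattice constants (Angstrom) and E (eV).
a = np.linspace(5.45, 5.85, 9)
e = 0.9 * (a - 5.65) ** 2 - 20.0

v = a**3 / 4.0                            # volume per formula unit (zinc blende)
c2, c1, c0 = np.polyfit(v, e, 2)          # E(V) ~ c2*V^2 + c1*V + c0
v0 = -c1 / (2.0 * c2)                     # equilibrium volume
a0 = (4.0 * v0) ** (1.0 / 3.0)            # equilibrium lattice constant
bulk_modulus = 2.0 * c2 * v0 * 160.2177   # B = V * d2E/dV2; eV/A^3 -> GPa
```

The same fit applied to the computed E(a) points yields the equilibrium lattice constant and bulk modulus reported in Section III.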
As per the explanations provided in the method section, the superposition of the occupied energies from Calculations III, IV, and V signifies that the absolute minima of the occupied energies are reached in Calculation III, whose corresponding basis set is the optimal basis set. The calculated, direct band gap at the Γ point is 1.429 eV (≈ 1.43 eV). This value is in excellent agreement with the accepted value for the room temperature experimental band gap of GaAs, i.e., 1.42-1.43 eV. This agreement is in stark contrast with the case of most previous, calculated band gaps in Table I. Figures 2 and 3 show the total (DOS) and partial (pDOS) densities of states obtained from the bands resulting from Calculation III. Several features of our calculated density of states (DOS) are close to or the same as those of experimental densities of states from X-ray photoemission spectroscopy measurements. 57 According to Fig. 14 in the article by Ley et al., 57 the peak positions of P I, P II, and P III correspond to binding energies of 1.0 eV, 6.6 eV, and 11.4 eV, respectively. From our calculations, the corresponding values are 1.0 eV, 6.4 eV, and 11.0 eV, respectively. The labels of the peaks are as reported by Ley et al. 57 As per our calculated pDOS in Fig. 3, the lowest lying group of valence bands is entirely from Ga d, while the middle group consists mostly of As s with faint contributions from Ga s and Ga p. The uppermost group of valence bands is clearly dominated by As p, with a significant overlap with Ga s and a smaller contribution from Ga p. We provide further results in Table III. We calculated the effective masses of n-type carriers for GaAs, using the electronic structure from Calculation III (in Fig. 1), i.e., the vicinity of the conduction band minimum at the Γ point. In Table IV, we show our results along with several previous theoretical and experimental ones. Experimental electron effective masses are directionally averaged.
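The effective masses quoted above follow from parabolic fits of the bands near the band extrema, m* = ħ²/(d²E/dk²). A sketch, with a hypothetical parabola standing in for the computed conduction band (the 1.429 eV offset is the calculated gap quoted above; the curvature is chosen for illustration only):

```python
import numpy as np

HBAR2_OVER_ME = 7.61996  # hbar^2 / m_e in eV * Angstrom^2

# Hypothetical E(k) samples near the conduction band minimum at Gamma.
k = np.linspace(-0.05, 0.05, 11)          # wavevectors (1/Angstrom)
energy = 1.429 + 57.7 * k**2              # parabolic band (eV)

curvature = 2.0 * np.polyfit(k, energy, 2)[0]   # d2E/dk2
m_eff = HBAR2_OVER_ME / curvature               # effective mass in units of m_e
```

In practice the fit is repeated along different k directions to obtain the directional effective masses listed in Table IV.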
Our results are comparable with those from measurement.

IV. Discussions

From our overview of the literature and the content of Table I, the band gap of GaAs, a prototypical semiconductor, was systematically underestimated by first-principles, self-consistent calculations that utilized ab-initio LDA or GGA potentials. Unlike these previous results, our calculated, direct band gaps of 1.429 eV and 1.520 eV, for room and low temperatures, respectively, are in excellent agreement with corresponding, experimental ones. As shown in the section on results, the locations of several peaks in the calculated, total valence density of states practically agree with corresponding experimental ones. This latter agreement strongly indicates that our calculated band gap values are not fortuitous. Additionally, our calculated effective masses are close to corresponding, available, experimental ones, like some previous, theoretical results. A detailed comparison of the calculated effective masses with experimental ones is partly hindered by the unavailability of directional effective masses; most experiments reported averaged values. Our explanation of the excellent agreements noted above rests on the fact that our calculations, with the BZW-EF method, strictly adhered to necessary conditions 16 for their results to have the physical content of DFT. A careful perusal of the articles reporting the previous results in Table I found no indication that the pertinent calculations searched for and verifiably attained the absolute minima of the occupied energies. Without this explicit attainment, the results cannot be expected to possess the full physical content of DFT.
16 The BZW-EF method invokes the Rayleigh theorem for the selection of the optimal basis set out of several others that lead to the same occupied energies; the smallest of these basis sets, the optimal basis set, is complete for the description of the ground state and is not over-complete, unlike much larger ones that include it. Different over-complete basis sets containing the optimal one are expected to lead to different, underestimated values of the measured band gap.

V. Conclusion

We performed ab-initio, self-consistent calculations of electronic, transport, and bulk properties of GaAs. Our results, unlike those of many previous ab-initio calculations, agree very well with experiment for the band gaps, the total density of states, and the bulk modulus; they also agree with experiment for the effective masses, where the latter are inversely related to the mobility of charge carriers. We credit our strict adherence to conditions of validity for DFT or LDA potentials, through our implementation of the BZW-EF method, for the above agreements between our calculated results and experimental ones.
Crystal structure, Hirshfeld surface analysis and interaction energy and DFT studies of 5,5-diphenyl-1,3-bis(prop-2-yn-1-yl)imidazolidine-2,4-dione

The title molecule consists of an imidazolidine unit linked to two phenyl rings and two prop-2-yn-1-yl moieties. The imidazolidine ring is oriented at dihedral angles of 79.10 (5) and 82.61 (5)° with respect to the phenyl rings, while the dihedral angle between the two phenyl rings is 62.06 (5)°. In the crystal, C—HProp⋯OImdzln (Prop = prop-2-yn-1-yl and Imdzln = imidazolidine) hydrogen bonds link the molecules into infinite chains along the b-axis direction. Two weak C—HPhen⋯π interactions are also observed.

Chemical context

Pyrazolones are an important class of heterocyclic compounds that occur in many drugs, and their derivatives have long been of interest to medicinal chemists for their wide range of biological activities (Pawar & Patil, 1994), including antibacterial, antidiabetic and immunosuppressive agents, and substances displaying hypoglycemic, antiviral and antineoplastic actions (Pathak & Bahel, 1980; Naik & Malik, 2010; Srivalli et al., 2011). Their pharmaceutical applications include use as a non-steroidal anti-inflammatory agent in the treatment of arthritis and other musculoskeletal and joint disorders (Amir & Kumar, 2005), and as analgesic, antipyretic (Badawey & El-Ashmawey, 1998) and hypoglycemic agents (Das et al., 2008). They also have fungicidal (Singh & Singh, 1991) and antimicrobial properties (Sahu et al., 2007), and some have been tested as potential cardiovascular drugs (Higashi et al., 2006). In the past few years, research has been focused on existing molecules and their modifications in order to reduce side effects and to explore other pharmacological and biological activity (Sahu et al., 2007; Naik & Malik, 2010; Srivalli et al., 2011).
As a continuation of our research on the development of new N-substituted pyrazolone derivatives and the evaluation of their potential pharmacological activities, we report herein the synthesis, the molecular and crystal structures, the Hirshfeld surface analysis, the intermolecular interaction energies and the density functional theory (DFT) computational calculations of the title compound, (I).

Supramolecular features

In the crystal, C—HProp⋯OImdzln (Prop = prop-2-yn-1-yl and Imdzln = imidazolidine) hydrogen bonds (Table 1 and Fig. 2) link the molecules into infinite chains along the b-axis direction. Two weak C—HPhen⋯π interactions (Table 1) may also contribute to the stabilization of the crystal structure.

Hirshfeld surface analysis

In order to visualize the intermolecular interactions in the crystal of the title compound, a Hirshfeld surface (HS) analysis (Hirshfeld, 1977; Spackman & Jayatilaka, 2009) was carried out by using CrystalExplorer17.5 (Turner et al., 2017). In the HS plotted over d_norm (Fig. 3), the white surface indicates contacts with distances equal to the sum of the van der Waals radii, and the red and blue colours indicate distances shorter (in close contact) or longer (distinct contact) than the van der Waals radii, respectively (Venkatesan et al., 2016). The bright-red spots appearing near O2 and hydrogen atom H16B indicate their roles as the respective donors and/or acceptors; they also appear as blue and red regions corresponding to positive and negative potentials on the HS mapped over electrostatic potential (Spackman et al., 2008; Jayatilaka et al., 2005), as shown in Fig. 4.

Table 1. Hydrogen-bond geometry (Å, °).

Figure 1. The molecular structure of the title compound with the atom-numbering scheme. Displacement ellipsoids are drawn at the 50% probability level.
The shape-index of the HS is a tool to visualize π–π stacking by the presence of adjacent red and blue triangles; if there are no adjacent red and/or blue triangles, then there are no π–π interactions. The relative contributions of the various interatomic contacts to the HS are given in Table 2. The most important interaction is H⋯H, contributing 43.3% to the overall crystal packing, which is reflected in Fig. 6b as widely scattered points of high density, due to the large hydrogen content of the molecule, with the tip at d_e + d_i ≈ 2.44 Å. In the presence of two weak C—H⋯π interactions, the pair of wings of scattered points resulting from H⋯C/C⋯H contacts, with a 37.8% contribution to the HS, have a symmetrical distribution of points.

Figure 4. View of the three-dimensional Hirshfeld surface of the title compound plotted over the electrostatic potential energy in the range −0.0500 to 0.0500 a.u. using the STO-3G basis set at the Hartree-Fock level of theory. Hydrogen-bond donors and acceptors are shown as blue and red regions around the atoms, corresponding to positive and negative potentials, respectively.

Table 2. Selected interatomic distances (Å).

Figure 3. View of the three-dimensional Hirshfeld surface of the title compound plotted over d_norm in the range −0.2703 to 1.2169 a.u.

Figure 5. Hirshfeld surface of the title compound plotted over shape-index.

The Hirshfeld surface analysis confirms the importance of H-atom contacts in establishing the packing. The large number of H⋯H, H⋯C/C⋯H and H⋯O/O⋯H interactions suggest that van der Waals interactions and hydrogen bonding play the major roles in the crystal packing (Hathwar et al., 2015).

DFT calculations

The optimized structure of the title compound in the gas phase was generated theoretically via density functional theory (DFT) calculations using the standard B3LYP functional (Becke, 1993) as implemented in GAUSSIAN 09 (Frisch et al., 2009). The theoretical and experimental results are in good agreement (Table 4).
The highest occupied molecular orbital (HOMO), acting as an electron donor, and the lowest unoccupied molecular orbital (LUMO), acting as an electron acceptor, are very important parameters for quantum chemistry. When the energy gap is small, the molecule is highly polarizable and has high chemical reactivity. The DFT calculations provide some important information on the reactivity and site selectivity of the molecular framework. E_HOMO and E_LUMO clarify the inevitable charge-exchange collaboration inside the studied material; the electronegativity (χ), hardness (η), chemical potential (μ), electrophilicity (ω) and softness (σ) are recorded in Table 3. The significance of η and σ is to evaluate both the reactivity and stability of a compound. The electron transition from the HOMO to the LUMO energy level is shown in Fig. 8. The HOMO and LUMO are localized in the plane extending from the whole 5,5-diphenyl-1,3-di(prop-2-yn-1-yl)imidazolidine-2,4-dione ring. The energy band gap [ΔE = E_LUMO − E_HOMO] of the molecule is about 5.8874 eV, and the frontier molecular orbital energies, E_HOMO and E_LUMO, are −6.6964 and −0.8090 eV, respectively.

Figure 8. The energy band gap of the title compound.

1.0 mmol) at room temperature. The reaction was monitored using TLC. After removal of the inorganic salt by filtration, the solution was evaporated under reduced pressure. The residue was separated by chromatography on a column of silica gel with ethyl acetate-hexane (v:v 3:7) as eluent. The isolated solid was crystallized from ethanol solution to afford colourless crystals (yield: 82%).

Special details

Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry.
An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.
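The global reactivity descriptors discussed in the DFT section follow directly from the frontier orbital energies quoted there, assuming the standard Koopmans-type conceptual-DFT definitions (the exact conventions used in the paper's Table 3, e.g. for the softness, may differ):

```python
# Frontier orbital energies quoted in the text (eV).
e_homo, e_lumo = -6.6964, -0.8090

gap = e_lumo - e_homo                  # energy band gap, ~5.8874 eV
chi = -(e_homo + e_lumo) / 2.0         # electronegativity
mu = -chi                              # chemical potential
eta = (e_lumo - e_homo) / 2.0          # hardness
sigma = 1.0 / eta                      # softness
omega = mu**2 / (2.0 * eta)            # electrophilicity index
```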
Selenium Application Timing: Influence in Wheat Grain and Flour Selenium Accumulation Under Mediterranean Conditions

Millions of people have an inadequate supply of selenium (Se), and Se-biofortified crops could prevent such deficiency. In order to establish an effective Se biofortification program on wheat under Mediterranean conditions, the objective of the present study was to evaluate the effect of the Se application timing on the Se accumulation in the grain, the yield and the protein content. In a field experiment, ten g ha-1 of sodium selenate were foliar-applied at four different growth stages: at 1 node detectable (GS-31); at 5 nodes detectable (GS-35); at boots just swollen (GS-45); and at 1 spikelet visible (GS-51), in two different growing seasons, 2010-2011 and 2011-2012. The application of Se between GS-35 and GS-45 produced the highest Se accumulation in grain, especially in humid years. The milling process caused Se losses of about 15%. Under the special conditions of the Mediterranean area, a proper timing of Se application might have major importance for the Se accumulation in the grain, but due to the rainfall before application rather than to the plant growth stage.

Introduction

Two billion people in the world are thought to suffer from one or more micronutrient deficiencies (FAO, IFAD & WFP, 2012). Among such deficiencies, an inadequate supply of selenium (Se), a micronutrient essential for humans and animals, has been associated with a multitude of health disorders, including oxidative stress-related conditions, reduced fertility and immune function, cardiomyopathy, and an increased risk of cancers (Reid et al., 2008; Zeng & Combs, 2008; Rayman, 2012). This deficiency is supported by the studies carried out in Europe by Roman-Viñas et al.
(2011), which showed an inadequate intake of Se in more than 20% of the population. In this context, the Se intake should be greatly increased to reach the recommended values. The European Recommended Dietary Allowance (RDA) of Se for humans is about 55 μg Se day-1 (Elmadfa, 2009). Several authors go further and, after carrying out clinical trials, have recommended a regular oral dose of 200 μg Se day-1 to reduce the incidence of certain cancers and other diseases (Arthur, 2003; Reid et al., 2008). Because food consumption provides the principal route of Se intake for most of the population, the Se biofortification of crops has been demonstrated to increase the Se in common dietary foodstuffs (Broadley et al., 2010) and thus to enhance human nutrition. Among various crops, cereals could constitute a major source of Se, as they are consumed in large amounts in the human diet. Among cereals, bread making wheat (Triticum aestivum L.) has a great relevance in Mediterranean areas and it is the most consumed cereal by humans in the European countries. The wheat grain has been shown to contain a wide range of Se concentrations, depending on the Se concentration in the soil. In Spain, the Se concentration in wheat grain derived products (white flour and biscuits) ranges between 30 μg kg-1 and 60 μg kg-1 of Se (Diaz-Alarcon, Navarro-Alarcon, de la Serrana & Lopez-Martinez, 1996); these concentrations are not enough to meet the Se intake recommendations. Most of the Se biofortification studies carried out on wheat have been performed in oceanic or continental areas, with high and regular rainfall and temperate temperatures (Broadley et al., 2010; Stroud et al., 2010). Under semiarid Mediterranean or other similar conditions, characterised by scarce and irregular rainfall, Se biofortification has shown a different pattern, with a higher accumulation potential, at least in other cereals such as two-rowed barley and hard wheat (Rodrigo, Santamaria, Lopez-Bellido, &
Poblaciones, 2013; Poblaciones, Rodrigo, Santamaria, Chen, & McGrath, 2014). On bread making wheat, although there are studies in Portugal (Galinha et al., 2013), Australia (Lyons et al., 2004) and New Zealand (Curtin, Hanson, & van der Weerden, 2008), the effect of Se biofortification under Mediterranean conditions is still poorly understood. Regarding the time of application, many authors (Curtin, Hanson, Lindley, & Butler, 2006; Chu, Yao, Yue, Li, & Zhao, 2013) recommended just one application at the stem elongation stage, when the flag leaf ligule/collar is just visible (GS-39 according to the Zadoks scale). That growth stage has been regarded as the most effective one for the later Se accumulation in the grain when several application moments were evaluated. However, most of those studies were performed in pot experiments under glasshouse conditions or in oceanic or continental areas. Due to the different soil, climatic and cropping conditions, the transfer of such findings to Mediterranean conditions is questionable. Therefore, the main objective of the present study was to evaluate the effect of the Se application time on the uptake and later accumulation of Se in the grain and flour of bread making wheat, and on the grain yield and protein content, in order to provide the basis for an optimal implementation of a Se biofortification program under Mediterranean conditions. In addition, because in previous studies under Mediterranean conditions the rainfall during the growing season seemed to play a major role in the accumulation of Se in the grain (Poblaciones, Rodrigo, & Santamaria, 2013), the importance of the rainfall according to the application time is also hypothesized and analyzed.
Study Site

The field experiment was conducted in Badajoz, southern Spain (38°54' N, 6°44' W, 186 m above sea level), in a Xerofluvents soil under rainfed Mediterranean conditions in the 2010-2011 and 2011-2012 growing seasons. Weather-related parameters for this area in the study years, as well as the averages over a 30-year period, are shown in Figure 1. All climate data were taken from a weather station located at the study site. The precipitation during the growing period (from late November to July) was much higher (more than double) in 2010-2011 (492 mm) than in 2011-2012 (248 mm). In 2010-2011, at full flowering (between April and May), there was a severe drought period of about 40 days (in April, most of the rainfall occurred during the first 10 days of the month, and in May, rainfall occurred in the final days of the month). In 2011-2012, at tillering (between late January and March), a very dry period took place (Figure 1).

Figure 1. Monthly and annual rainfall and mean maximum and minimum temperatures in 2010-2011 and 2011-2012 and in an average year from a 30-year period at Badajoz (Spain).

Experimental Design and Crop Management

The study area was sown with bread making wheat, cultivar "Roxo". A conventional tillage treatment was used to prepare a proper seedbed before sowing. Sowing was made in late December in both study years (2010-2011 and 2011-2012), at a rate of 180 kg ha-1 of seeds, in rows of 20 cm. An N-P-K fertilizer (8-15-15) was applied before sowing at a rate of 200 kg ha-1 in all plots. The experiment was arranged as a randomized complete block design with four repetitions. In each block, Se was applied foliarly, on dry and sunny days, at a dose of 10 g ha-1 of sodium selenate diluted in 3 L of water, at one of the following four different moments: (1) at the stem elongation: 1st node detectable (GS-31); (2) at the stem elongation: 5th node detectable (GS-35); (3) at boots just swollen (GS-45); and (4) at the inflorescence emergence: first spikelet of inflorescence visible (GS-51). Additional
plots without any Se application (control treatment) were also included, to be compared with the Se-fertilized ones. The crop area for each Se fertilization treatment and repetition was 15 m2 (3 m × 5 m). The experimental area used each year had not been previously fertilized with Se; therefore, a potential residual effect of Se in the soil can be ruled out.

Soil Analysis

Each year, before sowing, four representative soil samples to 30 cm depth were taken from the experimental site. Soil samples were air dried and sieved to < 2 mm using a roller mill. Texture was determined gravimetrically; soil pH was measured using a calibrated pH meter (ratio, 10 g soil : 25 ml deionized H2O), and soil organic matter (SOM) was determined by oxidation with dichromate. Texture, pH and SOM were only determined at the beginning of the experiment. From these soil samples, total Se was determined as follows: a portion of each soil was finely ground (< 0.45 mm) using an agate ball mill (Retsch PM 400 mill); 1 g was digested with ultrapure concentrated nitric acid (2 ml) and 30% w/v hydrogen peroxide (2 ml) using a closed-vessel microwave digestion protocol (Mars X, CEM Corp, Matthews, NC), and diluted to 25 ml with ultrapurified water (Adams, Lombi, Zhao, & McGrath, 2002). Sample vessels were thoroughly washed with acid before use. For quality assurance, a blank and a standard (tomato leaf material, NIST 1573a) were included in each batch of samples. Concentrations of Se were determined using an inductively coupled plasma mass spectrometer (ICP-MS) (Agilent 7500ce, Agilent Technologies, Palo Alto, CA, USA) operating in the hydrogen gas mode. These analyses were carried out by the Elemental and Molecular Analysis Service of the University of Extremadura (Spain). All the results are reported on a dry weight basis.
Extractable Se in the soil samples was determined by using KH2PO4 (0.016 mM, pH 4.8) at a ratio of 10 g dry weight soil : 30 ml KH2PO4 w/v (Zhao & McGrath, 1994). The Se concentration in the extracts was determined by ICP-MS, as described above.

Grain and Flour Analysis

Harvesting took place at crop maturity in early June. Grain yield (expressed as kg ha-1 of grain) and grain protein content were determined. Total N content was determined using the Dumas combustion method (Leco FP-428 analyzer, LECO Corp., St. Joseph, MI, USA). Grain protein was determined by multiplying the total N by 5.7 as a conversion factor. Total Se contained in the milled grain and in the white flour was determined by ICP-MS, as described above for the soil samples. Grain was milled with a corundum mill (WolfgangMOC, Germany), and white flour was obtained using a Laboratory Mill CD 1 (Chopin, France). All the results are also reported on a dry weight basis.

Statistical Analysis

The effect of the cropping year (2010-2011 and 2011-2012) on the total and extractable Se in the soil was evaluated by 1-way analysis of variance (ANOVA). Total Se in flour expressed as μg kg-1, the Se in grain/Se in flour ratio, grain yield and grain protein were subjected to a 2-way ANOVA, including 'year' (2010-2011 and 2011-2012), 'Se application timing' (Control, GS-31, GS-35, GS-45, GS-51), and their interaction in the model. When significant differences were found in the ANOVA, means were compared using Fisher's protected least significant difference (LSD) test at P < 0.05. A linear regression was also carried out between the total Se in grain and the total Se in flour. In order to evaluate the effect of the weather conditions on the Se accumulation in the grain and flour, Pearson correlation tests, including the data of the two years and of the four application moments, were performed between total Se (in grain and in flour) and the following climate-related parameters: (1) the number of days without rain after the Se
application, (2) the number of days without rain before Se application, (3) the amount of rainfall from seeding to Se fertilization, (4) the amount of rainfall from Se fertilization to harvesting, (5) the amount of rainfall during the 10 days before the Se fertilization, and (6) and the amount of rainfall during the 10 days before and 10 days after the Se fertilization.All these analyses were performed with the Statistix v. 8.10 package. Soil Properties of the Field Sites The soil of the experimental area had a loamy texture, with a pH of 7.0 ± 0.13 (mean ± standard error), and a soil organic matter (SOM) of 9.9 ± 0.13 g kg -1 .According to ANOVA there was not a significant effect of the year on total Se in the topsoil (degree of freedom (df) = 1, P = 0.055), with values of 137.2 ± 6.5 μg kg -1 in 2010-2011 and 121.2 ± 6.9 μg kg -1 in 2011-2012.Extractable Se, was neither significantly affected by the growing season (df = 1, P = 0.315), and was 2.6 ± 0.3 μg kg -1 in 2010-2011 and 3.3 ± 0.4 μg kg -1 in 2011-2012. Effects of Se Application Timing on Grain Yield and Protein Content Grain yield and grain protein content were significantly affected by the year.Grain protein was also affected by the Se application timing (Table 1).As the interaction between yearxtiming was not significant, the main effects could be analyzed separately.In 2011-2012, it was obtained the highest grain yield and the lowest protein content values (Table 2).Regarding Se timing, when Se was applied at the earliest stages, i.e. 
at stem elongation (GS-31 and GS-35), the protein content was higher than when it was applied later, regardless of the growing year (Table 2). For each parameter, averages in the same row followed by different lowercase letters differ significantly between years (P < 0.05) according to the LSD test. Averages in the same column followed by different uppercase letters differ significantly with Se application timing (P < 0.05) according to the LSD test. Where no letters appear, differences were not significant according to the ANOVA.

Total Se Uptake and Accumulation in the Grain and in the Flour
The total Se content of the flour, expressed as μg kg⁻¹ DW, was significantly affected by year, application timing and their interaction (Table 1). Se fertilization, regardless of the application timing and the study year, greatly increased the Se concentration in the flour relative to the control (on average, 525 vs. 42 μg kg⁻¹ DW) (Table 2). The effect of the Se application timing on Se accumulation in the flour depended on the year. Whereas in the driest year (2011-2012) it was not significantly affected by the application timing, in the most humid year (2010-2011) the highest Se accumulation was obtained when the fertilizer was applied at GS-35 and GS-45 (Table 2). The ratio of total Se in the grain to total Se in the flour, which indicates the loss of Se during the milling process, was not significantly affected by any of the studied variables (Table 1). The relationship between total Se in grain and total Se in flour was linear and highly significant, with a Se in grain/Se in flour ratio of 1.15 (Figure 2).
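As a quick numerical check, the grain-to-flour concentration ratio reported here translates directly into a fractional milling loss. The sketch below (function name ours, for illustration) computes it as 1 − 1/ratio.

```python
def milling_loss_from_ratio(grain_to_flour_ratio: float) -> float:
    """Fraction of grain Se not recovered in the white flour.

    If grain Se conc. / flour Se conc. = r, the flour retains 1/r of the
    grain concentration, so the fractional loss is 1 - 1/r.
    """
    return 1.0 - 1.0 / grain_to_flour_ratio

# The ratio of 1.15 reported here gives a loss of roughly 13-15%,
# consistent with the 13% (bread wheat) and 16% (durum wheat)
# milling losses cited in the Discussion.
loss = milling_loss_from_ratio(1.15)
```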
Influence of Rainfall Parameters on Total Se in the Grain and Flour
Considering the pooled data of the two study years and the four application timings, both total Se in the flour and total Se in the grain correlated positively (r = 0.63/0.65) and significantly with the number of days without rainfall before the Se application, and negatively (r = -0.76/-0.78) with the amount of rain fallen during the ten days before the Se application (Table 3).

Table 3. Correlation coefficients (r) obtained in the Pearson correlation tests performed between the total Se in the flour and in the grain and each of several weather-related parameters.

Se Content in Soil
Based on the classification given by Hawkesford and Zhao (2007), the soils of the experimental area can be considered deficient to marginal in Se, whether judged by total or by extractable Se. Such soils might therefore not be able to provide crop products with enough Se to meet the Se intake recommendations, as discussed by Poblaciones et al. (2013) and Rodrigo et al. (2013) in two similar studies conducted on field pea and two-rowed barley in the same area. Under such conditions of deficient Se availability, the introduction of a Se biofortification program may be indicated. The lack of a significant effect of year on total and extractable Se in the topsoil suggests that the between-year differences in the Se content of grain and flour can be attributed mainly to climatic variability rather than to the soil.
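The Pearson correlations in Table 3 can be reproduced with a few lines of code. The sketch below uses pure Python and invented illustrative numbers (not the study's raw data) to compute r for one Se-rainfall pair.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values for flour Se (ug/kg) against rainfall (mm) in the
# 10 days before application; the study reports r around -0.76 for this pair.
rain_10d = [0, 2, 5, 12, 20, 35]
flour_se = [650, 600, 560, 480, 400, 320]
r = pearson_r(rain_10d, flour_se)  # strongly negative: more rain, less Se
```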
Effects of Treatments on Grain Yield and Grain Protein Content
As is well known, under rainfed conditions precipitation is a key factor for crop yield. Although precipitation was higher in 2010-2011 than in 2011-2012, the severe and prolonged drought that occurred during 2010-2011 at full flowering might be the cause of the lower grain production and higher protein content of that year, probably due to a dilution effect. Grain yield was not affected by the Se application timing, in clear agreement with Ducsay and Ložek (2006). In contrast, Chu et al. (2013) found that when Se was applied at the jointing-heading and heading-blooming stages, grain yield was much higher than when it was applied at other stages. However, that study was carried out under much more humid conditions and Se was applied as sodium selenite (instead of selenate); those two relevant differences could explain the disagreement. Regarding protein content, it was higher when Se was applied at the earliest stages, regardless of the growing year. Se is known to be toxic to plants at very high soil concentrations (Hermosillo-Cereceres et al., 2011). Although a serious toxicity problem is unlikely in our case, given the low initial Se values in the soil and the very small Se dose used in the fertilization, a slight toxicity could affect to some extent plant protein synthesis or the efficiency of N absorption by the roots. Since at the earliest application stages the protein content did not differ from the controls, such slight toxicity might only act at the latest stages. In any case, the differences in protein content due to Se timing, although significant, were considerably smaller than those caused by the climatic conditions, so their relevance in terms of management can be considered low.
Total Se in the Grain and in the Flour
Given the higher grain yield obtained in 2011-2012, a possible dilution effect could be responsible for the lower Se concentration obtained that year. However, when the data were expressed as mg ha⁻¹ (multiplying total Se in μg kg⁻¹ by the grain yield in kg ha⁻¹ and converting to mg ha⁻¹) to account for that possible dilution effect, the amount of Se in the flour was still much higher in 2010-2011 than in 2011-2012 (786 vs. 496 mg ha⁻¹ DW, respectively). Therefore, rather than a dilution effect, it can be hypothesized that the lower the water availability, the lower the uptake and, consequently, the smaller the Se accumulation in the grain. A significant positive correlation between grain Se concentration and the amount of precipitation during the growing season has already been reported (Johnson, 1991). This hypothesis also agrees with the findings of Rodrigo et al. (2013) in two-rowed barley and Poblaciones et al. (2013) in field pea, who obtained similar results under similar conditions when Se was likewise applied as sodium selenate. Hence, the irregular precipitation typical of Mediterranean conditions may produce differences in the uptake and accumulation of Se in the grain after fertilization. Consequently, special attention should be paid to this irregularity in order to make Se biofortification as effective as possible.
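The per-hectare conversion used above is simple unit bookkeeping; a minimal sketch (function name ours) makes the μg kg⁻¹ → mg ha⁻¹ step explicit.

```python
def se_offtake_mg_per_ha(se_conc_ug_per_kg: float, grain_yield_kg_per_ha: float) -> float:
    """Se removed in the harvested product per hectare.

    ug/kg * kg/ha = ug/ha; dividing by 1000 converts to mg/ha.
    """
    return se_conc_ug_per_kg * grain_yield_kg_per_ha / 1000.0

# Hypothetical example: 500 ug/kg Se at a yield of 1600 kg/ha -> 800 mg/ha
offtake = se_offtake_mg_per_ha(500, 1600)
```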
In the most humid year, Se accumulation in the grain was higher when the fertilizer was applied at GS-35 and GS-45. This result is in line with the general recommendation (application at the GS-39 stage, during stem elongation) given for humid regions in temperate climates, such as those of central and northern Europe. In a similar study on winter wheat (Ducsay & Ložek, 2006), when Se fertilizer was applied at the GS-29 stage at a dose of 10 g ha⁻¹, the Se concentration in the grain was much lower than that obtained in the present study. Although several other parameters could have contributed to that lower grain Se concentration, a very early application does not seem to be the most convenient. Other authors have reported better Se accumulation when Se was applied at the flowering stage (Curtin et al., 2006) or even at grain filling (Chu et al., 2013); however, those studies were conducted in more humid regions or on irrigated crops. Under our Mediterranean conditions such late applications do not seem advisable, as the accumulation of Se in the grain was significantly reduced when applications were performed later than the boot stage, in agreement with the findings of Lyons et al. (2004) in a Se biofortification study on wheat conducted in South Australia.

The ratio of total Se in the grain to total Se in the flour indicates how much Se is lost during the milling process. Milling is considered a critical process affecting the Se concentration of wheat grain (Cubadda, Aureli, Raggi, & Carcea, 2009). Because the bran (pericarp, seed coat and aleurone) and the germ are removed during milling, the ratio of 1.15 may indicate that about 15% of the selenium in the grain had accumulated in the bran and germ. Similar milling losses were found by Hart et al. (2011) in bread-making wheat (Se loss of about 13%) and by Cubadda et al. (2009) in durum wheat (Se loss of about 16%). The loss of Se during milling was constant regardless of the Se application timing, which could indicate that the distribution of Se within the grain, i.e. between the bran, germ and endosperm, was not affected by the moment at which Se was supplied to the plant.

Influence of Rainfall Parameters on Total Se in the Grain and Flour
According to the results, the rainfall in the days before the Se application was clearly the key factor affecting the absorption and subsequent accumulation of Se in the plant, at least when Se was provided as a foliar application. In fact, the highest accumulation of Se in the flour during 2010-2011, when Se was applied at the GS-35 and GS-45 stages, corresponded precisely to the applications carried out with the lowest amount of rainfall during the 10 preceding days. This could be explained by the highly negative osmotic potential that may develop in the plant under such dry conditions. In this situation, the plant may absorb the applied Se fertilizer eagerly and very efficiently through the stomata, greatly reducing the loss of Se. Therefore, the importance of the moment of Se application for Se accumulation in the grain and flour can be attributed mainly to the rainfall before the application rather than to the exact growth stage of the plant: less rain in this period led to higher Se accumulation in the grain.
Conclusion
The application of Se between the GS-35 and GS-45 stages of the Zadoks scale produced the highest accumulation of Se in the grain and in the flour, especially in humid years. However, the rainfall of the growing season seemed to play a very important role in the accumulation of Se in the grain, even more important than the exact growth stage of the plant at the time of application. The milling process produced a Se loss of about 15%, which was not affected by the Se application timing. Therefore, according to our results, the general recommendations given for Se biofortification management under humid or temperate climates would be largely adequate under Mediterranean conditions. However, special attention should be paid to precipitation, as it clearly affected the absorption and subsequent accumulation of Se in the grain, especially the rainfall during the ten days before the application: less rain in this period led to higher Se accumulation in the grain. Bread-making wheat would thus be a suitable candidate for inclusion in Se biofortification programs under Mediterranean conditions, and the foodstuffs derived from its grain could efficiently increase the Se content of the human food chain.

Table 1. ANOVA table showing the effect of year, Se application timing and their interaction on total Se in flour (μg kg⁻¹), the total Se in grain/total Se in flour ratio, grain yield (kg ha⁻¹) and grain protein content (%). The first column gives the degrees of freedom (DF); the remaining columns give F values with their significance levels (*** P < 0.001).

Table 2. Mean ± standard error of total Se in flour, grain protein content and grain yield as affected by Se timing (control: without Se fertilization; Se fertilization at growth stages GS-31, GS-35, GS-45 and GS-51 according to the Zadoks scale) and year
The Notch Ligands, Delta1 and Jagged2, Are Substrates for Presenilin-dependent “γ-Secretase” Cleavage

The evolutionarily conserved Notch signaling pathway is involved in cell fate specification and is mediated by molecular interactions between the Notch receptors and the Notch ligands, Delta, Serrate, and Jagged. In this report, we demonstrate that, like Notch, Delta1 and Jagged2 are subject to presenilin (PS)-dependent, intramembranous “γ-secretase” processing, resulting in the production of soluble intracellular derivatives. Moreover, and paralleling the observation that expression of familial Alzheimer's disease-linked mutant PS1 compromises production of Notch S3/NICD, we show that the PS-dependent production of Delta1 cytoplasmic derivatives is also reduced in cells expressing mutant PS1. These studies led us to conclude that a similar molecular apparatus is responsible for intramembranous processing of Notch and its ligands. To assess the potential role of the cytoplasmic derivative in nuclear transcriptional events, we expressed a Delta1-Gal4VP16 chimera and demonstrated marked transcriptional stimulation of a luciferase-based reporter. Our findings support the proposal that Delta1 and Jagged2 play dual roles as activators of Notch receptor signaling and as receptors that mediate nuclear signaling events via γ-secretase-generated cytoplasmic domains.

Mutations in genes encoding presenilins (PS1 and PS2) cosegregate with the vast majority of pedigrees with early-onset familial Alzheimer's disease (FAD) (1). Multiple lines of evidence indicate that PS expression is essential for intramembranous “γ-secretase” processing. The Notch signaling pathway is an evolutionarily conserved signaling pathway for local cell-cell communication between neighboring cells involved in cell fate determination (11).
The Notch receptors undergo proteolytic processing in the ectodomain by a furin-like convertase in the trans-Golgi network (12), resulting in a mature heterodimeric receptor that accumulates on the cell surface (13). Ligand binding triggers sequential proteolytic processing within the extracellular juxtamembrane region by a member of the ADAM (a disintegrin and metalloprotease domain) family, termed TACE (tumor necrosis factor (TNF)α-converting enzyme) (14,15), and subsequent intramembranous cleavage of this membrane-tethered derivative, termed S2/NEXT, by a PS-dependent γ-secretase activity (4). The resulting soluble cytoplasmic domain, termed S3/NICD (Notch intracellular domain), translocates to the nucleus and interacts with the DNA-binding proteins CSL, resulting in transcriptional activation of target genes (16,17). Several Notch ligands have been identified in vertebrates and invertebrates, including Delta, Serrate, and Jagged, transmembrane proteins that share several structural features, including a DSL (Delta, Serrate, Lag-2) domain required for Notch binding and multiple epidermal growth factor-like repeats in their respective extracellular domains (18,19). Like Notch, Delta is a substrate for proteolysis by a metalloprotease of the ADAM family, termed Kuzbanian (20), resulting in shedding of the ectodomain segment. The precise role of Kuzbanian-dependent proteolytic processing of Delta is not fully understood, but recent studies suggest that this event down-regulates Delta-mediated Notch signaling (21). In this regard, ectodomain shedding of Jagged has not been described to date. In this report, we demonstrate that, like Notch, Delta1 and Jagged2 are subject to presenilin (PS)-dependent γ-secretase processing, resulting in the production of soluble intracellular derivatives.
We show that a plasma membrane-resident ~40-kDa carboxyl-terminal fragment (CTF), presumably generated by a Kuzbanian-like activity, serves as a substrate for γ-secretase, resulting in the liberation of a cytosolic ~38-kDa CTF. We demonstrate that expression of a Delta1-Gal4VP16 chimera is capable of activating transcription of a luciferase reporter and that nuclear transactivation is abrogated by a highly potent and selective γ-secretase inhibitor. In parallel, we demonstrate that a ~27-kDa Jagged2 CTF is also a substrate for γ-secretase. Our findings suggest that Delta1 and Jagged2 may play dual roles as activators of Notch receptor signaling and as receptors that mediate nuclear signaling events via γ-secretase-generated cytoplasmic domains. Finally, we report that expression of FAD-linked PS1 variants leads to compromised intramembranous cleavage of Delta1. These observations mimic earlier studies showing reduced cleavage at the Notch S3 site and the APP “ε” site within the respective transmembrane domains in cells expressing FAD-linked mutant PS1. Thus, we argue that a similar molecular apparatus is responsible for intramembranous cleavage of Notch and its ligand, Delta1.

EXPERIMENTAL PROCEDURES
Cell Culture and Inhibitor Treatment—Mouse neuroblastoma N2a cells stably expressing mouse Delta1 and NIH 3T3 cells stably expressing human Jagged2 with a COOH-terminal Myc-epitope tag were maintained in 200 μg/ml G418 (Invitrogen) and 2.5 μg/ml puromycin (Clontech), respectively. γ-Secretase inhibitor treatments were for 16 h with 2 μM L-685,458 (22).

Antibodies and Western Blot Analysis—Cells were lysed in immunoprecipitation buffer containing detergents and protease inhibitors as described (24). Solubilized proteins were fractionated by electrophoresis on SDS-polyacrylamide gels and electrophoretically transferred to polyvinylidene difluoride membranes (Bio-Rad).
Membranes were blocked and then probed with primary antibodies and horseradish peroxidase-coupled secondary antibodies (Pierce). Myc-tagged Delta1 and Jagged2 derivatives were detected using the monoclonal Myc-specific antibody 9E10. The polyclonal antibody PS1 NT (25) was used to detect full-length PS1 and the PS1 NH₂-terminal fragment. β-Tubulin was detected with an anti-β-tubulin antibody (Sigma). Bound antibodies were visualized with an enhanced chemiluminescence (ECL) detection system (PerkinElmer Life Sciences).

Cell Surface Biotinylation—Cells were grown to near confluence in a 10-cm dish and subjected to cell surface biotinylation with 0.5 mg/ml sulfosuccinimidobiotin (Sulfo-NHS-SS-biotin, Pierce) essentially as described previously (24). Cells were then lysed in immunoprecipitation buffer, and biotinylated proteins were captured with streptavidin-agarose beads (Pierce).

Luciferase Reporter Assay—To generate a construct encoding the Delta1-Gal4VP16 fusion protein, the primer pair 5′-CCATCGATTTAAGAAGCTACTGTCTTCTATC-3′ and 5′-CCATCGATCACCGTCCTCGTCAATTCC-3′ was incubated with pMst-GV-APP (26) in a PCR. The PCR product was inserted between the Delta1 COOH terminus and the Myc sequences. 0.4 μg of the resulting Delta1-Gal4VP16-Myc construct was cotransfected with the Gal4 reporter plasmid (pG5E1B-luci) (26) and 50 ng of a control plasmid encoding Renilla luciferase into HEK293 cells. Cells were harvested 48 h after transfection, and luciferase activities were determined using the dual-luciferase reporter assay system (Promega) following the manufacturer's instructions. Values shown are the averages of triplicate experiments for each condition.

RESULTS
The families of Notch receptors and Notch ligands are type I integral membrane proteins. It is now well established that Notch and the Notch ligand Delta1 are substrates for processing by metalloproteases of the TACE/ADAM family (14,15,20), resulting in shedding of their respective ectodomains.
In the case of Notch, TACE cleavage generates a membrane-tethered derivative, termed S2/NEXT, that is the substrate for intramembranous proteolysis by a presenilin-dependent γ-secretase activity (4). This cleavage event, termed S3 cleavage, occurs between amino acids 1743 and 1744 (16); the P1 valine residue is indispensable for S3 cleavage and subsequent nuclear signaling activity (16). Intramembranous cleavage at the S3 site results in the generation of a soluble, cytoplasmic derivative of Notch, termed S3/NICD, that is a transcriptional coactivator. Intrigued by the finding that Delta1 undergoes ectodomain shedding, and by the presence of valine residues at analogous positions within the transmembrane domains of Delta1 and Jagged2 (Fig. 1A), we asked whether these Notch ligands might be substrates for γ-secretase cleavage as well. We first examined stable N2a cells that constitutively express mouse Delta1 harboring a carboxyl-terminal Myc-epitope tag. Western blot analysis revealed the presence of full-length ~117-kDa Delta1-Myc and a prominent ~40-kDa Delta1 carboxyl-terminal fragment (D-CTF1) that presumably represents the membrane-tethered fragment generated following metalloprotease cleavage within the ectodomain (20, 21) (Fig. 1B, lane 1). In addition, we observed low levels of a ~38-kDa CTF (D-CTF2) at steady state (Fig. 1B, lane 1). Importantly, the ~38-kDa D-CTF2 derivative fails to accumulate in cells treated with the highly potent and selective γ-secretase inhibitor L-685,458 (Fig. 1B, lane 2), a finding strongly suggesting that this fragment is generated by γ-secretase. To further establish that production of D-CTF2 is PS-dependent, we transiently expressed Delta1-Myc in N2a cells that constitutively express either wild-type human PS1 or a dominant negative human PS1 variant harboring the D385A mutation (24); intramembranous processing of Notch1 is abrogated in cells expressing PS1 D385A (24).
As we had observed in cells treated with the γ-secretase inhibitor, D-CTF2 failed to accumulate in cells expressing PS1 D385A (Fig. 2B, lane 4). Collectively, these data strongly suggest that Delta1 is a substrate of PS-dependent γ-secretase cleavage. We then examined the processing of the Notch ligand Jagged2 in an NIH 3T3 cell line that stably expresses human Jagged2 harboring a carboxyl-terminal Myc-epitope tag. Western blot analysis revealed the presence of ~170-kDa full-length Jagged2 and an ~25-kDa Jagged2 carboxyl-terminal fragment (J-CTF2) (Fig. 1C, lane 1). However, the ~25-kDa J-CTF2 was eliminated in cells treated with the γ-secretase inhibitor L-685,458, and interestingly, a new ~27-kDa J-CTF1 accumulated under these conditions (Fig. 1C, lane 2). These findings suggest that J-CTF1 is constitutively processed by a γ-secretase-like activity to generate J-CTF2. To establish that production of J-CTF2 is PS-dependent, we transiently expressed Jagged2-Myc in N2a cells that express the PS1 D385A mutant (24). As we had observed in cells treated with the γ-secretase inhibitor, production of J-CTF2 was eliminated and J-CTF1 accumulated (Fig. 1C, lane 4). These results indicate that Jagged2 is also a substrate of PS-dependent γ-secretase cleavage. To identify the subcellular site(s) at which the D-CTF2 and D-CTF1 derivatives of Delta1-Myc accumulate, we treated N2a cells constitutively expressing Delta1-Myc with the membrane-impermeant biotinylation reagent sulfosuccinimidobiotin at 4 °C. Biotinylated cell surface polypeptides were recovered from detergent-solubilized lysates using streptavidin-conjugated agarose, and captured proteins were subjected to Western blot analysis with the Myc-specific 9E10 antibody. In cells expressing Delta1-Myc, we observed biotinylated full-length Delta1-Myc and the ~40-kDa D-CTF1 (Fig. 2A, lane 5).
However, the ~38-kDa D-CTF2 was not recovered by immobilized streptavidin, despite the presence of the fragment in detergent lysates (Fig. 2A, lane 2). Hence, D-CTF2, like the Notch S3/NICD derivative, is not present at the cell surface but is presumably cytosolic. Moreover, D-CTF2 failed to accumulate in cells treated with the γ-secretase inhibitor, as expected, and streptavidin recovered only full-length Delta1-Myc and the ~40-kDa D-CTF1 from detergent lysates of these inhibitor-treated cells (Fig. 2A, lane 6). These observations strongly suggest that the soluble, cytoplasmic domain of Delta1 is generated following sequential cleavage of full-length Delta1 by the concerted action of Kuzbanian-like metalloprotease(s) and a PS-dependent γ-secretase. It is now clear that the soluble intracellular domains of Notch and APP that are generated by γ-secretase are translocated to the nucleus and serve as transcriptional coactivators (16,17,26). To test the possibility that D-CTF2 could be transported to the nucleus and exhibit nuclear signaling activity, we generated a cDNA encoding a Delta1-Gal4VP16 fusion protein and transfected this construct into human embryonic kidney 293 (HEK293) cells together with the reporter plasmid pG5E1B-luciferase (26). Compared with cells expressing the pG5E1B vector alone, expression of the Delta1-Gal4VP16 fusion protein stimulated transcription by ~70-fold (Fig. 2B). Notably, transactivation of the reporter plasmid by the Delta1-Gal4VP16 fusion protein was markedly inhibited upon treatment of the cells with the γ-secretase inhibitor (Fig. 2B). These results suggest that the intracellular domain of Delta1 generated by PS-dependent γ-secretase cleavage can be transported into the nucleus, findings that raise the possibility that the cytosolic derivative of Delta1 may play a role in nuclear signaling events.
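The reporter readout described here is a ratio of ratios: firefly signal normalized to the Renilla transfection control, then expressed relative to the vector-only condition. A minimal sketch of that normalization, with all raw counts hypothetical:

```python
def fold_activation(firefly: float, renilla: float,
                    firefly_ctrl: float, renilla_ctrl: float) -> float:
    """Renilla-normalized luciferase activity, relative to the vector-only control."""
    return (firefly / renilla) / (firefly_ctrl / renilla_ctrl)

# Hypothetical raw counts chosen to illustrate a ~70-fold activation,
# as observed for the Delta1-Gal4VP16 fusion in this study.
fold = fold_activation(firefly=14000, renilla=200,
                       firefly_ctrl=100, renilla_ctrl=100)  # 70.0
```

Normalizing to Renilla before taking the fold change is what cancels well-to-well differences in transfection efficiency.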
Finally, intrigued by the finding that cells expressing FAD-linked PS1 mutants enhance production of pathogenic β-amyloid 42 peptides (1) but exhibit impaired processing within the transmembrane domains of Notch and APP that liberates NICD and AICD, respectively (27-29), we analyzed the processing of Delta1-Myc in pools of N2a cells that constitutively express wild-type human PS1 or the PS1 ΔE9, M146L, E280A, or C410Y FAD variants. Compared with N2a cells expressing wild-type human PS1 (Fig. 3A, lane 1), in which Delta1-Myc was processed to D-CTF1 and low levels of D-CTF2 as described above (Fig. 1B), production of the D-CTF2 fragment was reduced in most cell lines expressing FAD-linked PS1 variants, albeit with only a modest effect in the M146L cells (Fig. 3A, lanes 2-5; quantified in Fig. 3B), despite comparable levels of human PS1 expression (Fig. 3A, lower panel).

Fig. 2 legend (fragment): ...(lanes 1 and 4) or borate buffer containing sulfosuccinimidobiotin (lanes 2, 3, 5, and 6) at 4 °C. Biotinylated surface proteins were captured with streptavidin-agarose beads and detected with the Myc-specific 9E10 antibody. One-twentieth of the lysates used for precipitation was loaded for comparison (lanes 1-4). B, HEK293 cells were cotransfected with the Gal4VP16 reporter plasmid, the Delta1-Gal4VP16 fusion construct, and an internal control plasmid encoding Renilla luciferase. Values normalized to Renilla luciferase activity to standardize transfection efficiency are shown as the averages (±S.E.) of triplicate samples.
Taken together with the curious observation that expression of FAD-linked PS1 variants leads to reduced processing at the Notch S3 and APP ε sites, our finding that production of the Delta1-derived D-CTF2 is also reduced by expression of mutant PS1 argues that a similar, if not identical, molecular apparatus is involved in substrate recognition and intramembranous processing of these functionally divergent membrane proteins.

DISCUSSION
A wealth of evidence has emerged to support a role for PS in intramembranous γ-secretase processing of a host of type I membrane proteins. Intrigued by the similarity in the amino acid sequences of the transmembrane domains of Notch and its ligands, Delta and Jagged, we hypothesized that these ligands might also serve as substrates for γ-secretase. In the present report, we confirm our prediction and offer several insights relevant to the molecular apparatus responsible for intramembranous processing of Delta1 and Jagged2 and the potential functional significance of this processing event. First, we demonstrate that the production of the cytosolic derivatives of Delta1 and Jagged2, termed DICD and JICD, respectively, is inhibited either by a highly potent and selective transition-state isostere inhibitor of γ-secretase activity or by expression of the dominant negative D385A PS1 mutant. Thus, the Notch ligands Delta1 and Jagged2 are novel substrates of PS-dependent γ-secretase processing. Second, and in view of earlier demonstrations that FAD-linked PS1 mutations impair the cleavages at the Notch S3 and APP ε sites that lead to production of S3/NICD and AICD (27-29), respectively, we assessed the effects of FAD-linked PS1 variants on γ-secretase processing of Delta1. We show that the generation of DICD is impaired in most of the cell lines stably expressing four independent FAD-linked PS1 mutants.
Most interestingly, the relative reductions in DICD parallel the reported effects on S3/NICD production (29); expression of the C410Y variant has the most pronounced effect, while the M146L variant has only a modest effect on γ-secretase processing. Thus, we argue that the molecular apparatuses involved in production of NICD, AICD, and DICD are similar, if not one and the same. Third, and in view of earlier conclusions that the intracellular domains of Notch (S3/NICD) and APP (AICD) are transcriptional coactivators (16,17,24), we hypothesized that γ-secretase-generated DICD could be translocated to the nucleus and activate transcription of a reporter gene. Our present results support this prediction. Despite the strength of these observations, the factors responsible for translocating DICD into the nucleus are not known. For the APP derivative, AICD, a cytosolic adaptor protein, Fe65, promotes nuclear translocation (24). For Notch1, three putative nuclear localization signals (NLSs) are present within the ICD, and these may serve as recognition motifs for members of the importin/karyopherin α and β receptor families involved in nuclear import. Similarly, the Delta1 intracellular domain contains two putative NLSs, PDRKRPE at amino acids 686-692 and RKRP at amino acids 688-691, while the Jagged2 intracellular domain contains the putative NLSs RKRR at amino acids 1107-1110 and KRRK at 1108-1111. Hence, it is conceivable that DICD and JICD may be imported via the classical importin pathway, but further mutagenesis studies of the putative NLSs will be required to validate this hypothesis. In any event, our results offer the suggestion that PS-dependent γ-secretase processing of Delta1 or Jagged2 and production of DICD and JICD may play roles in activating nuclear transcriptional events.
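The putative NLSs listed above are short basic stretches, so a first-pass search for candidates of this kind amounts to a simple pattern scan. The sketch below is a crude heuristic only (runs of three or more consecutive K/R residues, applied to a toy fragment containing the PDRKRPE stretch discussed for Delta1); it is in no way a substitute for the mutagenesis experiments proposed here.

```python
import re

def find_basic_stretches(seq: str):
    """Return (position, match) pairs for runs of >= 3 consecutive K/R
    residues -- a crude first-pass screen for monopartite NLS candidates."""
    return [(m.start(), m.group()) for m in re.finditer(r"[KR]{3,}", seq)]

# Toy fragment containing the PDRKRPE stretch noted for the Delta1 ICD
hits = find_basic_stretches("AAPDRKRPEAA")  # [(4, 'RKR')]
```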
While the significance of these findings in relation to cell-intrinsic versus cell-extrinsic aspects of Notch signaling remains to be determined, the proposal that the Notch ligands, Delta1 and Jagged2, may perform roles both as ligands and receptors that directly participate in transcriptional activation warrants further investigation.
MicroRNA-486-5p Suppresses Lung Cancer via Downregulating mTOR Signaling In Vitro and In Vivo Lung cancer is one of the central causes of tumor-related deaths globally, of which non-small cell lung cancer (NSCLC) accounts for about 85%. As key regulators of various biological processes, microRNAs (miRNAs) have been verified as crucial factors in NSCLC. To elucidate the role of miR-486-5p in the mTOR pathway, we investigated its role in NSCLC and related signaling. Our results confirmed that miR-486-5p was downregulated in most of the human NSCLC tissue samples and cell lines examined. Further study confirmed that it inhibited NSCLC through repression of the mTOR pathway via targeting both ribosomal protein S6 kinase A1 (RPS6KA1, RSK) and ribosomal protein S6 kinase B1 (RPS6KB1, p70S6K), which are critical components of mTOR signaling. Additionally, miR-486-5p impeded tumor growth in vivo and inhibited tumor metastasis through repression of the epithelial-mesenchymal transition (EMT). Taken together, our study verified the role that miR-486-5p exerts in NSCLC, and its expression pattern in the different stages and morphologies of NSCLC makes it a promising biomarker in the early diagnosis of the disease. INTRODUCTION Lung cancer is known for its high occurrence and mortality rate both domestically and abroad (1), of which non-small cell lung cancer (NSCLC) accounts for about 85%, and the prognosis of the disease remains poor (2,3). NSCLC is further categorized into lung adenocarcinoma (LUAD, 40-50% of all cases), lung squamous cell carcinoma (LUSC, 20-30% of all cases), and large cell lung cancer. microRNAs (miRNAs) are single-stranded noncoding RNAs of ~22 nucleotides by which about 60% of genes are regulated post-transcriptionally through either translational repression or mRNA degradation (4).
MiRNAs are reported to play critical roles in diverse biological processes including cell proliferation (5,6), cell cycle (7)(8)(9), cell migration (10)(11)(12), cell apoptosis (13,14), immune response (15), and tumorigenesis (16,17), suggesting their crucial roles in the development of various malignancies. To date, numerous miRNAs have been studied in NSCLC, revealing their crucial roles in the tumorigenesis and development of NSCLC (18)(19)(20)(21) and shedding light on the diagnosis and treatment of lung cancer. Our previous work demonstrated the downregulation of miR-486-5p in NSCLC cell lines and patients' tissue samples, and showed that it suppressed NSCLC cell growth and promoted apoptosis by direct repression of cyclin-dependent kinase 4 (CDK4) (22). However, it was also reported to play an oncogenic role by targeting phosphatase and tensin homolog deleted on chromosome ten (PTEN) and subsequently activating the phosphatidylinositol 3-kinase (PI3K)/AKT signaling (23). Here, we identified RSK and p70S6K, both of which are involved in regulating mTOR signaling, as targets of miR-486-5p. In short, our study showed that miR-486-5p significantly inhibited the mTOR pathway through targeting RSK and p70S6K, resulting in suppressive effects in NSCLC. Specimen Collection and Ethical Statement A total of 68 NSCLC tissue specimens and their paired noncancerous tissues were collected from Shanghai Chest Hospital (Shanghai, China) with informed consent obtained. The experiments in this study were approved by the Ethics Committee of Shanghai Chest Hospital. All patients' information used in this study is listed in Table S1. Cell Culture Five human NSCLC cell lines (95-D, A549, H1975, HCC827, and PC-9), cells from the normal human bronchial epithelium (HBE), and HEK293T were purchased from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). H1299 cells were bought from the American Type Culture Collection (ATCC, Manassas, VA, USA).
All cell lines were authenticated with the short tandem repeat (STR) method. Construction of Plasmid, siRNA, and Cell Transfection Lentivirus construction and infection were performed as previously described (24). In brief, after digestion with EcoR I and Xba I (Takara, Dalian, China), the pre-miR-486-5p sequence was cloned into the pLenti vector (Invitrogen, Carlsbad, CA, USA) (named pLenti-miR-486). Viral particles containing pLenti or pLenti-miR-486 were collected to infect A549 and H1299 cells, which were further sorted with flow cytometry. qRT-PCR and Western Blot Total RNA was extracted and reverse transcribed according to the manufacturer's instructions. The RNA level was quantified by qRT-PCR using an SYBR Green PCR master mix (TaKaRa, Dalian, China). The endogenous controls for mRNA and miRNA were 18S RNA and U6 snRNA, respectively. The relative quantification (2^−ΔΔCT) method was used for data analysis. Total protein was extracted from the cells using RIPA lysis buffer (CWBIO, Beijing, China) and quantified with a Bradford Kit (Bio-Rad, Hercules, California, USA). Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) was applied to separate proteins, which were then transferred to a polyvinylidene difluoride (PVDF) membrane (Millipore Corporation, Billerica, MA, USA). After blocking with non-fat powdered milk at room temperature for 1 h, the membrane was incubated with rabbit anti-RSK, anti-p70S6K, anti-p-p70S6K, anti-mTOR, anti-p-mTOR, anti-E-Cad, and anti-N-Cad antibodies overnight at 4°C. After incubation with a goat-anti-rabbit secondary antibody conjugated to horseradish peroxidase (HRP) (1:10000, Transgene Biotech, Beijing, China), protein bands were visualized with a chemiluminescent HRP substrate (Millipore, Billerica, MA, USA) and imaged with an E-Gel Imager (Biotanon, Shanghai, China). All experiments were repeated three times and representative images of the protein bands are shown in the figures.
The number indicates the grayscale value of the protein bands relative to GAPDH. All the antibodies used in this study are listed in Table S4. CCK-8 and Colony Formation Assay CCK-8 (Dojindo, Japan) and colony formation assays were performed according to a previously described protocol (24). For the CCK-8 assay, 2500 cells per well were seeded in 96-well plates and incubated with 5% CCK-8 for 2.5 h every 24 h, and the absorbance of the supernatant at 450 nm was then measured with a plate reader. For the colony formation assay, 500 cells were seeded in 6-well plates for 12-14 d until macroscopic colonies formed, followed by crystal violet staining and photographing. Flow Cytometry Assay Flow cytometry (BD Biosciences, San Jose, CA, USA) was used for cell sorting, cell cycle analysis, and apoptosis detection. For apoptotic rate assessment, pLenti/pLenti-miR-486 stably transfected cells were treated with actinomycin D (AMD, 5 μg/mL) for 4 h and then stained with Annexin-V APC/7AAD (Keygentec, Jiangsu, China) prior to analysis by flow cytometry. Transiently transfected cells were stained with Annexin-V FITC/PI (BD Biosciences, San Jose, CA, USA) following treatment with actinomycin D and analyzed by flow cytometry. Wound Healing Assay Cells were scratched with 20 μL tips and washed with PBS twice, and the position was recorded using a 10× lens on a light microscope and a digital camera. After 24 h (H1299) or 48 h (A549), cell positions were recorded again. Mouse Xenograft Model Six-week-old female SCID mice were purchased from the Shanghai Laboratory Animal Center (SLAC, Shanghai, China) and maintained under specific-pathogen-free (SPF) conditions, with health status verified by SLAC. After random assignment to one of 2 groups, with 5 mice in each group, each mouse was injected subcutaneously in the right flank with 5 × 10^6 H1299 cells and intravenously with 2.5 × 10^6 cells transfected with pLenti or pLenti-miR-486.
Tumor diameters were measured with calipers weekly, and the formula volume = length × width^2/2 was applied to determine the tumor size. Eight weeks after the injection, the mice were sacrificed with CO2 without using any anesthesia, and the xenografts were excised and weighed. The results were analyzed between the groups. All experimental protocols were approved by the Institutional Animal Care and Use Committee of Shanghai University (Shanghai, China). IHC Analysis The xenografts were embedded in paraffin, then deparaffinized and rehydrated, followed by antigen retrieval. The sections were incubated with primary antibodies against Ki67, RSK, p70S6K, p-p70S6K, mTOR, p-mTOR, E-cad, and N-cad followed by incubation with secondary antibodies. The sections were then stained with 3,3′-diaminobenzidine reaction solution and photographed using a digitalized microscope camera. All the antibodies used in this study are listed in Table S4. Bioinformatics miRWalk, together with miRTarBase and TargetScan, was used for target prediction. UALCAN and Kaplan-Meier Plotter were used to analyze the data from TCGA. All the databases are listed in Table S5. Statistical Analysis Data were analyzed with GraphPad Prism software 8.0 (GraphPad Software, San Diego, CA, USA) and IBM SPSS v25.0 software (IBM Corp., Armonk, NY, USA). Results were presented as the mean ± SD. Statistical analyses were performed using the Student's t-test and one-way ANOVA. Pearson's chi-square test was used to analyze the associations between miR-486-5p and RSK and p70S6K expression in tissue samples. All the experiments were repeated at least three times independently. Statistical significance was established at p < 0.05.
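As a hedged illustration (not the authors' code), the two quantitative formulas used in the Methods — the 2^−ΔΔCT relative quantification and the caliper-based tumor volume estimate volume = length × width^2/2 — can be sketched in Python; the Ct values in the usage example are hypothetical:

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Relative quantification by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(reference control gene)
    ddCt = dCt(sample) - dCt(control sample)
    fold change = 2^(-ddCt)
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_sample - d_ct_control)


def tumor_volume(length, width):
    """Caliper-based estimate used in the study: V = length * width^2 / 2."""
    return length * width ** 2 / 2


# Hypothetical Ct values: target miRNA vs. U6 in tumor and paired normal tissue.
# ddCt = (28 - 20) - (25 - 20) = 3, so fold = 2^-3 = 0.125 (downregulated).
fold = relative_expression(28.0, 20.0, 25.0, 20.0)
volume = tumor_volume(10.0, 6.0)  # 10 * 6^2 / 2 = 180.0
```

A fold change below 1 corresponds to downregulation relative to the control sample, matching the direction reported for miR-486-5p in tumor tissue.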
miR-486-5p Is Downregulated in NSCLC To study the relationship between miR-486-5p expression and NSCLC, quantitative real-time polymerase chain reaction (qRT-PCR) was performed; the results showed marked downregulation of miR-486-5p in the 68 human NSCLC samples compared with their paired adjacent non-tumor tissues (Figure 1A). The expression of miR-486-5p was also compared based on the pathological type, namely LUAD and LUSC samples, the tumor size, and the TNM neoplasm staging. It turned out that miR-486-5p was mainly downregulated in LUAD (Figure 1B), in samples with diameters <3 cm (Figure 1C), and in stage I and II samples (Figure 1D), revealing that miR-486-5p might play different roles in the tumorigenesis of LUAD and LUSC and is positively regulated in advanced NSCLC. miR-486-5p was also downregulated in most of our NSCLC cell lines compared to the non-transformed human lung epithelial cell line HBE (Figure 1E). The A549 and H1299 cell lines were chosen for further study because miR-486-5p expression in these two cell lines was relatively the lowest. Moreover, analysis of The Cancer Genome Atlas (TCGA) database also confirmed the downregulation of miR-486-5p in both LUAD and LUSC tissues (Figure S1) (26), and the TCGA data also show that a higher miR-486-5p level is positively correlated with a higher probability of survival in both LUAD (Figure 1F) and LUSC (Figure 1G) patients (27). To sum up, our results indicated that miR-486-5p might function as a cancer suppressor in NSCLC. miR-486-5p Inhibits NSCLC Proliferation miR-486-5p (pLenti-miR-486) or an empty vector (pLenti) was stably expressed via lentivirus to assess the effects of miR-486-5p on NSCLC cell phenotypes. The successful transfection of the vectors is indicated by the fluorescence intensity of green fluorescent protein (GFP) (Figure S2).
miR-486-5p expression was upregulated remarkably in the pLenti-miR-486-expressing A549 and H1299 cells relative to the pLenti-expressing ones, with nearly 8- and 18-fold increases, respectively (Figure 2A). The CCK-8 assay revealed that miR-486-5p overexpression inhibited H1299 and A549 cell proliferation (Figures 2B, C). In search of the underlying mechanism of the suppressive role that miR-486-5p played in NSCLC, we analyzed its function in the cell cycle, colony formation, cell migration, and cell apoptosis. It turned out that the cell cycle progression of A549 and H1299 was retarded by miR-486-5p (Figure 2D), as were colony formation (Figure 2E) and the migration rate (Figure 2F), with the latter determined by wound healing assay. On the contrary, cell apoptosis was considerably facilitated by miR-486-5p (Figure 2G). In conclusion, these results verified that miR-486-5p acted as a suppressor in NSCLC. RSK and p70S6K Are Targets of miR-486-5p To elucidate the mechanism by which miR-486-5p hinders NSCLC progression, we predicted its potential targets using miRWalk 3.0 (28). Four candidates stood out after overlapping the targets predicted by miRWalk with the mTOR-, apoptosis- and cell migration-related proteins (Figure 3A). Given the functions of miR-486-5p in NSCLC cell lines, we ultimately chose the target genes RSK and p70S6K. The wild-type and mutated binding sites for miR-486-5p in RSK and p70S6K are shown in Figure 3B, and their binding relationship was determined by dual-luciferase reporter assay.
A significant decrease in the relative luciferase activity was observed in HEK293T cells cotransfected with the 3' untranslated region (3'UTR) of RSK or p70S6K (pGL3-RSK/pGL3-p70S6K WT 3'UTR), the pRL vector, and miR-486-5p mimic compared to the control (cotransfected with RSK or p70S6K 3'UTR, pRL vector, and NC mimic), while no significant change in luciferase activity was found upon cotransfection of miR-486-5p mimic with the mutated RSK or p70S6K 3'UTR (pGL3-RSK/pGL3-p70S6K Mut 3'UTR), indicating that miR-486-5p has direct binding sites in the RSK and p70S6K mRNA 3'UTRs (Figures 3C, D). To determine how miR-486-5p affected the expression of RSK and p70S6K, we investigated their expression in pLenti-miR-486 A549 and H1299 cells compared to that in pLenti cells by qRT-PCR and western blot. RSK and p70S6K expression was decreased by miR-486-5p at both the mRNA (Figures 3E, F) and protein (Figure 3G) levels in A549 and H1299 cells. Also, RSK and p70S6K expression was negatively correlated with that of miR-486-5p in the NSCLC tissue samples (Figure S3), especially in the LUAD samples (Figures 3H, I), and their expression was significantly upregulated in most NSCLC cell lines (Figures 3N, S). In accordance with the expression pattern of miR-486-5p in NSCLC tissue samples, both RSK and p70S6K were upregulated in most LUAD tissues, tissues with diameters ≥3 cm, and stage III/IV tissues (Figures 3J-M, O-R). Taken together, RSK and p70S6K are confirmed to be targets of miR-486-5p. To further study the effect of RSK and p70S6K on NSCLC, RSK and p70S6K siRNAs (siRSK 1-3 and sip70S6K 1-3) were transfected into A549 and H1299 cells, resulting in a significant reduction of mRNA (Figure S4). As siRSK-2 and sip70S6K-3 downregulated the mRNA of their respective targets most significantly, we chose them for subsequent research (simplified as siRSK and sip70S6K in the main figures).
The siRNAs we chose significantly downregulated the mRNA (Figures 4A, B) and protein (Figure 4C … Figure 4G) and cell migration (Figure 4H) of A549 and H1299 cells, whereas the cell apoptotic rate was upregulated (Figure 4I) by the siRNAs. These results showed that the downregulation of RSK and p70S6K could mimic the effects of miR-486-5p in NSCLC cells, indicating that the function of miR-486-5p on the NSCLC cell phenotype was exerted at least partly via downregulation of RSK and p70S6K. miR-486-5p Suppresses mTOR Pathway In Vivo To explore the effect of miR-486-5p on tumor growth and migration in vivo, 5 × 10^6 H1299 cells expressing either pLenti or pLenti-miR-486 were injected subcutaneously at the left flank, and 2.5 × 10^6 cells were injected intravenously, into SCID mice. Upon implantation, tumor volume was measured every week, and mice were sacrificed at week 8. The tumors are displayed in Figure 6A. Suppression of tumor growth by miR-486-5p became remarkable at week five and less significant later (Figure 6B). One possible reason is that the tumor growth space becomes confined as the tumor grows, and nutrients and oxygen are insufficient for tumor development, ultimately leading to festering. A noted reduction in tumor weight was also observed (Figure 6C). Moreover, miR-486-5p expression was significantly upregulated, and RSK and p70S6K were downregulated at the RNA level, in the pLenti-miR-486 stably transfected xenografts (Figures 6D-F). Protein expression of RSK and p70S6K was also downregulated in the pLenti-miR-486 group, together with a marked reduction in the protein level of phosphorylated p70S6K, as well as the downstream mTOR and phosphorylated mTOR levels (Figure 6G), indicating inactivation of the mTOR pathway by miR-486-5p.
H&E staining of the tumor tissues is displayed (Figure 6H, left panel), and that of the lung specimens showed that miR-486-5p significantly reduced the number and size of metastatic nodes (Figure 6H); … expression was increased compared with control (Figure 6H). In accordance with the phenotypic results, the protein expression of E-Cad was significantly upregulated by miR-486-5p, whereas N-Cad showed the opposite result (Figure 6G). As E-Cad and N-Cad are two critical cadherins marking the occurrence of the EMT, the IHC results suggested that miR-486-5p suppressed the migration of NSCLC by inhibiting the EMT process. To sum up, these results revealed that miR-486-5p impeded xenograft growth by targeting RSK and p70S6K, leading to inactivation and inhibition of mTOR signaling, and that tumor migration was also hindered partly via inhibition of the EMT process. DISCUSSION The mammalian or mechanistic target of rapamycin (mTOR) is a serine/threonine kinase that forms two protein complexes, mTOR complex 1 (mTORC1) and mTOR complex 2 (mTORC2), which differ in structure and function (29); it has been broadly studied and found to play fundamental roles in cell growth, metabolism, and the EMT process (30)(31)(32). Overactivation of mTOR is observed in more than 70% of cancers (33,34), indicating its critical functions in tumorigenesis. RSK and p70S6K are members of the RPS6K family (35). Plenty of studies have demonstrated that mTOR signaling is tightly regulated by miRNAs and long noncoding RNAs (lncRNAs) in various kinds of tumors (36)(37)(38)(39)(40)(41)(42), whereas how RSK and p70S6K are regulated is barely understood. Our work revealed that miR-486-5p could hinder lung cancer tumorigenesis through inhibition and inactivation of the mTOR pathway by targeting RSK and p70S6K, and that the miR-486-5p/RSK/mTORC1/p70S6K axis (Figure 7) could be active.
miR-486-5p has been broadly studied in an array of tumor types, including NSCLC (22,23,(43)(44)(45), leukemia (46)(47)(48), colorectal cancer (49,50), renal cell carcinoma (RCC) (51), prostate cancer (52,53), ovarian cancer (54), hepatocellular carcinoma (55), and breast cancer (56,57). While most of these studies demonstrated that miR-486-5p plays a tumor-suppressing function in different malignancies, several studies claim that it plays a tumor-facilitating role in leukemia (46), NSCLC (23), and prostate cancer (52). Thus, it is important to clarify its function in NSCLC. Moreover, the downregulation of miR-486-5p in NSCLC is found to be caused by hypermethylation of the miR-486-5p gene promoter region, which is also confirmed by others' work (58). To determine whether miR-486-5p plays a suppressive or promoting role in NSCLC development, we performed more in-depth research into its function in NSCLC and the underlying mechanisms of this function. The overexpression of miR-486-5p not only suppressed the expression of RSK and p70S6K both in vitro and in vivo but also inhibited p-p70S6K, mTOR, and p-mTOR. Also, the expression of CDK4 in the pLenti-miR-486-expressing xenografts was significantly downregulated (Figure S5), consistent with our former study (22). These findings suggested that miR-486-5p inhibited NSCLC by inactivating mTOR signaling. The expression pattern of miR-486-5p in NSCLC tissue samples has aroused much interest in its function in NSCLC tumorigenesis. Specifically, miR-486-5p expression is mainly downregulated in the LUAD and early-stage samples, and the expression of RSK and p70S6K shows the reverse tendency.
There have been several studies exploring the differentially expressed noncoding RNAs between LUAD and LUSC patients; thereafter, some lncRNAs, miRNAs, circular RNAs, and molecules like interleukin-6R (IL-6R) and IL-1β have been considered as diagnostic markers for distinguishing LUAD and LUSC. Our study may establish miR-486-5p as a biomarker for the early diagnosis of NSCLC, especially for LUAD cases. CONCLUSION Taken together, our work further confirmed that miR-486-5p acts as a suppressor in NSCLC at least partly by targeting mTOR signaling and CDK4. With further exploration of its expression pattern and functional mechanism in in vivo models, including exploration of its effective delivery to the tumor site, we believe it is a promising target in the clinical treatment of NSCLC. ETHICS STATEMENT The animal study was reviewed and approved by the Institutional Animal Care and Use Committee of Shanghai University. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. AUTHOR CONTRIBUTIONS YL and CJ designed the study and supervised the experiments. YW and CJ collected the NSCLC specimens and evaluated medical information. LD, WT, HZ and WL carried out the experiments and analyzed the results. LD and YL prepared the manuscript and revised it with WT. All authors contributed to the article and approved the submitted version. FUNDING This study was supported by the Pudong New District Plateau Discipline (PWYgy2018-06) and the Shanghai Key Laboratory for Molecular Andrology, Shanghai Institute of Biochemistry and Cell Biology, Chinese Academy of Sciences (SLMA-013).
Post-traumatic stress and depression following disaster: examining the mediating role of disaster resilience The current study used structural equation modeling to examine the role of disaster resilience as a mediator between disaster exposure and post-traumatic stress and depressive symptoms among a sample of 625 U.S. adults who experienced a disaster event. Results found that disaster resilience mediated the relationship between disaster exposure as a predictor and depression and post-traumatic stress as dependent variables. These findings have important implications for understanding the mechanisms by which disaster resilience supports post-disaster mental health and can inform future disaster mental health interventions and practice models. Introduction Environmental threats such as natural and human-caused disaster events (e.g., tornados, hurricanes, floods, oil spills) are increasing in prevalence and severity in the United States and worldwide. Between 2000 and 2019, approximately 510,837 individuals died and 3.9 billion people were affected by disasters (1). Disasters and other environmental threats pose profound risks to human well-being and cause widespread mortalities, morbidities, property loss, and reduced access to food, water, and housing (1). Furthermore, they can contribute to adverse psychological risks and behavioral health disorders, including substance use, depression, anxiety, and post-traumatic stress disorder (2,3). After a disaster, it is common for people to experience a range of emotional and mental health difficulties, including stress, anxiety, fear, and grief. These effects can be short-term, such as increased stress and anxiety in the immediate aftermath of the disaster, or more long-term, such as the development of post-traumatic stress disorder (PTSD) and depression (3).
Various factors have been found to place individuals more at risk for developing depression and PTSD following disasters. Prior research [e.g., (4)(5)(6)] indicates the extent of psychological harm is associated with factors such as the severity of the disaster (e.g., an EF-5 tornado), the degree of exposure (e.g., personal injuries, loss of home), and the magnitude of community destruction (e.g., the prevalence of homes, schools, and hospitals destroyed). For example, in a meta-analytic review, Brewin et al. (7) found an association between the severity of the disaster trauma (higher degree of disaster exposure) and the subsequent severity of depression symptoms. In addition, prior studies have indicated that a dose-response effect occurs, wherein PTSD and depression symptoms have been found to increase with greater disaster exposure levels (5,8,9). While disaster exposure has been found to have a direct effect on post-disaster depression and PTSD, it could also indirectly affect depression and PTSD through a third mediating variable, such as resilience. Although different definitions of resilience exist in the literature [for a review, see (10)], most of them generally share the idea that resilience is the ability of an individual to positively adapt in the face of stress, risk, and adversity (10-13). This definition indicates that resilience is a process and that protective factors (e.g., optimism, distress regulation, environmental resources) foster specific processes in the individual that assist in preventing adverse outcomes and promote positive adaptation and growth following exposure to stressful or traumatic events (4,14).
Within a disaster context, a risk and resilience framework has been described (15), wherein resources or protective factors counterbalance the threats of disaster exposure. In terms of conceptualizing the process of resilience in a research model, resilience has the potential to operate as a mediator (16,17) between risk factors (e.g., disaster exposure) and adverse outcomes [e.g., depression, PTSD; (18)]. Known as the "protective factor model," resilience has been found to influence the effect of a risk factor by mediating the adverse impact of risk on negative outcomes (19,20). For example, in prior research, resilience has been found to mediate the relationship between interpersonal risk factors and hopelessness, contributing to lower levels of hopelessness in a sample of individuals with clinical depression (19). Resilience has also been found to mediate COVID-19 pandemic-related stress, contributing to lower depression and anxiety (21) and higher academic success among college students (22).
Despite the role of resilience as a potential mediator between risk factors and mental health outcomes, few studies have examined the possible mediating relationship of resilience in mitigating adverse mental health outcomes following exposure to disaster events (23). While there is a large amount of evidence indicating that disaster exposure and resource loss can have a detrimental impact on mental health after disasters (24,25), less is known about the processes and mechanisms by which resilience mitigates risk factors and reduces the probability of a negative mental health outcome. Uncovering the potential mechanisms by which disaster resilience may be directly and indirectly related to mental health outcomes is important for disaster preparedness and response, as it can provide insights into protective factors that are particularly important in the event of a disaster. Therefore, to address this gap, the objective of the current study was to examine whether disaster resilience had a protective mediating effect on the relationship between disaster exposure and post-disaster depression and PTSD among 625 U.S. adults exposed to disaster (e.g., hurricane, tornado, wildfire, oil spill). In the current study, disaster resilience was conceptualized as various internal and external factors that interact to influence an individual's ability to adapt and recover following exposure to disaster (26). These results could provide a further understanding of the dynamic process of resilience by uncovering its interactive mechanisms between exposure to disaster and post-disaster mental health. Structural equation modeling (SEM) was utilized to test this model, and a cross-sectional study was conducted among a sample of adults exposed to a disaster event (N = 625). Based on the evidence reviewed above, the following hypotheses guide this study: H1. Disaster exposure will be positively associated with PTSD and depression. H2.
Disaster exposure will be positively associated with disaster resilience. H3. Disaster resilience will (a) have an inverse or negative relationship with PTSD and depression and (b) will mediate the relationship between disaster exposure and PTSD and depression. Data collection procedures Data collection procedures were approved by the [Identity Removed for Review] Institutional Review Board (IRB). Participants qualified for this study if they were 18 or older and had experienced a disaster within the previous 3 years (2016-2019). To ensure the statistical analyses possessed sufficient statistical power for the SEM model, a power analysis was conducted to help determine the adequacy of the required sample size. The criterion was set that the estimated power needed to be 80% or higher, with a significance level (α) set at 0.05, for all the parameters of interest within the SEM (e.g., factor loadings, correlations, and regression paths); a projected sample size of at least 500 participants was found to be adequately powered (27). Participants were recruited through purposive online sampling using Qualtrics' panel aggregator sampling service. The Qualtrics panel aggregator provides researchers access to market research panels and uses digital technology (e.g., IP address checks, digital fingerprinting) to ensure participants' data are as valid and reliable as possible (28). In addition, Qualtrics can monitor the data collection procedure and control for issues such as participant inattentiveness or ineligibility, high incompletion rates, duplicate responses, or unreasonably quick completion times (29). Qualtrics was chosen as the online data collection platform following research indicating that samples recruited via online panel aggregators represent the U.S.
population demography slightly better and are usually less expensive than convenience samples (30). Qualtrics invited participants to the study via a link to a screening questionnaire that assessed eligibility, namely whether they lived in a U.S. state or territory that had experienced a natural or human-caused disaster in the prior 3 years (2016-2019). Accordingly, the states targeted for recruitment included California, Tennessee, North Carolina, South Carolina, Georgia, Alabama, Mississippi, Florida, and Texas. Using the online interface of Qualtrics, participants were provided with study instructions and self-reported questionnaire items. In addition, participants were compensated for their time with incentives through the Qualtrics incentive program (e.g., prize drawings and accumulated rewards). Measures Disaster exposure Disaster exposure (M = 9.72, SD = 1.72, α = 0.66) was measured by participants rating their perceptions of exposure to five main disaster-related stressors: did they lose personal belongings, was their home or property damaged, did they experience bodily injury, did their life or a loved one's life feel threatened, and did they experience feelings of helplessness, fear, or horror [see (31,32)]. Participants rated their responses on a 4-point Likert scale with response options ranging from 1 = not at all to 4 = a great deal. All items were summed to create an observed variable.
Disaster resilience

Disaster resilience (M = 166.51, SD = 28.53, α = 0.96) was measured via the Disaster Adaptation and Resilience Scale [DARS; (26)], a 43-item scale consisting of five domains found to support disaster resilience: material resources, social resources, distress regulation, problem-solving, and optimism. Each item is rated on a 5-point Likert scale ranging from 0 (not at all true) to 4 (true nearly all of the time), with higher scores reflecting higher levels of resilience. Participants were prompted to think about the most recent disaster event and report whether they possessed a specific protective factor (e.g., distress regulation, access to basic resources); the items were used to create a latent variable.

Post-traumatic stress

Post-traumatic stress disorder (PTSD) symptoms (M = 34.76, SD = 23.22, α = 0.97) were measured via the Impact of Event Scale-Revised [IES-R; (33)]. The scale consists of three factors of symptoms related to post-traumatic stress: avoidance (eight items), hyperarousal (six items), and intrusion (eight items). Sample items include: "Any reminder brought back feelings about it," "I felt irritable and angry," and "My feelings about it were kind of numb." In the current study, participants were instructed to report how distressing or bothersome each symptom had been within the past 7 days with respect to the most recent disaster event. Responses for the IES-R were provided on a 5-point Likert-like scale ranging from 1 = not at all to 5 = extremely and were used to create a latent variable.
Depression

Depression (M = 3.93, SD = 1.97, α = 0.89) was measured via the Patient Health Questionnaire [PHQ-2; (34)]. The PHQ measures the degree to which an individual has experienced depressed mood over the past 2 weeks in order to screen participants for disaster-related depression. Responses were provided on a 4-point Likert-like scale ranging from 0 = not at all to 3 = nearly every day and were used to create a latent variable.

Analyses

The demographic characteristics of respondents were analyzed using univariate methods including means, standard deviations, and frequencies. To examine the relationships between disaster exposure, disaster resilience, and mental health outcomes, structural equation modeling (SEM) was used. Using the two-step procedure recommended by Kline (35), we first tested a measurement model (confirmatory factor analysis, CFA) to examine and confirm the factor structure of the latent variables and indicators (e.g., disaster resilience, PTSD, depression). Next, the structural model analyzed the direct effects of disaster exposure on mental health outcomes and whether the impact of disaster exposure on PTSD and depression can be filtered, or mediated, by the individual's level of disaster resilience.
For both the measurement and structural SEM models, maximum likelihood estimation with robust standard errors was performed using R software and the lavaan package (R Development Core Team, 2011; Rosseel, 2012) (37, 38). Guidelines for goodness-of-fit indices used to evaluate model fit, based on the recommendations of Little (36), included the root mean square error of approximation (RMSEA) < 0.08, standardized root mean square residual (SRMR) < 0.08, comparative fit index (CFI) > 0.90, and Tucker-Lewis Index (TLI) > 0.90. Residuals were also inspected for outliers, which can indicate a model misfit that is not due to chance. In addition, modification indices were inspected for high values indicating the possible need to remove an item or change the path of an indicator (35). To test the mediation or indirect effects, the 95% confidence interval of 1,000 bootstrapped resamples of the product of coefficients was generated; if the confidence interval does not include zero, the effect is considered statistically significant (37). For data missing at random, full information maximum likelihood estimation was used, which assumes missing data points have an expectation equal to a model-derived value estimated from the remaining data points (38).
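The bootstrapped indirect-effect test described above can be sketched in a few lines. This is a minimal illustration, not the study's actual lavaan code: it simulates data with a true mediation path (the variable names x, m, y and the effect sizes are hypothetical), estimates the product of coefficients a*b by ordinary least squares, and builds a percentile confidence interval from resamples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                       # "exposure" (simulated)
m = 0.5 * x + rng.normal(size=n)             # "resilience" (mediator, simulated)
y = 0.5 * m + 0.1 * x + rng.normal(size=n)   # "symptoms" (simulated)

def indirect_effect(x, m, y):
    # a: regress mediator on predictor; b: regress outcome on mediator, controlling for predictor
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
significant = not (lo <= 0.0 <= hi)          # CI excluding zero => significant mediation
print(lo, hi, significant)
```

With a true indirect effect of 0.25 built into the simulated data, the percentile interval excludes zero, mirroring the decision rule used in the study.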
For the SEM analyses, a measurement model of the latent variables (e.g., disaster resilience, depression, PTSD) was first estimated; the initial measurement model did not achieve an acceptable fit, as both the CFI and TLI were less than 0.90. To remedy this issue, parceling items, or combining indicators, can be a valuable method to improve model fit when latent variables have a high number of indicators, while still providing information about the relationships among the latent variables (36). After parceling the 22 indicators for the PTSD latent variable into three equal-sized domain parcels, the measurement model achieved acceptable fit: χ2 (2108) = 4588.933, p < 0.01, CFI = 0.91, TLI = 0.90, RMSEA = 0.04, SRMR = 0.05. Next, the structural model was estimated and achieved acceptable fit, with χ2 (1117) = 2484.079, p < 0.01, CFI = 0.91, TLI = 0.90, RMSEA = 0.05, SRMR = 0.06, allowing for the testing of hypotheses (Table 2 and Figure 1).

In Figure 1, the SEM results revealed significant relationships among all the study variables. The first hypothesis (H1) predicted that disaster exposure would have a significant positive relationship with PTSD and depression symptoms. H1 was supported: disaster exposure had a significant and positive relationship with both PTSD (β = 0.744, p < 0.001) and depression (β = 0.773, p < 0.001). Individuals who had encountered more disaster-related losses and stressors (e.g., injuries, loss of a loved one, property damage) had a higher risk of disaster-related PTSD and depression. Next, the second hypothesis (H2) predicted that disaster exposure would have a significant and positive relationship with disaster resilience. H2 was supported: disaster exposure was significantly associated with having more disaster resilience (β = 0.109, p < 0.05). An increase in disaster exposure was found to predict an increase in the level of disaster resilience.
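Item parceling, as used above, can be illustrated with a short sketch. This is not the study's code: it assumes (hypothetically) a responses matrix with the 22 IES-R items ordered by symptom domain, and builds one parcel per domain as the mean of that domain's items.

```python
import numpy as np

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(625, 22))   # 625 respondents x 22 IES-R items (simulated)

# Assumed domain ordering: avoidance (8 items), hyperarousal (6), intrusion (8)
domains = [slice(0, 8), slice(8, 14), slice(14, 22)]
parcels = np.column_stack([responses[:, d].mean(axis=1) for d in domains])
print(parcels.shape)                             # three domain parcels per respondent
```

Replacing 22 item-level indicators with 3 parcel-level indicators reduces the number of estimated parameters, which is what allowed the measurement model above to reach acceptable fit.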
Finally, the third hypothesis (H3a) predicted that disaster resilience would be inversely associated with PTSD and depression symptoms. H3a was also supported, as disaster resilience had a significant and negative association with PTSD (β = −0.116, p < 0.001) and depression (β = −0.246, p < 0.001). Furthermore, the third hypothesis (H3b) predicted that disaster resilience would mediate the relationship between disaster exposure and PTSD and depression.

Discussion

Disaster events place stress on human life, livelihood, and health, and can have significant impacts on the mental health and well-being of individuals exposed. To test whether the impact of disaster exposure on PTSD and depression can be mediated by disaster resilience, this study examined direct and indirect relationships between disaster stress, disaster resilience, and mental health using structural equation modeling among 625 U.S. adults. Results from the current study point to several findings. First, SEM analysis found that individuals with more disaster exposure had higher levels of PTSD and depressive symptoms. These findings are consistent with prior studies (2, 8, 41) indicating that individuals exposed to more disaster-related losses (i.e., property damage, injuries) were more likely to demonstrate symptoms of PTSD and depression, and illustrate that disaster exposure can have significant effects on the mental health of individuals.
Second, this study found that more exposure to disaster losses was associated with more resilience. This finding highlights that individuals experiencing greater amounts of disaster-related adversity required greater levels of resilience to help mitigate the negative effects of disaster exposure. Resilience, or protective factors, have been theorized to help mitigate the effects of stressful and traumatic experiences after a collective trauma, and this study's results confirm prior studies (15, 42) that have found a positive association between exposure to adversity and greater resilience. However, researchers note that at certain doses, individuals may no longer be capable of adapting, when exposure levels are cumulative and ongoing (8, 43, 44). For example, previous studies have found that cumulative exposure to multiple collective traumas may predispose people to negative mental health outcomes (43-45, 47). Future research should continue to examine the relationship between disaster exposure and resilience responses to time-limited stressor events and in the face of chronic, ongoing collective traumas (46).

In addition to acknowledging potential risks and adverse impacts from disasters, there is increased recognition of the importance of understanding the mechanisms of disaster resilience (47). Results from this study found that disaster resilience demonstrated a significant mediating relationship between disaster exposure and PTSD and depression among participants. This finding provides further empirical support for conceptualizations of disaster resilience's ameliorating role in contributing to better mental health outcomes following disaster exposure (49-51), and further theoretical understanding of the phenomenon of resilience and how it operates in the specific context of disasters (Schneiderman et al., 2005). In other words, disaster resilience was found to play an important role in changing the strength or direction of the relationship between
disaster exposure and post-disaster mental health, such that access to more disaster resilience resources (e.g., material, social, and psychological resources) contributed to fewer negative mental health effects. Understanding the underlying mechanisms that help to explain the relationships between risk factors and adverse outcomes provides important insights into potential interventions that could improve disaster mental health response and preparedness. Findings from this study will be able to assist disaster researchers and practitioners in identifying protective factors (e.g., physical, social, and psychological resources) for intervention development that promote resilience and healthy psychological development in communities experiencing disaster.

Finally, these findings also have the potential to contribute to future research on identifying factors supporting the resilience of medical and healthcare professionals working in disaster and emergency response settings. Prior studies have found that working in disaster settings exposes healthcare workers to considerable stress, trauma, and emotional strain and can lead to conditions such as post-traumatic stress disorder (PTSD), depression, suicidality, and anxiety (52, 53), and this study illustrates the important mechanism, or process, of disaster resilience in reducing mental health symptoms. These findings could be used to inform future research on specific protective factors that could play a beneficial role in reducing negative mental health outcomes among high-risk medical workers in disaster settings (55). By systematically examining and refining these protective factors, future research can contribute to developing targeted interventions, training programs, and support systems tailored to the disaster resilience of the healthcare workforce.
Limitations

This project was limited by non-probability sampling, self-report measures, a cross-sectional design, and the sample's disaster experiences (e.g., majority natural hazards, hurricanes). First, this study utilized non-probability sampling, and therefore the results may not be generalizable to all individuals experiencing a disaster event. Future studies could improve on this limitation by utilizing a probability sampling design. Second, this study utilized self-report measures that may not be as accurate as a full clinical evaluation of PTSD or depression symptomology. A third limitation is that this study is cross-sectional in design, and therefore the collected data cannot support causal claims of temporal order (56). This limitation could be addressed by future studies employing a longitudinal design that collects data at several points, for example assessing resilience at 1 month, 6 months, and 1 year, to further increase knowledge about disaster resilience. Despite these limitations, this study found important associations that were consistent with theoretical predictions (e.g., disaster exposure and resilience had direct and indirect associations with PTSD and depression symptoms).

Conclusion

The current study used structural equation modeling (SEM) to identify the relationships between disaster exposure and disaster resilience on mental health outcomes in a sample of 625 U.S. adult participants. Results found that disaster exposure was significantly related to higher levels of PTSD and depression symptoms. Disaster resilience was inversely related to PTSD and depression symptoms and played an important role in mediating the relationship between disaster exposure and mental health outcomes. Findings from this study can assist disaster researchers and practitioners in identifying protective factors to support disaster resilience interventions and practice models.
Integrable Outer billiards and rigidity

In the present paper we introduce a new generating function for outer billiards in the plane. Using this generating function, we prove the following rigidity result: if the vicinity of a smooth convex plane curve γ of positive curvature is foliated by continuous curves which are invariant under the outer billiard map, then the curve γ must be an ellipse. In addition to the new generating function used in the proof, we also overcome the noncompactness of the phase space by finding suitable weights in the integral-geometric part of the proof. Thus, we reduce the result to the Blaschke-Santalo inequality.

Introduction

In this paper, we address the question of integrability of outer billiards. Throughout this paper, we consider a C^2-smooth strictly convex closed curve γ in the plane of positive curvature. The outer billiard map T acts in Ω, the exterior of γ, as follows: given a point A ∈ Ω, its image T(A) is defined by the condition that the segment [A, T(A)] is tangent to γ exactly at the middle of the segment. The map T is a symplectic diffeomorphism of Ω with respect to the standard symplectic form of the plane. Thus, Ω is the phase space of the outer billiard.

The model of outer billiards was introduced by B. Neumann in the late 1950s [25] and even earlier, in 1945, by M. Day [13]. Thereafter, Jürgen Moser popularized the system in the 1970s as a toy model for celestial mechanics [23], [24].

Given a billiard curve γ of class C^r, the billiard map T is of class C^l, where l = r − 1. If l is sufficiently large, then the Moser twist mapping theorem [24] applies (see [15] for a proof of this application). It then follows that there always exist invariant curves of the outer billiard arbitrarily close to infinity, and therefore all orbits of the billiard are bounded. The initial requirement l ≥ 333 in Moser's theorem was improved by H. Russmann [26] to l > 5, and then by M. Herman [20], showing that l > 2 is enough.
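As a quick numerical illustration of the definition (not code from the paper), for a circle of radius R the tangency point M from an external point A has a closed form, and T(A) = 2M − A since M is the midpoint of [A, T(A)]. The sketch below iterates one of the two orientation choices of the map and checks that orbits stay on concentric circles, consistent with the foliation by concentric homothetic invariant curves described for the ellipse/circle case.

```python
import math

def outer_billiard_circle(A, R=1.0):
    """One step of the outer billiard map for a circle of radius R centered at the origin.

    The tangent line from A touches the circle at angle theta + alpha (counterclockwise
    choice), alpha = arccos(R/d); the image is the reflection of A in the tangency point.
    """
    x, y = A
    d = math.hypot(x, y)                  # requires d > R (A outside the circle)
    theta = math.atan2(y, x)
    alpha = math.acos(R / d)
    Mx, My = R * math.cos(theta + alpha), R * math.sin(theta + alpha)
    return (2 * Mx - x, 2 * My - y)       # T(A) = 2M - A

A = (2.0, 0.0)
for _ in range(50):
    A = outer_billiard_circle(A)
print(math.hypot(*A))                     # the radial distance is preserved
```

A short computation shows |T(A)| = |A| for the circle, so each orbit lies on a circle concentric with γ, which is the invariant foliation in the integrable case.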
Later, unbounded orbits for outer billiards were proven to exist for curves which are not C^1-smooth, starting from the work of R. Schwartz [28]. In [29], unbounded orbits were discovered for the semi-circle by computer experiments, and were confirmed theoretically in [14]. Apparently, nothing is known on this problem for the smoothness of γ between C^1 and C^r, r > 3.

Date: 5 October 2023. MB is partially supported by ISF grant 580/20 and DFG grant MA-2565/7-1 within the Middle East Collaboration Program.

In this paper, we address the natural question which is analogous to the Birkhoff-Poritsky conjecture for usual billiards (see [21], [16], [8], [22] for recent progress): are there integrable outer billiards in the plane other than ellipses? For the algebraic version of this question, the answer is negative, as shown in [31], [17].

It is easy to see from the definition of the outer billiard of γ that the group of affine transformations commutes with the outer billiard map. Thus, the outer billiard map of an ellipse preserves the foliation of Ω by concentric homothetic ellipses (since this is obviously the case for a circle).

Outer billiard map.

We now turn to formulate our main results.

Theorem 1.1. Assume that the outer billiard of γ is totally integrable, i.e., the phase space Ω is foliated by continuous rotational (i.e., non-contractible in Ω) invariant curves; then γ is an ellipse.

This result can be stated in variational terms, as we now turn to explain. In Section 2, we shall introduce a non-standard generating function S for the outer billiard, which corresponds to the symplectic polar coordinates (p, φ) on Ω, where p = r²/2, r is the radial distance, and φ is the polar angle. Then dp ∧ dφ is the standard symplectic form. Moreover, we will prove that the function S satisfies the positive twist condition S_12(φ_0, φ_1) < 0 (Theorem 4.1 below).

Consider the corresponding action functional (see e.g.
the survey [27]):

(1)  F({φ_n}) = Σ_n S(φ_n, φ_{n+1}).

The extremal configurations {φ_n} of (1) are in one-to-one correspondence with the orbits {(p_n, φ_n)} of the billiard map T, where p_n = S_2(φ_{n−1}, φ_n) = −S_1(φ_n, φ_{n+1}). We shall say that the extremal {φ_n} is locally minimizing if any finite sub-segment {φ_n}, M ≤ n ≤ N, is a local minimum of the function

F_{MN}(φ_M, ..., φ_N) = Σ_{n=M}^{N−1} S(φ_n, φ_{n+1})

with the endpoints φ_M, φ_N fixed. We term the corresponding orbit {(p_n, φ_n)} a locally minimizing orbit. We shall denote by M the subset of the phase space Ω swept by the locally minimizing orbits.

Theorem 1.2. If all orbits of the outer billiard T are locally minimizing, then γ is an ellipse.

Theorem 1.2 implies the following geometric fact, which is equivalent to the existence of conjugate points for a billiard configuration.

Corollary 1.3. For any outer billiard which is different from an ellipse, there exists a radial tangent vector v ∈ T_xΩ, v = ∂/∂r(x), and a positive n, such that after n iterations the vector DT^n(v) is radial again.

Now, I will explain how Theorem 1.1 follows from Theorem 1.2. Given a rotational invariant curve of a positive twist symplectic map, by Birkhoff's theorem this curve is a graph of a Lipschitz function (see e.g. [27]). Therefore, it is differentiable almost everywhere, and hence almost every orbit {(p_n, q_n)} on the curve has an invariant tangent field {(δp_n, δq_n)} with δq_n > 0. It then follows from the criterion for local minimality (Theorem 1.1 of [9]) that almost every orbit on the invariant curve is locally minimizing. Since the set M, which is swept by all locally minimizing orbits, is closed [9], all orbits on the rotational invariant curve are locally minimizing. Therefore, if there exists a foliation of Ω by rotational invariant curves, then all orbits in the phase space Ω are locally minimizing. Thus, Theorem 1.1 follows from Theorem 1.2.

Remarks. 1. In fact, M. Herman proved that all orbits on a rotational invariant curve are actually global minimizers.

2.
We will not use this in this paper, but in fact Theorem 1.1 implies Theorem 1.2. Namely, if all orbits are locally minimizing, then one can reconstruct the foliation by rotational invariant curves. This was first performed by J. Heber [19] for geodesic flows, and was then extended to twist maps in [12], [3], [2].

3. In [9], we formulated the criterion for local maximality, since we there considered negative twist maps.

The main idea in the proof of Theorem 1.2 is to apply the so-called E. Hopf type rigidity for the case of billiard dynamics. This method has two parts. First, along every locally minimizing orbit one can construct a positive discrete Jacobi field and the corresponding auxiliary function ω, which must satisfy a certain evolution under T. The construction of the positive Jacobi field is the discrete analog of E. Hopf's original construction. We refer to [5], [9], and also Section 5 for the details.

The second part of the method is to prove an integral-geometric inequality, which can be consistent with the evolution of ω only in the case of ellipses. This rigidity method was first found in [5] for ordinary billiards. Later, it was realized for billiards on the sphere and hyperbolic plane [6] and also for magnetic billiards in [7]. Recently, it was also successfully applied in [8] to the Birkhoff-Poritsky conjecture in the centrally symmetric case. Interestingly, in the paper [8], the integral geometry part was performed with suitable weights. This is also the case here.

A new interesting class of symplectic billiards was introduced in [1]. In a recent paper [4], rigidity for this model of billiards was proven using new ideas.

The rigidity for outer billiard total integrability remained resistant for a long time due to two main difficulties. Firstly, the phase space Ω is not compact, which requires suitable weights for the integral-geometric part of the proof. Secondly, the affine nature of the problem makes it harder than the case of ordinary Birkhoff billiards.
In this paper, we present the correct weights and reduce the proof of the main theorems to the Blaschke-Santalo inequality of affine geometry. A crucial new tool in the proof is the non-standard generating function of outer billiards. This generating function is somewhat analogous to the one used in [8] for usual billiards.

In Sections 2 and 3, we construct the non-standard generating function S and compute the derivatives for the change of variables. In Section 4, we find the second derivatives of S and verify the twist condition. In Section 5, we recall the properties of the function ω and its evolution. In Section 6, we obtain an inequality which is valid under the assumption that all orbits are locally minimizing. In Section 7, we show that in fact the converse inequality holds, provided the origin is chosen at the Santalo point of the curve γ. Finally, we prove Theorem 1.2 and the Corollary in Section 8.

Acknowledgments

I discussed the problem of integrable outer billiards with Sergei Tabachnikov for many years. I am thankful to him for illuminating discussions and for the encouragement. I would also like to thank Luca Baracco, Olga Bernardi, Yaron Ostrover, and Leva Buhovsky for interesting and helpful discussions. I am grateful to Oleg Shaynkman for his help in conducting computer simulations. I am grateful to Marie-Claude Arnaud for giving the reference to M. Herman's result, and to the anonymous referees for careful reading and suggestions for improvement.

Non-standard generating function

We introduce a new generating function S which is different from the one used in previous papers, for example, from the one used in [10], [15], [18], [29] and the book [30].

Let us fix the origin O to be an arbitrary point inside the curve γ. Later, in Section 7, we will need to specify the origin to be at the Santalo point of the convex body Γ, which is bounded by γ.
Take the standard symplectic form in the plane and use the symplectic polar coordinates (p, φ), p = r²/2, with respect to O. For the billiard map T : (p_0, φ_0) → (p_1, φ_1), p_i = r_i²/2, i = 0, 1, we wish to find the generating function corresponding to the primitive 1-form α = p dφ = (r²/2) dφ. Therefore, we need to find a function S(φ_0, φ_1) depending on the two angles such that

(2)  S_1(φ_0, φ_1) = −p_0,  S_2(φ_0, φ_1) = p_1.

Here and below, we shall use sub-indices 1 and 2 of S for the partial derivatives with respect to φ_0, φ_1 respectively.

The function S(φ_0, φ_1) can be easily found from geometric considerations, but we will also give the computational proof below. Given the values φ_0, φ_1, φ_0 < φ_1 < φ_0 + π, consider the segment with the ends lying on the rays with the angles φ_0 and φ_1, which is tangent to the curve γ exactly at the middle (Figure 2). One can easily see that the point M is the tangency point of γ with a hyperbola whose asymptotes are the rays in the directions φ_0 and φ_1 (the envelope of the lines that cut off a fixed area from an angle is a hyperbola whose asymptotes are the sides of the angle).

Then, we define S(φ_0, φ_1) to be the area of the triangle △M_0OM_1 bounded by the two rays and the segment.

With this definition of the function S, Figure 3 gives a geometric explanation of (2). Indeed, let us consider φ̃_1 := φ_1 + ε and compute the difference S(φ_0, φ̃_1) − S(φ_0, φ_1). Then we have (see Figure 3) that, up to higher-order terms in ε, this difference equals the area of the circular sector of radius r_1 and angle ε, that is, (r_1²/2)ε = p_1 ε. Moreover, we have S_2 = p_1, as required in (2).

We shall now compute S as follows. Let γ be parameterized by the polar angle φ, and let the radial function of γ be denoted by r(φ). Hence γ(φ) = r(φ) e_φ, where e_φ, e_φ^⊥ are the unit vectors in the directions φ, φ + π/2. We shall write the tangency point as M = γ(φ) and the endpoints of the segment as M_0 = γ(φ) − t γ′(φ), M_1 = γ(φ) + t γ′(φ), for a parameter t > 0. With these formulas, S looks especially simple: S = t r²(φ).

Derivatives for the change of variables

In addition, we have the explicit formulas for the transition from (φ, t) to (φ_0, φ_1) and for r_0, r_1.
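The identity S = t r²(φ) can be checked numerically. The sketch below is an illustration, not the paper's code: it takes an ellipse in polar form, builds M_0 = γ − tγ′ and M_1 = γ + tγ′ with a central-difference derivative, and compares the triangle area |M_0 × M_1| / 2 with t r².

```python
import math

def gamma(phi, a=2.0, b=1.0):
    # ellipse centered at the origin: r(phi) = ab / sqrt(b^2 cos^2 phi + a^2 sin^2 phi)
    r = a * b / math.sqrt((b * math.cos(phi))**2 + (a * math.sin(phi))**2)
    return (r * math.cos(phi), r * math.sin(phi)), r

phi, t, h = 0.7, 0.3, 1e-6
(gx, gy), r0 = gamma(phi)
(gxp, gyp), _ = gamma(phi + h)
(gxm, gym), _ = gamma(phi - h)
dgx, dgy = (gxp - gxm) / (2 * h), (gyp - gym) / (2 * h)   # gamma'(phi)

M0 = (gx - t * dgx, gy - t * dgy)                          # segment endpoints
M1 = (gx + t * dgx, gy + t * dgy)
area = abs(M0[0] * M1[1] - M0[1] * M1[0]) / 2.0            # area of triangle O M0 M1
print(area, t * r0**2)                                     # the two values agree
```

The agreement reflects the cross-product computation (γ − tγ′) × (γ + tγ′) = 2t(γ × γ′) = 2t r², so the triangle area is exactly t r².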
2. We shall introduce the following notation:

χ(φ) := r²(φ) + 2r′²(φ) − r(φ)r″(φ),

which is the numerator of the formula for the curvature k of γ in polar coordinates, k = χ/(r² + r′²)^{3/2}, and hence is strictly positive. The Jacobian matrix J = ∂(φ_0, φ_1)/∂(φ, t) and its inverse can be easily computed. The determinant of J can be computed explicitly, and from it we obtain the inverse matrix J^{−1} = ∂(φ, t)/∂(φ_0, φ_1).

Now, we can confirm the geometric conclusion that S is a generating function by exact computation. Namely, from S = r²t and formulas (4) we obtain the relations (2). Thus, we have proved:

Theorem 3.1. The function S(φ_0, φ_1), which equals the area of the triangle △M_0OM_1 (Fig. 2), is a generating function of the outer billiard map with respect to the symplectic polar coordinates in the plane.

Partial derivatives of the generating function S

Finally, using the formulas (4), we can find by the chain rule the derivatives of the function S with respect to φ_0, φ_1, denoted by sub-indices 1, 2. It is important to mention that all the formulas are rational functions in t with coefficients depending on r(φ), r′(φ), r″(φ).

As a consequence of the last formula of (5), the theorems of Birkhoff and Herman apply, and we get the following:

Theorem 4.1. The cross derivative S_12(φ_0, φ_1) is strictly negative on Ω, and hence the outer billiard map T is a positive twist map with respect to polar coordinates. Every continuous rotational invariant curve of T is star-shaped and Lipschitz (its radial function r(φ) is Lipschitz). Moreover, all orbits on the invariant curve are minimizers for the functional (1).
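For a circle of radius R centered at the origin, a short computation gives the closed form S(φ_0, φ_1) = R² tan((φ_1 − φ_0)/2) for the triangle area, which makes the twist condition S_12 < 0 easy to check numerically. The sketch below is offered as an illustration only, using a finite-difference mixed derivative:

```python
import math

def S(phi0, phi1, R=1.0):
    # generating function for the circle: area of the triangle cut off by the tangent chord
    return R**2 * math.tan((phi1 - phi0) / 2.0)

def S12(phi0, phi1, h=1e-4):
    # mixed second partial derivative by central finite differences
    return (S(phi0 + h, phi1 + h) - S(phi0 + h, phi1 - h)
            - S(phi0 - h, phi1 + h) + S(phi0 - h, phi1 - h)) / (4 * h * h)

vals = [S12(0.0, d) for d in (0.5, 1.0, 2.0, 3.0)]
print(vals)   # all negative: positive twist
```

Since S depends only on δ = φ_1 − φ_0, one has S_12 = −(R²/2) sec²(δ/2) tan(δ/2) < 0 for δ ∈ (0, π), matching the numerical values.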
The inequalities for total rigidity

Let A be a positive function on the phase space Ω of the outer billiard. We shall denote B := A ∘ T.

Theorem 5.1. Assume that all orbits of an outer billiard are locally minimizing. Then, for any positive A and B = A ∘ T, the following integral inequality holds, provided that the integral I is absolutely convergent on Ω with respect to the invariant measure dµ.

Proof. The first step of the proof is valid for any twist symplectic map of the cylinder. We refer to the papers [5] and [9] for the ideas of the method and the details.

Let T be a symplectic positive twist map of the cylinder with the coordinates (p, q), having a generating function S(q_0, q_1). The positive twist condition reads S_12 < 0. Thus we can state the following:

Lemma 5.2. For any symplectic positive twist map T of the cylinder: (1) the set M is a closed set invariant under T.

In the second step, we specialize to the assumptions of Theorem 1.2. Since all the orbits of T are assumed to be locally minimizing, we have M = Ω. Moreover, it follows from the bounds (3) of the lemma and the explicit expressions (5) of the derivatives S_11, S_22 that the function ω is bounded on compact sets and hence can be integrated. Indeed, S_11, S_22 in (5) are rational functions in t (with no real poles) with coefficients depending on r(φ), r′(φ), r″(φ).

Now, we choose an arbitrary positive continuous function A and B := A ∘ T; we multiply the first and second equations of (9) by B² and A² respectively, and subtract. From S_12 < 0 and the arithmetic-geometric mean inequality, we get a pointwise inequality, which we can then integrate against the invariant measure over a compact invariant annular region Ω_βγ, bounded by γ and a rotational invariant curve β. We get the desired inequality.

In this section, we shall use Theorem 5.1 in order to prove the following:

Proposition 6.1. If all orbits of an outer billiard γ are locally minimizing, then the following inequality holds:
where r is the radial function of γ and χ = r²(φ) + 2r′²(φ) − r(φ)r″(φ).

Proof. We choose the weights A, B appropriately, and with this choice of A, B we compute the integral I of Theorem 5.1 using formulas (5). Now, we denote the three summands in the last expression of (11) by F_1, F_2, F_3. We simplify the integral of (11) by integrating every summand with respect to t. Next, we compute separately the integrals for F_1 and for F_2, F_3.

For F_2 and F_3, summing the two resulting formulas and then evaluating from 0 to T → +∞, we see that the sum of the terms with Log vanishes. The last integral can be computed, where we used for the arc-length element and for the curvature the expressions

ds = (r² + r′²)^{1/2} dφ,   k = χ/(r² + r′²)^{3/2}.

A consequence of the Blaschke-Santalo inequality

In this section, we consider an arbitrary simple closed convex C² curve γ in the plane. Let Γ be the convex body bounded by γ. Using the Blaschke-Santalo inequality, we shall prove the inequality opposite to (10), provided that the origin is placed at the Santalo point of Γ.

Let me recall the relevant notions. For any point in the interior, x ∈ Int(Γ), one defines Γ_x, the polar dual of Γ with respect to x. By definition, the Santalo point of Γ is the unique point x ∈ Int(Γ) which gives the minimum of Area(Γ_x) (see e.g. [11]).

Suppose that the convex curve γ is such that the origin coincides with the Santalo point of the body Γ. We denote by Γ* the polar dual of Γ with respect to the Santalo point. Then the Blaschke-Santalo inequality states (see e.g. [11]) that

Area(Γ) · Area(Γ*) ≤ π²,

with equality only for ellipses. Now we can state the following:

Proposition 7.1. Let γ = ∂Γ be such that the Santalo point of Γ is at the origin. Then the inequality (13), opposite to (10), holds, where r is the radial function of γ and χ = r²(φ) + 2r′²(φ) − r(φ)r″(φ). Moreover, equality occurs if and only if γ is an ellipse.
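In polar coordinates the curvature is k = χ/(r² + r′²)^{3/2}, with χ = r² + 2r′² − r r″ as defined in the text; this can be sanity-checked numerically (an illustration only). For an off-center circle of radius R centered at (c, 0) with c < R, the radial function is r(φ) = c cos φ + sqrt(R² − c² sin²φ), and the curvature must come out as 1/R at every φ:

```python
import math

def r(phi, R=1.0, c=0.3):
    # radial function of a circle of radius R centered at (c, 0), origin in its interior
    return c * math.cos(phi) + math.sqrt(R**2 - (c * math.sin(phi))**2)

def curvature(phi, h=1e-5):
    r0 = r(phi)
    r1 = (r(phi + h) - r(phi - h)) / (2 * h)            # r'(phi)
    r2 = (r(phi + h) - 2 * r0 + r(phi - h)) / (h * h)   # r''(phi)
    chi = r0**2 + 2 * r1**2 - r0 * r2                   # numerator of the curvature
    return chi / (r0**2 + r1**2) ** 1.5

ks = [curvature(0.3 + 0.5 * j) for j in range(6)]
print(ks)   # each value is close to 1/R = 1.0
```

The positivity of χ for convex curves, used repeatedly above, is visible here as well, since k > 0 and the denominator is positive.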
Applying the Blaschke-Santalo inequality, we conclude the proof of Proposition 7.1.

Proof of Theorem 1.2. Let γ be an outer billiard such that all billiard orbits are locally minimizing. Then, for an arbitrary choice of the origin in the plane, Proposition 6.1 implies the inequality (10). On the other hand, if we choose the origin at the Santalo point, then (13) of Proposition 7.1 gives the opposite inequality. Therefore, we have equality in (13), and hence the curve γ is an ellipse. This completes the proof of Theorem 1.2.

Proof of Corollary 1.3. We need to show that if the curve γ is not an ellipse, then there exists a billiard configuration which has conjugate points, i.e., a nontrivial Jacobi field vanishing at two points. If this is not the case, then all finite segments of billiard configurations must have non-degenerate Hessian matrices δ²F_{MN}. But then all these matrices must be positive definite, by a continuity argument. Therefore, all orbits are locally minimizing, and by Theorem 1.2 the curve γ is an ellipse. This contradiction completes the proof of Corollary 1.3.

Remark. One can prove by a straightforward computation that the non-standard generating function S and the standard one H satisfy the condition (GA) of [9]. This condition guarantees that the classes M_S, M_H of locally minimizing orbits for S and H coincide. This fact implies, in particular, that if the curve is not an ellipse then conjugate points necessarily exist also for the functional corresponding to H. We will not dwell on this in this paper.
Continuous Move From BTW to Manna Model

In the present paper we consider the BTW model perturbed by random-direction anisotropy with strength factor ε ranging from 0 to 1, corresponding to the BTW and Manna models respectively, and investigate the properties of the statistical observables for various rates of anisotropy. By increasing ε, we observe a cross-over taking place between these models. On small length scales, the curves show properties similar to the BTW model, whereas in the infrared limit the corresponding κ is nearly the same as in the Manna model. The observations confirm that this perturbation is relevant for the BTW fixed point and that the infrared limit of the perturbed model is described by the Manna model. We also propose a differential equation whose solution properly fits the Green's function obtained by the simulation. This can help us to obtain the action of the perturbed model.

I. INTRODUCTION

In recent years, the concept of self-organized criticality (SOC) proposed by Bak et al. 1 has attracted a lot of attention as a possible general framework for explaining the occurrence of robust power laws in nature, as it does not require fine tuning of any parameter to reach criticality. Sandpile models were the first example of such systems. The abelian structure of the sandpile model was first discovered by D. Dhar, and the model is named the Abelian Sandpile Model (ASM) 2. Numerous works have been done on this model. Its connection with spanning trees 3, ghost models 4, the q-state Potts model and Loop Erased Random Walks (LERW) 5 is known. The different height and cluster probabilities of this model 6 and its various geometrical exponents have also been calculated numerically and analytically; for a review see 7. Several variations of the ASM have been studied in the past with a view to understanding the parameters that determine the different universality classes of self-organized critical behavior.
These include models in which particle transfer is directed, or models in which the toppling condition or the number of sand grains transferred depends on the local slope rather than the local height [8][9][10][11][12]. In this respect, it has been realized that stochasticity in the toppling rules can lead to critical behavior different from that of models with deterministic toppling rules. One of the most interesting BTW variations is the Manna model 8, in which after a toppling the grains are redistributed to a randomly chosen preferred direction (without dissipation). This model corresponds to a two-state model in which a hard-core interaction between two particles on the same site prevents the site from being doubly occupied. The interesting question then is which universality classes these models belong to. In fact, the precise identification of universality classes in sandpile models is an unresolved issue. It is generally assumed that the avalanche size and duration distributions follow simple power laws in the infinite-size limit, and that departures from such power laws reflect finite-size effects. Such effects complicate the estimation of critical exponents, since the estimates are sensitive to the choice of fitting interval. In Manna's original paper 8 it was claimed that the Manna model lies in the same universality class as the BTW model. Real-space renormalization group calculations 19 suggest that different sandpile models, such as the BTW and the Manna models, all belong to the same universality class. This result is also supported by a proposed field theory approach 20 which states that all sandpile models are described by the same effective field theory at the coarse-grained level. In contrast, Ben-Hur et al. 13 claimed that the Manna model is in the universality class of random neighbor models, which is distinct from the BTW universality class. Some further exact numerical results also confirmed this hypothesis 16.
Based on some numerical analysis (bias removal), A. Chessa et al. argued that the results of Ben-Hur et al. are the same for the two models, which implies that they may belong to the same universality class 15. They used finite size scaling (FSS) arguments in calculating the exponents of the size, time and area distributions of avalanches of the BTW and Manna models, which are believed not to be single fractals and not to fulfil FSS 30. Many papers disputed this result and reported some exact results about the scaling behavior of this model [16][17][18]. In spite of the large amount of work done on these variations, little attention has been paid to the question of how these models are linked and transform perturbatively into one another. Knowing the structure of this transition, one can obtain valuable information about these models. To this end, one can perturbatively add stochasticity to the BTW model. This can be done by adding anisotropy with a random preferred direction and arbitrary strength to the toppling rule of the BTW model and observing how observables evolve from the BTW to the Manna model. A study of patterned and disordered continuous ASM has been done in 21, in which it has been shown that quenched disorder leads to an irrelevant perturbation in the conformal field theory corresponding to the BTW model. The plan of the present paper is to see numerically how things change when the disorder is not quenched. For this, we consider the geometrical objects (interfaces) of the perturbed model and some other statistical functions. The properties of the interfaces are directly related to the correlation length of the system. In the discrete set up, in addition to the correlation length, the system has one more characteristic length scale, i.e. the lattice constant. On scales much smaller than the system correlation length and larger than the lattice constant, one expects the statistical features of the curves to be properly fitted by those of the unperturbed model.
There is an idea to describe the geometrical interfaces of 2D critical statistical models via growth processes known as SLE 23. It is a powerful tool to study the macroscopic interfaces of two-dimensional systems instead of local fields, as is common in ordinary CFT. The result of this idea is a complete classification of probability measures on random curves in (simply connected) domains of the complex plane satisfying two axioms: conformal invariance and the domain Markov property 27. From the correspondence of the ASM with the ghost model 4, one expects this model to be a c = −2 conformal field theory, and with the knowledge of the connection between conformal field theory and SLE 24 one finds that the ASM is related to SLE with κ = 2. This test is done in 25, where it is shown numerically that the boundaries of avalanches of the ASM are SLE(2) curves. To generate curves running from the origin to infinity from loops, one has to cut the loops horizontally and then send the end point of the curve to infinity. In this paper we consider the perturbed ASM on a square lattice and numerically analyze the properties of the interfaces of this model. In sections II and V we briefly introduce the ASM and SLE respectively. Sections III and VI contain numerical results on loop statistics and on SLE applied to these interfaces, respectively.

II. INTRODUCTION TO THE ABELIAN SANDPILE MODEL

Consider the ASM on a two-dimensional square lattice L × L. For each site we consider the height variable h_i, taking its values from the set {1, 2, 3, 4}, which gives the number of grains at this site. So each configuration of the sandpile is given by the set {h_i}. The dynamics of this model is as follows: in each step, a grain is added to a random site i, i.e. h_i → h_i + 1; then, if the resulting height becomes more than 4, the site topples and loses 4 grains, each of which is transferred to one of the four neighbours of the original site.
As a result, the neighboring sites may become unstable and topple, and a chain of topplings may happen in the system. At the boundary sites, a toppling causes one or two grains to leave the system. This process continues until the system reaches a stable configuration. Then another random site is selected, a grain is released on this site, and the process continues. This movement on the space of stable configurations leads the system, after a finite number of steps, to fall into a subset of configurations known as the recurrent states. It has been shown that the total number of recurrent states is det ∆, where ∆ is the discrete Laplacian; for details see 7. This model can be generalized to other lattice geometries and to an off-critical set up. For a lattice with d neighboring sites per site, a toppling occurs when h_i > d; the original site then loses d grains and the height of each of its neighbors increases by 1. It has been shown that the action corresponding to this model is: where θ and θ̄ are complex Grassmann variables.

Waves. As mentioned above, the topplings in the ASM can be done in any order. One very useful way to relax is by a succession of waves of topplings. Let the site where the grain is added be O. If after the addition O is still stable, the relaxation process is over. If it is unstable, we relax it as follows: topple O once, and then allow the avalanche to proceed by relaxing any unstable sites, without however toppling O again. This constitutes the first wave of toppling. If at the end site O is still unstable, we allow it to topple once more and let the other sites relax until all sites other than O are stable. This is the second wave of toppling. Repeat as needed. Eventually, site O is no longer unstable at the end of a wave, and the relaxation process stops.
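The driving-and-relaxation dynamics described above can be sketched in a few lines (a minimal illustration; the function and variable names are ours):

```python
import random
import numpy as np

L = 32
h = np.random.randint(1, 5, size=(L, L))  # stable heights are 1..4

def relax(h):
    """Topple every unstable site (h > 4) until the configuration is stable;
    grains pushed across the boundary leave the system. Returns the number
    of topplings (the avalanche size)."""
    n_top = 0
    unstable = [tuple(ij) for ij in np.argwhere(h > 4)]
    while unstable:
        i, j = unstable.pop()
        while h[i, j] > 4:
            h[i, j] -= 4
            n_top += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:  # otherwise the grain is lost
                    h[ni, nj] += 1
                    if h[ni, nj] > 4:
                        unstable.append((ni, nj))
    return n_top

# one driving step: add a grain at a random site and relax
i, j = random.randrange(L), random.randrange(L)
h[i, j] += 1
relax(h)
assert h.min() >= 1 and h.max() <= 4
```

Waves would be obtained from the same loop by exempting the seeded site from repeated toppling within a sweep.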
It is easy to see that in any wave the set of toppled sites forms a connected cluster with no voids (untoppled sites fully surrounded by toppled sites), and no site topples more than once in one wave. (This would not be true if the graph had greedy sites.)

Random Anisotropic ASM. One can add random anisotropy to the ASM in the following sense. Choose a random number r = ±1. When the height of a site becomes more than 4n, where n is some integer, 4n grains are transferred to the neighboring sites: n − r grains to the 'up' site, n − r grains to the 'down' site, n + r grains to the 'left' site and n + r grains to the 'right' site. In this respect we define ε = 1/n; ε = 1 and ε = 0 correspond to the Manna and BTW models respectively. When ε = 0 or ε = 1, the observables have robust power-law behavior up to a characteristic length (the correlation length ξ) above which the correlation functions fall off rapidly. In a critical model this length is of the order of the lattice size, and when the system size goes to infinity it diverges. The correlation length can be defined as the loop linear size (r_cut) above which the logarithm of the distribution function of the gyration radius falls off more rapidly than linearly. When 0 < ε < 1, it is seen that there is another characteristic length ξ_2 at which the behavior of the statistical functions smoothly changes.

III. NUMERICAL RESULTS; STATISTICS OF PERTURBED ASM

In this section we numerically study the statistics of waves and avalanches of the ASM to test their dependence on ε. For this, we have simulated over 100000 independent samples and obtained the domain walls between the toppled and untoppled sites on the honeycomb lattice of size 2048 × 2048. For the simulation, we considered the square lattice. The lattice and toppling rule are shown in FIG [1]. Consider the wave frontiers of toppled sites of the ASM. We first study the statistics of the gyration radius of loops.
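The random-anisotropy rule above amounts to the following redistribution per toppling (a sketch for illustration; for ε = 1/n the bulk toppling conserves grains):

```python
import random

def anisotropic_toppling(n, rng=random):
    """One toppling of the epsilon = 1/n rule: 4n grains leave the site,
    with n - r grains sent up and down and n + r grains sent left and
    right, where r = +/-1 is chosen at random."""
    r = rng.choice((-1, 1))
    return {'up': n - r, 'down': n - r, 'left': n + r, 'right': n + r}

out = anisotropic_toppling(3)  # epsilon = 1/3
assert sum(out.values()) == 4 * 3  # no dissipation in the bulk
# epsilon = 1 (n = 1): two opposite directions receive 2 grains each and
# the other two receive none, the stochastic Manna-like limit of the rule.
assert sorted(anisotropic_toppling(1).values()) == [0, 0, 2, 2]
```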
In Fig[2] the log-log plot of the distribution N(r) of the gyration radius versus the gyration radius r is sketched. For the BTW model (ε = 0), up to a length denoted r_cut^(2), it is seen that N(r) ∼ r^(−τ_r) with τ_r ≈ 1 ± 0.05. For lengths larger than r_cut^(2) this distribution function falls off rapidly. r_cut^(2) may be interpreted as the correlation length of the model, and in the critical case it is of the order of the lattice size. By increasing ε, another length scale is observed at which the behavior changes; we call this length r_cut^(1). As shown in the figure, by increasing ε this point is pushed towards the origin. This point contains some interesting information. Going from small lengths to large scales, we see two important lengths. On small scales the curve is locally like the BTW one, up to the point r_cut^(1). This suggests that the ultraviolet (UV) properties of the perturbed model are given by the BTW model. In the vicinity of r_cut^(1) the infrared (large-scale) limit is reached, and the behavior of the graph (more precisely, its slope) smoothly changes to the new one. This and other figures (shown later) indicate that in the new regime the properties of the model are best fitted by the Manna model (ε = 1 in the figure). So it seems that the infrared (IR) limit of these curves is given by the Manna model, and a crossover takes place in between. The slopes of the graphs in the IR region are slightly different from the Manna model's due to finite-size effects. In fact, by enlarging the size of the lattice we saw that the slopes in this region get closer to the Manna model's (not shown here). The inner graph of this figure shows τ_r, i.e. the slope of the graph on small scales, with respect to ε. The horizontal axis is on a logarithmic scale. As is seen, a smooth change of behavior takes place as we go from the BTW to the Manna model.
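The exponents quoted above come from straight-line fits on log-log histograms. A self-contained sketch of that procedure, using synthetic Pareto samples with a known exponent in place of the measured gyration radii:

```python
import numpy as np

rng = np.random.default_rng(1)

tau_true = 2.0
# Pareto samples shifted to r >= 1 have pdf proportional to r**(-tau_true)
r = rng.pareto(tau_true - 1.0, size=200_000) + 1.0

bins = np.logspace(0.0, 2.0, 25)
counts, edges = np.histogram(r, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])       # geometric bin centres
mask = counts > 0
slope, _ = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
tau_fit = -slope
assert abs(tau_fit - tau_true) < 0.3
```

In the perturbed model the same fit restricted to r below r_cut^(1) or above it would give the UV and IR exponents respectively.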
In the vicinity of ε = 0 and ε = 1 the dominant behavior is the BTW's and the Manna's respectively, and a jump takes place in between, showing the mentioned cross-over. The same feature is seen in FIG [3], in which the distribution function N(l) of the loop length is shown versus the loop length l. In this case, as ε takes non-zero values, the deviation from the power law (governing the first part of the graph) takes place at a characteristic length depending on the value of ε, and this length decreases as ε increases. For ε = 1 the unique power-law behavior is retrieved, N(l) ∼ l^(−τ_l) with τ_l^(ε=1) = 1.8 ± 0.1. For 0 < ε < 1, the IR and UV properties of the curves look like the ε = 1 and ε = 0 cases respectively. A smooth transition of τ_l from the BTW (τ_l^0 = 1) to the Manna (τ_l^1 = 1.8) value is seen in the inner graph of this figure. Similar calculations were done for the distributions of loop masses, the number of waves in the avalanches, and loop sizes, and we observed the same results as above. By enlarging the size of the lattice, we saw that this graph is shifted entirely to the left, showing that on enlarging the lattice size indefinitely the dominant behavior would be the Manna's. This tells us that this perturbation is relevant for the BTW fixed point and irrelevant for the Manna model. For scales much smaller than the correlation length, one expects the system to have a well defined fractal dimension. So we computed the fractal dimension of waves, defined via l ∼ r^(D_f), and found that for finite ε there is a slight deviation from the critical fractal dimension. (Note that for lengths comparable with or larger than the correlation length the fractal dimension does not make sense, but to see the behavior of models with ε near the Manna model we calculated the fractal dimension for all ε.) The interesting feature, as seen in FIG [4], is that this quantity scales with the logarithm of ε in the cross-over region.

IV.
DETERMINATION OF THE PERTURBING FIELD

To determine the weight of the perturbing field we use two methods. First, we try to determine this weight directly by using finite-size effects. The second method is the determination of the Green function, which depends directly on ε.

Green function method. In this section, using the Green function of the perturbed model, we present a method to compute the conformal weight of the perturbing field. As proved by Dhar 7, the Green function of the ASM is defined as follows: suppose that site i is toppled by adding a grain. The Green function G(|i − j|) is the number of topplings that occur at site j (up to a normalization factor), and it is proved that the form of this function in 2D is G(r) ∼ Log(r), which mathematically is the solution of the equation (1/r) ∂_r [r ∂_r] G(r, r′) = δ(r − r′). When a critical model is perturbed, its action is modified. This modification may be of the form: in which S* is the action of the conformal field theory corresponding to the critical model (in this case Eq [1]), S is the perturbed action and the λ_k are the coupling constants of the perturbations. One then expects that the differential equation for the resulting Green function is also modified. In our case we can first search for the equation that the Green function satisfies; then we can read off the conformal weight of the perturbing field. In Fig[5] the results of the simulations are sketched. In this figure the horizontal axis is on a logarithmic scale. We see that for low ε the resulting graph is logarithmic (up to a characteristic point R_cut, which is a finite-size effect), as expected. But when ε increases, the graphs do not show this behavior. In the inner graph we have shown the log-log graphs and see that for ε = 1 the graph is a power law, G(r) ∼ r^(−x_l) with x_l ≈ 1. There is another way, i.e. a scaling argument, to study the Green function in the IR regime.
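As a quick symbolic check of the radial equation quoted above, G(r) ∼ log r is indeed annihilated by (1/r) ∂_r [r ∂_r] away from the source at r = 0:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
G = sp.log(r)  # the 2D Green function grows logarithmically
radial_laplacian = sp.simplify(sp.diff(r * sp.diff(G, r), r) / r)
assert radial_laplacian == 0
```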
It is known that for a sandpile, where D is the dimension of the space (here 2), and where s is the area of the loop and r is its gyration radius; here d_f^(2) = 2. In the case ε = 0 (τ_r ≈ 1) it is obvious that x_l = 0 and the answer is logarithmic. For the case ε = 1 we have (we calculated above that τ_r^(ε=1) ≈ 2) x_l = 1, which confirms the calculation and the simulation. For intermediate values of ε we see two different behaviors, reflecting the IR and UV properties. For lengths much smaller than the correlation length the graphs show features similar to the BTW model, as expected; on large scales (lengths much larger than the correlation length) they behave like the Manna model. This confirms our claim that the IR properties of the perturbed models are best fitted by the Manna model. Now we search for the differential equation which best yields the mentioned properties. On large scales the solutions of this equation should be like the Manna model's, i.e. G(r) ∼ r^(−1±0.1), and on small lengths they should have logarithmic form. We have observed that the best fit can be achieved by the following differential equation: In this equation, g(ε) is some function of ε. The numerical values of this function are presented in Table [II]. For small values of ε, we have g(ε) ∼ ε^(0.87±0.04); we will use this form in the next subsection. Eq. [4] properly yields the IR and UV properties of the Green function, and for intermediate lengths it also fits the simulation. For example, in FIG[6] we show the simulated and the calculated (from Eq [4] with g(ε = 0.1) = 0.25) results, which fit well. From the extra term (1/r) ∂_r [r G(r)] we are led to the very important conclusion that the conformal weight of the perturbing field is 2∆ = 1. This statement will be further tested in the next subsection, where we obtain ∆ from RG arguments.

RG equation. In the previous subsection we obtained the scaling of g(ε) versus ε.
In this subsection, using this construction and RG arguments, we check the validity of this claim. We use the equation that governs the off-critical conformal field theory (in the vicinity of the fixed point). Suppose that the action of the theory is: It can easily be proved that the RG equation for this action is: where λ_k = g_k a^(2(1−∆_k)), ∆_k is the conformal weight of the perturbing field ϕ_k, b is the length scale, a is the lattice constant and C_{i,j,k} is the fusion coefficient of the fields ϕ_i and ϕ_j, i.e. ϕ_i ϕ_j = Σ_k C_{i,j,k} ϕ_k. In our case we have a single coupling constant g(ε) and we can set g(ε)² ≈ 0. So we have (with g(ε) ∼ ε^n, n = 0.87 ± 0.04 for small ε): where ∆ is the weight of the perturbing field. To use the RG equation, we can set a → a(1 + δb), or equivalently let the lattice size L → L^(1+δb) ≡ L′, so that the correlation length of the system scales as ξ → ξ^(1+δb). To use Eq. [6], it is sufficient to repeat the simulation for L′ and see which two cases match. In Fig[7] we present the result for L = 2048, δb = 0.5 and ε_2 = 1/3000. We see that the best fit is obtained for ε_1 = 1/1500. Then we have: where ε̄ is the averaged value of ε in this interval. We have performed this test on many such samples, with various lengths and values of ε, and found the best values of ∆ to be 0.9 ≤ 2∆ ≤ 1.05, in agreement with the result obtained in the previous subsection.

V. SCHRAMM-LOEWNER EVOLUTION

The critical behaviour of two-dimensional statistical models can be described by their geometrical features. In fact, instead of studying local observables, we can focus on the interfaces of two-dimensional models. These domain walls are non-intersecting curves which directly reflect the status of the system in question and are supposed to have two properties: conformal invariance and the domain Markov property 29. Schramm-Loewner Evolution is the candidate to analyze these random curves by classifying them into one-parameter classes (SLE_κ).
Let us denote the upper half plane by H, the SLE trace by γ_t, and the hull by K_t = {z ∈ H : τ_z ≤ t}. SLE_κ is a growth process defined via conformal maps which are solutions of Loewner's equation: where the initial condition is g_0(z) = z and ξ_t = √κ B_t is a real-valued driving function. For fixed z, g_t(z) is well defined up to the time τ_z at which g_t(z) = ξ_t. The complement H_t := H\K_t is simply connected, and g_t(z) is the unique conformal mapping H_t → H with g_t(z) = z + 2t/z + O(1/z²) as z → ∞, known as the hydrodynamical normalization. One can retrieve the SLE trace by γ_t = lim_{ε↓0} g_t^(−1)(ξ_t + iε). There are phases for these curves: for 2 ≤ κ ≤ 4 the trace is non-self-intersecting and does not hit the real axis, and K_t = γ_t; this is called the "dilute phase". For 4 ≤ κ ≤ 8 the trace touches itself and the real axis, so that a typical point is surely swallowed as t → ∞; this phase is called the "dense phase". However, there is an important property: the frontier of K_t, i.e. the boundary of H_t minus any portions of the real axis, is a simple curve and is locally SLE_κ̃ with κ̃ = 16/κ. This duality links models in the dilute phase to models in the dense phase and vice versa, e.g. the ASM (κ = 2) to the Uniform Spanning Tree (UST) (κ = 8). The main question, "what is the relation between SLE and CFT", was answered by M. Bauer and D. Bernard 24. They showed that the boundary condition changing (bcc) operator in SLE corresponds to a degenerate field with a vanishing descendant at level two and conformal weight h_{1;2} = (6 − κ)/(2κ) in a CFT with central charge c_κ = (6 − κ)(3κ − 8)/(2κ).

SLE out of criticality. Now consider the system out of criticality. In this case the conformal invariance of the system is broken, and the system correlation length ζ will have a crucial role in the statistical properties of the random curves.
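A minimal Euler discretisation of the Loewner equation above, dg_t(z)/dt = 2/(g_t(z) − ξ_t) with ξ_t = √κ B_t (our own sketch; the swallowing threshold is a numerical heuristic):

```python
import math
import random

def evolve(z, kappa, t_max, dt=1e-4, seed=0):
    """Track g_t(z) under dg/dt = 2/(g - xi_t), xi_t = sqrt(kappa) B_t.
    Returns (t, g_t(z)), or (tau_z, None) if z is numerically swallowed."""
    rng = random.Random(seed)
    g, xi, t = complex(z), 0.0, 0.0
    while t < t_max:
        g += dt * 2.0 / (g - xi)
        xi += math.sqrt(kappa * dt) * rng.gauss(0.0, 1.0)
        t += dt
        if abs(g - xi) < 1e-3:
            return t, None  # swallowing time tau_z reached
    return t, g

t, g = evolve(1.0 + 1.0j, kappa=2.0, t_max=0.1)
# Im g_t(z) can only decrease: Im dg/dt = -2 Im g / |g - xi|^2 < 0
assert g is None or 0.0 < g.imag < 1.0
```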
So if we apply the Loewner uniformizing map, the resulting domains are no longer equivalent, due to the absence of conformal invariance. At scales much smaller than the correlation length, i.e. in the ultraviolet regime, the deviation from criticality is small, and the interface should look locally like the critical interface. This means that over short time periods the off-critical driving function ξ_t^ζ should not be much different from its critical counterpart. On the other hand, at large scales (with respect to ζ), i.e. in the infrared regime, the interface may look like another SLE with a new κ_ir. The reason is that when we integrate out the small distances to reach the large-distance properties, the regions formed by the SLE trace on lengths smaller than the correlation length (with diffusivity κ_UV) may be seen as points crossed by the SLE trace with the new diffusivity κ_ir corresponding to the new infrared fixed point. An example is the Ising model: at criticality κ = 3, but if the temperature is raised above the critical point, renormalization group arguments indicate that at large scales the interface looks like the interface at infinite temperature, i.e. percolation, with κ_ir = 6 27. Now consider a curve that starts from the origin and ends at a point x_∞ on the real axis. Then, by using the map φ(z) = x_∞ z/(x_∞ − z), one can send the end point of the curve to infinity. In this respect, the function h_t = φ ∘ g_t ∘ φ^(−1) describes chordal SLE. It is easy to show that the equation governing h_t is: This mapping is not hydrodynamically normalized, i.e. it does not fix infinity; instead it fixes the ending point of the curve.

VI. NUMERICAL RESULTS; SCHRAMM-LOEWNER EVOLUTION

In this section we present some numerical results obtained by applying SLE to the critical and off-critical abelian sandpile model. The frontiers of avalanches form a set of loops of discrete points.
Since in chordal SLE the curve goes from a point on the real axis to a point at infinity, while here we have loops, we use a trick to generate the desired curves. Having these loops, one can cut them with a straight line to generate interface curves starting from the origin and ending at some point x_∞ on the real axis. Then, using the map φ(z) = x_∞ z/(x_∞ − z), which fixes the origin and sends the ending point x_∞ to infinity, we obtain a curve in the upper half plane. Then, by applying the chordal SLE formalism and the proper uniformizing map step by step, one can extract ξ_t for these discrete curves. The essential assumption is that ξ_t is piecewise constant (constant on each interval); then it can easily be proved that the mapping that can be used to uniformize the curves is 26: where This mapping uniformizes a semicircle extending from η to x_∞, and by demanding that this semicircle contain z_1 one obtains: FIG [8] contains the graph of ⟨ξ_t²⟩ − ⟨ξ_t⟩² versus t for the critical case. As is explicit in the graph, ξ_t has the expected behaviour: ⟨ξ_t⟩ ≈ 0 and ⟨ξ_t²⟩ − ⟨ξ_t⟩² = κt with κ = 2.0 ± 0.1. We note that the initial portion (0 < t < 1000) of the graph is different from the rest and has been ignored. The reason is that for these times the effect of the finite size (lattice constant) on the curve growth is important, as the size of the curve is comparable with it; so in the remainder of the paper we ignore this portion. We are, however, not concerned about the effect of the system size, because it takes a very long time for such a fractal to reach a linear size of the order of the system size. For the off-critical model, as stated in Sec. V, we have two important scale limits. On small scales (much smaller than r_cut^(1)) the interface should look locally like the critical interface at the UV fixed point. At large scales, however, the interface may behave like a SLE corresponding to the IR fixed point, with κ_ir.
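The step-by-step uniformization with piecewise-constant ξ_t can be sketched with vertical-slit maps; this is an illustrative variant, not necessarily the exact discrete map of ref. 26:

```python
import cmath

def driving_function(points):
    """Unzip a discrete curve in the upper half plane point by point,
    recording the (piecewise-constant) driving function.
    Returns parallel lists of Loewner times t_k and values xi_k."""
    zs = [complex(p) for p in points]
    ts, xis, t = [], [], 0.0
    while zs:
        z0 = zs.pop(0)
        xi, dt = z0.real, z0.imag**2 / 4.0  # capacity of the vertical slit
        t += dt
        ts.append(t)
        xis.append(xi)
        mapped = []
        for z in zs:
            # slit map g(z) = xi + sqrt((z - xi)^2 + 4 dt) sends the tip to xi
            s = cmath.sqrt((z - xi)**2 + 4.0 * dt)
            if s.imag < 0:      # choose the branch mapping into H
                s = -s
            mapped.append(xi + s)
        zs = mapped
    return ts, xis

# sanity check: a straight vertical segment has constant driving function
ts, xis = driving_function([0.5 + 0.1j * k for k in range(1, 20)])
assert all(abs(x - 0.5) < 1e-9 for x in xis)
```

The diffusivity κ is then read off from the growth of the variance of ξ_t over an ensemble of such curves.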
We consider the perturbed ASM as described above and analyze the resulting driving function. The important quantity which can be extracted from the driving function is κ, which in the critical case is obtained from the relation ⟨ξ_t²⟩ − ⟨ξ_t⟩² = κt. In the off-critical case we may observe two slopes of the graph: one in the UV region (where the resulting κ should not be much different from the critical one) and another in the IR region. In between, the curve may have complex behavior. In FIG[9], ⟨ξ_t²⟩ − ⟨ξ_t⟩² versus t is shown for several values of ε. The slope of the graph for the critical case (ε = 0) is κ = 1.95 ± 0.1, in agreement with other numerical results 25. When ε becomes non-zero, the graphs do not show simple linear behavior. As seen from this figure, there are two transition points. The first is the earliest time at which the graph separates from the critical one, i.e. the first transition point from the UV to the cross-over region (shown as T_1 in FIG[9]). The next (T_2) is the transition from the cross-over region to the IR region; in this region the slope is the same as the Manna model's, i.e. ε = 1. These transition points depend on ε and increase as ε decreases; in the case ε = 0 they become infinite. We can investigate the behaviour of the curves in the UV and IR regions, well separated from the crossover region. The result is that the curves are linear in each region, with a common slope within each region. This feature for the IR regime is shown in Fig[9] and magnified in FIG [10], where it is seen that all the curves have the same slope κ_ir = 1.65 ± 0.1.

VII. CONCLUSION

In this paper we analyzed the statistics of wave and avalanche frontiers of the continuous random anisotropic ASM. The BTW model corresponds to the perturbation parameter ε = 0 and the Manna model to ε = 1. It has been shown that a cross-over takes place between these two models.
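The slope extraction used above, κ from ⟨ξ_t²⟩ − ⟨ξ_t⟩² = κt, can be checked on synthetic Brownian driving functions with known κ (our own sketch, standing in for the driving functions extracted from the avalanche frontiers):

```python
import numpy as np

rng = np.random.default_rng(0)

kappa, dt, n_steps, n_samples = 2.0, 1e-3, 1000, 5000
# ensemble of driving functions xi_t = sqrt(kappa) * B_t
steps = rng.normal(0.0, np.sqrt(kappa * dt), size=(n_samples, n_steps))
xi = np.cumsum(steps, axis=1)

t = dt * np.arange(1, n_steps + 1)
variance = xi.var(axis=0)       # <xi_t^2> - <xi_t>^2 across the ensemble
kappa_est = np.polyfit(t, variance, 1)[0]
assert abs(kappa_est - kappa) < 0.2
```

In the off-critical case the same fit applied separately to the early-time and late-time portions of the variance curve would give κ_UV and κ_ir.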
We studied the behavior of some statistical observables and determined the conformal weight of the perturbing field by two methods, the Green function and RG arguments; both confirm the weight x = 1. Using SLE on the geometric curves of the perturbed model, we showed that there are two important length scales on which the corresponding SLE parameter κ differs; these scales are determined with respect to the correlation length. Using SLE we found numerically that, for scales much smaller than the correlation length, the curves have the same properties as the UV critical model (BTW), with nearly the same κ. For scales much larger than it, we found that the curves acquire the new value κ_ir ≈ 1.65.
Short term micronutrient-antioxidant supplementation has no impact on a serological marker of gastric atrophy in Zambian adults: retrospective analysis of a randomised controlled trial

Background

Gastric cancer is a major contributor to cancer deaths in Zambia but, as elsewhere, no preventive strategies have been identified. We set out to investigate the possibility of reducing gastric atrophy, a premalignant lesion, using micronutrient-antioxidant supplementation.

Methods

We analysed 215 archival samples from a randomised controlled trial of micronutrient-antioxidant supplementation carried out from 2003 to 2006. Participants were randomised to receive either the supplement or placebo and had been taking the allocated intervention for a mean of 18 (range 14–27) months when the samples used in this study were taken. We used a low pepsinogen 1 to 2 (PEP 1:2) ratio as a surrogate marker of gastric atrophy. A PEP 1:2 ratio of less than three was considered low. HIV serology, age, nutritional status, smoking, alcohol intake and gastric pH were also analysed. Ethical approval was obtained from the University of Zambia Biomedical Research Ethics Committee (011-04-12). The randomized trial was registered (ISRCTN31173864).

Results

The overall prevalence of a low PEP 1:2 ratio was 15/215 (7%) and it did not differ between the placebo (8/103, 7.8%) and micronutrient groups (7/112, 6.3%; HR 1.24; 95% CI 0.47-3.3; P = 0.79). The presence of a low PEP 1:2 ratio was not influenced by HIV infection (HR 1.07; 95% CI 0.37-3.2; P = 0.89) or nutritional status, but it inversely correlated with gastric pH (Spearman's rho = -0.34; P = 0.0001). Age above 40 years was associated with atrophy, but neither alcohol nor smoking had any influence.

Conclusion

Short term micronutrient supplementation does not have any impact on the PEP 1:2 ratio, a serological marker of gastric atrophy. The PEP 1:2 ratio inversely correlates with gastric pH.
Background Gastric cancer is a major contributor to cancer deaths in Zambia but, as elsewhere, no appreciable preventive strategies have been identified. Low intake of fruits and vegetables has been associated with increased risk of gastric adenocarcinoma in the United States of America [1]. This appears to be also true for patients in Zambia, as we reported in a case control study that low fruit intake in the diet was associated with gastric cancer [2]. Furthermore, isoprostane excretion was increased in cancer cases, suggesting that the association with low fruit intake might be mediated via antioxidant status. While cancer is associated with poor antioxidant status, it is less well established if the same applies to premalignant lesions. The sequence of gastric premalignant lesions was first defined by Correa in 1975 [3]. Premalignant lesions include atrophy, intestinal metaplasia and dysplasia [4]. These are believed to be present for several years before undergoing malignant transformation. The best prospect for prevention of cancer would be to halt the progress or reverse these lesions, using an intervention which addresses some fundamental aspect of carcinogenesis. Repeated endoscopy would be a good way to evaluate presence or reversal of premalignant lesions, but it is invasive and not without sampling error. Low pepsinogen 1 to 2 ratio (PEP 1:2) has been shown to correlate very well with gastric atrophy [5][6][7][8] and is thus an attractive alternative for determining the presence of gastric atrophy. We have previously shown that the prevalence of serologically diagnosed gastric atrophy among patients with normal upper gastrointestinal endoscopies was as high as 28% (26/94) with 23% (6/26) of these being less than 45 years old [9]. There is conflicting evidence regarding the benefit of micronutrient supplementation in premalignant gastric lesions. 
Most studies have failed to sufficiently prove any beneficial effect of this supplementation on the progression to gastric cancer [10][11][12]. Vitamin C has no effect on the reduction of infections in patients with gastric atrophy [13]. However, other studies have shown that there is significant regression of gastric atrophy after anti-oxidant supplementation [1,14]. The findings in the large trial in Linxian, China, showed that giving betacarotene, vitamin E and selenium supplements for 5 years had an effect on the reduction of gastric cancer risk [15]. A study done in mice showed that vitamin C supplementation does not protect against Helicobacter pylori (H. pylori) gastritis or gastric premalignancy [16]. Beno et al. [17] reported that the serum levels of vitamins A, C and E, selenium, zinc and copper were low in patients with premalignant gastric lesions in Slovakia. These studies mainly investigated the effects of vitamins C, E, A and selenium, and did not include other micronutrients. These studies also used histological changes to assess gastric atrophy. We set out to test the hypothesis that micronutrient and antioxidant supplementation could lead to the regression of gastric atrophy determined by low PEP 1:2 ratios in a Zambian population. Methods This was a retrospective analysis of samples from a randomised, placebo-controlled trial of multiple micronutrient and antioxidant supplementation which was conducted in Misisi Township in Lusaka, Zambia between 2003 and 2006 [18]. Misisi is one of the poorest and most densely populated townships in Lusaka, with a prevalence of H. pylori infection of 81% [19]. For this study, we used samples collected in 2005 from volunteers who had been in the trial for not less than 12 months. 
As the allocation was random, it was assumed that at the start of the study the proportion of subjects with gastric atrophy would be the same in both groups, and thus any difference noted at the end of the study would be attributable to the supplementation. Ethical approval was obtained from the University of Zambia Biomedical Research Ethics Committee, ref 011-04-12. The randomized trial was registered as ISRCTN31173864. Conduct of the trial during which samples were collected During the controlled trial, all consenting residents above the age of 18 years, living in the designated area were eligible to participate in the study. There were no exclusion criteria. In total, 500 residents volunteered to participate in the study, half of whom were randomly allocated to receiving either micronutrient supplementation or placebo. The multiple micronutrient and the placebo tablets were both prepared and supplied by Dnask Farmaceutic Industri (Ballerup, Denmark). The supplement tablets contained multiple micronutrients at around 1.5-2 times Recommended Nutrient Intakes (Table 1). Details of the randomization have been previously reported [18]. Blood samples were collected from these volunteers at the start of the study and annually until the end of the study, and stored at −80°C in a secure laboratory. Compliance was measured by counting unused pills and was estimated at 95% [18]. Nutritional assessment at baseline included height, weight (to determine the body mass index), and mid upper arm circumference; fat mass and lean mass were measured by impedance (Body Stat 1500, Douglas, Isle of Man, UK). Gastric pH was measured in fasting participants by endoscopic aspiration of gastric juice, as previously described [20]. Serology To determine the presence of severe gastric atrophy, PEP 1:2 ratios were determined using ELISA kits for pepsinogen 1 and 2 (Biohit, Helsinki, Finland) and the manufacturer's instructions were followed. 
A ratio less than 3.0 was used to signify the presence of severe gastric atrophy. VK and MC were completely blinded to the allocations while running the samples. The coding of these samples was only broken by PK after completion of data entry. Data analysis Statistical analysis was carried out using STATA 12. Results A total of 226 serum samples collected in 2005 were available for analysis. Eleven samples were, however, left out of the analysis as three were not clearly labelled and eight were collected after crossover from micronutrient to placebo in the original trial [18], leaving a total of 215 samples. The flow of collected samples used in this study was as outlined in Figure 1. Subjects from whom these samples were collected had therefore received either micronutrient supplementation for a median of 18 months (IQR 16-19) or placebo for 19 months (IQR 18-20). Demographic characteristics and potential confounders are shown in Table 2. Gastric atrophy (low PEP 1:2 ratio) The overall prevalence of gastric atrophy in these healthy volunteers was 15/215 (7%). There was no significant difference in the prevalence of low PEP 1:2 ratios between the participants on micronutrient supplementation, 8/103 (7.8%) and those on placebo, 7/112 (6.3%) (Hazard Ratio 1.24; 95% CI 0.47-3.3; P = 0.79). The mean ratio among the supplementation group was 5.8 (IQR 4.3-6.9), while that among the placebo group was 5.9 (IQR 4.5-8.0). Gastric pH Measurements of pH in gastric aspirates taken while fasting were available on 121 participants. Gastric pH and low PEP 1:2 ratios were inversely correlated (Spearman's rho = −0.34; P = 0.0001). Of the 60 participants randomised to placebo for whom pH measurements were available, 19 (32%) had a fasting gastric pH of more than 4 (which was taken as significant hypochlorhydria) compared to 26 of 61 (43%) participants allocated to micronutrient-antioxidant supplementation (HR 1.35, CI 0.84-2.16, P = 0.26). 
Nutritional status Parameters of nutritional status analysed in the main trial were considered in this analysis. There was no correlation between gastric atrophy and nutritional status (Table 3). Multivariate analysis A multivariate logistic regression was carried out including several factors to try and ascertain any relation to severe gastric atrophy (Table 4). Alcohol and smoking did not show any influence on gastric atrophy. The amount of alcohol being consumed was also taken into consideration, categorised as daily versus non-daily intake. Even after taking this into consideration, there was still no significant difference between the atrophy subjects and the ones without atrophy. The only influence found to be significant was age, as subjects above the age of 40 years were found to be more at risk of having gastric atrophy. Discussion Gastric cancer remains a major cause of cancer mortality in Zambia [22]. We set out to investigate the possibility that micronutrient supplementation could reduce the occurrence of low PEP 1:2 ratio, which is a surrogate marker of gastric atrophy. We found that an average of 18 months of micronutrient supplementation made no impact on its occurrence among healthy adult Zambians. HIV infection was also found not to have any influence on atrophy. Gastric atrophy was more likely among participants above the age of 40 years, while there was an inverse correlation with fasting pH, which is consistent with our previous findings [20]. Alcohol and smoking also did not show any influence on gastric atrophy. The study participants were randomly allocated to either supplementation or placebo in the main trial. We made a reasonable assumption that, since the participants were randomly allocated at enrolment, the prevalence of atrophy was not significantly different in the two groups. It is well understood that any differences in groups that have been randomly allocated are merely due to chance [23]. 
This is a premise on which randomised controlled trials are based. There was essentially no difference in the basic demographics of the two groups of participants used in this analysis. However, on average participants allocated to the placebo group were followed up for one month longer than the supplementation group. This was statistically significant (P = 0.0001). If the supplementation had any effect on development of atrophy, this difference would still have been seen despite the longer follow-up in the placebo group. It seems unlikely that this difference in duration of follow-up (which is determined by the date the sample was collected) could explain a lack of apparent benefit. On the other hand, 18 months might be too short to demonstrate the effect of these supplementations and it is possible that a longer follow-up (5-10 years) might yield different results. Gastroscopies were done on some of the participants in the main trial to determine the fasting gastric pH. Unfortunately, no gastric biopsies were obtained as this was not part of that study protocol. We therefore did not have any data on the histological diagnosis of gastric atrophy or the degree of atrophic changes. Analysis of PEP 1:2 ratios as continuous variables did not yield any statistically significant difference between the two groups. We know from our previous work that the presence of H. pylori antibodies is very common in this population [19]. H. pylori infection has an influence on the presence and persistence of gastric atrophy but it was not tested in the main trial and we did not have the appropriate samples to test for active infection. Therefore, the presence of H. pylori infection was not checked in this retrospective analysis of stored serum samples. This could not have affected the results as none of the patients received treatment to eradicate the infection during the study follow-up. 
The adherence (pill count) of the subjects was at more than 95%, as reported in the main trial [18]. Participants were drawn from an impoverished community in Zambia where micronutrient deficiency is common. If supplementation had an effect, then it would be more obvious in such a population than in a generally well nourished community. Our study was not designed to assess the nutritional impact on intestinal metaplasia, which is another step in the carcinogenetic pathway. The population prevalence of gastric atrophy in Zambia is unknown, but in a hospital based case-control study [9], we found that up to 28% of healthy controls had serological evidence of severe gastric atrophy. In this study, we used serum samples from healthy community volunteers and found that the prevalence of atrophy was much lower at 7%. However, in the current study the mean age was 37 years whereas it was 55 years in the case-control study. We have demonstrated that gastric atrophy tends to be more common with advancing age, which could explain the different prevalence. On the other hand, these findings might reflect a higher frequency of incipient gastroduodenal pathology in hospital based controls. HIV positive patients have in the past been reported to have higher gastric pH than their HIV negative counterparts [20]. Neither the aetiology nor the consequence of this finding is clear. In addition, HIV infection was not found to be associated with gastric cancer [9]. In this study, we found no correlation between HIV infection and gastric atrophy, suggesting that the hypochlorhydria seen in HIV infection cannot be explained by the loss of normal gastric mucosa cells. More work still needs to be done to find out why HIV infected patients have a high gastric pH, and what its consequence is, as there seems to be no connection with gastric cancer or its precursor lesions [9]. We were also interested to see if alcohol and smoking had any influence on gastric atrophy. 
It remains unclear at which exact stage (if at all) of the carcinogenic pathway alcohol and smoking influence gastric carcinogenesis. We found that these two factors have no influence on gastric atrophy. There is a shortage of data on gastrointestinal malignancies in Africa, and further work is needed to determine if there might be any scope for nutritional interventions to reduce risk.
2016-05-12T22:15:10.714Z
2014-03-25T00:00:00.000
{ "year": 2014, "sha1": "279ff6fba6c2710747d6bd837e1b6934d5521fae", "oa_license": "CCBY", "oa_url": "https://bmcgastroenterol.biomedcentral.com/track/pdf/10.1186/1471-230X-14-52", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b6fd0b1a3298c2ecb9be09c6482e70503f343a35", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234873937
pes2o/s2orc
v3-fos-license
Tuning the Optical Properties of WO 3 Films Exhibiting a Zigzag Columnar Microstructure Tungsten oxide WO 3 thin films are deposited by DC reactive magnetron sputtering. The Reactive Gas Pulsing Process (RGPP) associated with the GLancing Angle Deposition method (GLAD) are implemented to produce zigzag columnar structures. The oxygen injection time (t ON time) and the pulsing period are kept constant. Three tilt angles α are used: 75, 80, and 85 ◦ and the number of zigzags N is progressively changed from N = 0.5, 1, 2, 4, 8 to 16. For each film, refractive index, extinction coefficient, and absorption coefficient are calculated from optical transmission spectra of the films measured in the visible region from wavelength values only. Absorption and extinction coefficients monotonically drop as the number of zigzags increases. Refractive indices are the lowest for the most grazing tilt angle α = 85 ◦ . The highest refractive index is nevertheless obtained for a number of zigzags close to four. This optimized optical property is directly correlated to changes of the microstructure, especially a porous architecture, which is favored for high tilt angles, and tunable as a function of the number of zigzags. Introduction Structuring of solid materials at the micro- and nanoscale has appeared as a key strategy to generate novel materials properties. Based on the two major approaches to achieve nanoscale engineering, the top-down and bottom-up routes have also generated technological and scientific challenges, especially in thin films science [1,2]. This structuring of thin films may become a complex task for ceramic materials produced by bottom-up methods since they may require reactive processes [3][4][5]. 
Accordingly, resulting film properties do not only depend on composition (many chemical and physical characteristics of metal oxide thin films are closely linked to their chemical composition, especially the oxygen-to-metal concentrations ratio), but the role of structure becomes a fundamental parameter particularly for optical materials [6,7]. It is well known that the films structure at the sub-micrometric scale can also influence their performances [8]. As a result, design and growth control of nanostructures turn into a fundamental issue in order to tune the optical properties by playing on architectural features of ceramic thin films [9]. Among ceramic compounds, transition metal oxides represent an exciting class of materials since they exhibit a wide range of physical and chemical behaviors [10,11]. Of interest, tungsten trioxide (WO 3 ) thin films have been thoroughly studied due to their strong potential and concrete applications as active layers for gas sensors [12], optical coatings due to their high refractive index [13], transparent conducting oxides [14], and electrochromic devices [15]. However, whatever the end-user and the WO 3 functionality, understanding some correlations between micro- and nanostructure of WO 3 thin films and their physical properties still remains a scientific motivation. To this aim, structuring of thin films by means of bottom-up ways has been the focus of substantial efforts and promising Operating Conditions for WO 3 Films Growth WO 3 thin films were prepared by the Reactive Gas Pulsing Process (RGPP) in a custom-made magnetron sputtering system consisting of a 40 L vacuum chamber as shown in Figure 1. It was evacuated with a turbomolecular pump, backed by a mechanical pump leading to an ultimate pressure lower than 10 −5 Pa. A tungsten circular target (54 mm diameter and purity 99.9 at.%) was fixed at 65 mm from the center of the substrate holder. 
The target was DC sputtered using a constant current I W = 100 mA (the corresponding current density was J W = 50 A m −2 ). A pre-sputtering time was applied for 5 min in order to remove the contamination layer on the target surface and stabilize the process. The argon flow rate was kept constant at q Ar = 1.2 sccm. A constant pumping speed S = 10 L s −1 was used leading to an argon partial pressure p Ar = 2.8 × 10 −3 mbar. Oxygen mass flow rate q O2 was pulsed during WO 3 deposition by means of the Reactive Gas Pulsing Process (RGPP) [28]. A constant pulsing period P = 16 s was used. For all depositions, the oxygen injection time t ON was 12.8 s, which corresponds to a duty cycle dc = 80% of the pulsing period P (dc = t ON /P). This duty cycle was selected since such operating conditions correspond to the deposition of stoichiometric WO 3 thin films [29]. The maximum oxygen flow rate was q O2Max = 2.4 sccm. This value corresponds to the oxygen amount required to completely avalanche the reactive sputtering process in the oxidized sputtering mode. During the t OFF time, the oxygen mass flow rate was completely stopped (q O2min = 0 sccm). The substrate holder was grounded, and no external heating was added. The angle of inclination of the substrate holder (tilt angle, namely, α), could be changed from 0 to 90 • compared to the substrate normal. The rotating speed φ was computer-controlled and could be adjusted from 0 to a few revolutions per hour. The tilt angle α can be adjusted from 0 to 90 • and the substrate rotation φ can be 0 to a few revolutions per hour. Argon mass flow rate is kept constant, whereas oxygen mass flow rate is periodically controlled vs. time following a rectangular signal. Maximum and minimum oxygen flow rates (q O2Max and q O2min , respectively), pulsing period P and oxygen injection time t ON are computer-controlled. 
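The pulsed oxygen injection described above amounts to a rectangular flow signal whose duty cycle is dc = t_ON/P. A minimal sketch with the stated parameters (P = 16 s, t_ON = 12.8 s, q_O2Max = 2.4 sccm, q_O2min = 0 sccm); the function and constant names are illustrative, not from any deposition-control software.

```python
P_S = 16.0     # pulsing period P (s)
T_ON_S = 12.8  # oxygen injection time t_ON (s)
Q_MAX = 2.4    # maximum O2 flow rate (sccm)
Q_MIN = 0.0    # flow fully stopped during the t_OFF time (sccm)

def q_o2(t):
    """Oxygen mass flow rate at time t (s) for the rectangular RGPP signal."""
    return Q_MAX if (t % P_S) < T_ON_S else Q_MIN

duty_cycle = T_ON_S / P_S
print(duty_cycle)             # 0.8 -> the 80% duty cycle used for stoichiometric WO3
print(q_o2(5.0), q_o2(14.0))  # 2.4 0.0 (injection on, then off, within one period)
```

Evaluating q_o2 at any time larger than P simply wraps around the period, reproducing the periodic pulsing of the reactive gas.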
Substrates were standard microscope glass and (100) silicon wafer. Glass was used for optical transmission measurements, whereas silicon allows a frank fracture for microscopic cross-section observations. They were ultrasonically cleaned in acetone, ethanol, and deionized water for 10 min and dried in an oven at 60 • C for 20 min. The film's thickness was measured by profilometry, and the deposition time was adjusted in order to prepare WO 3 films with a thickness of 1 µm. For the deposition of a zigzag columnar structure, 3 tilt angles α were used: 75, 80, and 85 • . For each tilt angle, the substrate was periodically turned with discrete 180 • rotations. The number of zigzags N was gradually increased following N = 0.5, 1, 2, 4, 8, and 16 keeping a total film thickness of 1 µm. As a result, the number of 180 • rotations directly gives the number of zigzags, which can be checked from cross-section SEM observations. Characterization Morphology of WO 3 films was observed with a Dual Beam SEM/FIB FEI Helios 600i microscope on the fractured cross-section. Optical transmission spectra of the films deposited on glass substrates were recorded with a Lambda 900 UV-visible optical spectrometer. A mask (black piece) with a hole of 1 mm diameter was placed in the light path, to limit the observed area on the film. This mask allows probing a nearly constant thickness and avoids a thickness gradient, which is especially substantial in GLAD films (intrinsic to the GLAD process). Placing such a mask, the thickness gradient was estimated to be less than 1% of the probed film thickness. Optical transmittance spectra of thin films deposited on glass substrate were recorded in the visible range. The scanning wavelength ranged from 300 nm to 850 nm, with 1 nm s −1 scanning speed. In order to get optical properties of WO 3 films, the Swanepoel's method was implemented. 
This method was used because it requires only two transmission spectra (one at normal incidence and another at oblique incidence) and optical properties can be determined from wavelength values only [30]. To this aim, optical transmission vs. wavelength was recorded at 0 • (normal incidence), and at ±30 • (oblique incidence) in order to check the homogeneity of the probed part of the sample. Microstructure Cross-sections of WO 3 thin films were systematically observed by SEM. Figure 2 illustrates a typical series of zigzag columnar structures prepared with a duty cycle dc = 80% of the pulsing period P and with a tilt angle α = 80 • . The number of zigzags N as well as the column tilt angle β (angle between the substrate normal and the column center axis) can be easily measured. For N = 0.5 (i.e., tilted columns), β changes from 50, 52, and 53 • as α rises from 75, 80, and 85 • , respectively. As usually reported with the GLAD process, this column angle β is always lower than the α angle. These β vs. α values correlate well with the Tait et al. equation (also known as the cosine rule) relating both angles via a geometric analysis of the intercolumn shadowing geometry [31]. As the number of zigzags increases, it is worth noting that the column tilt angle β is not uniform but gradually varies from sublayer to sublayer. As previously reported by Hawkeye et al. [32], the angle β is incremented up to a few degrees after each column arm deposition, then saturates after several arm numbers for any initial tilt angle. This β evolution is due to the modified surface topography when depositing onto a pre-existing columnar arm. As a result, the effective tilt angle is not the real tilt angle α and varies over the film surface. The shadowing effect strongly depends on local variations of the surface geometry. 
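The cosine rule of Tait et al. cited above is commonly quoted as β = α − arcsin[(1 − cos α)/2]. A sketch evaluating it at the three tilt angles used here; this is a geometric shadowing idealization, so it only approximates the measured β values of 50, 52, and 53°.

```python
import math

def tait_beta(alpha_deg):
    """Column tilt angle beta (degrees) predicted by Tait's cosine rule.

    beta = alpha - arcsin[(1 - cos(alpha)) / 2]; sputtered GLAD films
    typically grow somewhat less tilted than this geometric estimate.
    """
    a = math.radians(alpha_deg)
    return alpha_deg - math.degrees(math.asin((1.0 - math.cos(a)) / 2.0))

for alpha in (75, 80, 85):
    print(alpha, round(tait_beta(alpha), 1))
```

The rule reproduces the observed trend (β rises with α and stays below α), while overestimating the measured angles by a few degrees, consistent with the ballistic sputtering regime discussed in the text.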
From the second arm growth, particles impinge on the growing film and so, on the column apex according to an effective tilt angle significantly higher than angle α leading to an increase of β after a few deposited arms. As the zigzag films thickness increases, widening of the column cross-section can also be observed as commonly noticed in tilted columnar GLAD films, despite the change of the growth direction induced by the 180 • φ rotations. This structural broadening is well described by a power law connecting column width and film's thickness via a scaling exponent describing how quickly the columns enlarge [33]. Despite the abrupt alternation of the tilt angle from +α to −α during the zigzag fabrication process, the column widening is intrinsic to the surface growth due to the ballistic regime of sputtered particles and the shadowing effect. As the number of zigzags N increases, the zigzag architecture becomes less and less distinguishable, i.e., the column angle β and the zigzag design still remain observable for N = 16 ( Figure 2f). However, a further increase of N definitely leads to an undefined zigzag shape, but wide and spaced columns perpendicular to the substrate surface instead. By increasing the number of zigzags, the length of a column arm reduces and becomes lower than a few tens of nanometers. The deposition process behaves as a segmented growth. Intervals associated with the abrupt 180 • φ rotations produce the growth of a new column. The latter develops on the opposite side at the apex of the previous column. The new column does not grow long enough for producing a significant shadowing effect leading to vertically oriented columns. This segmented characteristic of the zigzag growth also produces some effects on the films' density. Gu et al. [34] and Hass et al. 
[35] clearly reported that the porous structure changes as a function of the number of zigzags, and such a zigzag architecture exhibits three types of porosities with a distinct pore scale for each type. This hierarchical nature of the film's porosity cannot be clearly distinguished from our SEM cross-section observations ( Figure 2). However, these different types of porosities will be considered in § 3.3 to discuss some correlations between refractive index of WO 3 zigzag films and their porous microstructure. Optical Properties Optical transmittance spectra of WO 3 thin films deposited on glass substrate were measured in the visible range and for various numbers of zigzags ( Figure 3). A standard WO 3 film (tilt angle α = 0 • and without oxygen pulsing, i.e., dc = 100% of the pulsing period P) was also recorded and taken as a reference. Typical interference fringes (as commonly observed for dielectric thin films) are obtained whatever the number of zigzags. The film prepared by conventional sputtering process (α = 0 • ) exhibits the highest amplitude of the interference fringes. Such amplitude is strongly reduced for zigzag films. This is directly connected to the dense structure produced for WO 3 film with α = 0 • , whereas a more porous architecture is obtained for tilted and zigzag films (cf. discussion later). In addition, fringes tend to disappear at wavelengths close to the absorption edge and the average transmittance of zigzag films smoothly increases as a function of the wavelength, especially from 400 nm where it is around 60-70%, and beyond 80-90% at 800 nm (average transmittance is below 80% in the visible range for WO 3 film with α = 0 • ). This is mainly assigned to a more dispersive character of GLAD metal oxide thin films [36,37]. Changes of the growth direction (φ angle alternates from 0 to 180 • for zigzags deposition) create interfaces, which disturb the wave propagation through the films and favor absorption phenomena. 
Surface roughness of GLAD films also increases significantly for grazing incident angles (i.e., higher than 70 • ), and contributes in the same manner to the light dispersion [38]. Coatings 2021, 11, x FOR PEER REVIEW 6 of 13 Figure 3. Optical transmittance spectra in the visible range of 1 µm thick WO3 zigzag thin films prepared on glass substrate with various numbers of zigzags N = 0.5 to 16. Films were deposited with a tilt angle α = 80° and a duty cycle dc = 80% of the pulsing period P. A standard WO3 films is shown as a reference (α = 0° and without oxygen pulsing, i.e., dc = 100% of the pulsing period P). Figure 4 shows typical spectra of WO3 thin film 1 µm thick sputter-deposited on glass substrate with a tilt angle α = 85° and a duty cycle dc = 80% of the pulsing period P (i.e., number of zigzags N = 0.5). The sample was tilted at 0°, +30°, and −30° and the optical transmittance spectra were measured. The highest average transmittance is obtained at normal incidence (0°), whereas tilting the sample at ±30° reduces the transmittance and systematically shifts all optima to lower wavelengths. Tilting the sample to a given angle lengthens the optical pathlength through the film's thickness. In addition, it is commonly admitted that in the GLAD process, deposition angles α higher than 70° (i.e., at grazing incidence) favor the films roughness, emphasizing the light scattering phenomenon at the film/air interface. As a result, optical transmittance measured at ±30° is reduced compared to that recorded at normal incidence and film's thickness crosses by the light is geometrically extended, increasing the number of interference fringes. Figure 4 shows typical spectra of WO 3 thin film 1 µm thick sputter-deposited on glass substrate with a tilt angle α = 85 • and a duty cycle dc = 80% of the pulsing period P (i.e., number of zigzags N = 0.5). The sample was tilted at 0 • , +30 • , and −30 • and the optical transmittance spectra were measured. 
The highest average transmittance is obtained at normal incidence (0°), whereas tilting the sample at ±30° reduces the transmittance and systematically shifts all optima to lower wavelengths. Tilting the sample to a given angle lengthens the optical path through the film's thickness. In addition, it is commonly admitted that in the GLAD process, deposition angles α higher than 70° (i.e., at grazing incidence) favor the film roughness, emphasizing the light scattering phenomenon at the film/air interface. As a result, the optical transmittance measured at ±30° is reduced compared to that recorded at normal incidence, and the film thickness crossed by the light is geometrically extended, increasing the number of interference fringes.

Optical transmittance spectra in the visible range of the films deposited on glass substrate were used to determine the evolution of refractive index vs. wavelength by means of Swanepoel's method, involving measurements of the optical transmission vs. wavelength at normal incidence (0°) and at oblique incidence angles (±30°) [30]. The refractive index in the visible range of all zigzag WO3 films exhibits a typical Cauchy dispersion law (Figure 5). Whatever the number of zigzags, the film's index is lower than the WO3 bulk value (n_589 = 2.50 at 589 nm [39]). This is mainly assigned to the porous architecture commonly reported in oxide thin films prepared by GLAD and exhibiting tilted columns, zigzags, or helices [40]. The shadowing effect induced by the high deposition angle (α = 80°) and the periodic change of the φ angle (180° rotations) to produce zigzag columns both favor the growth of a voided microstructure.

It is also worth noting that the number of zigzags strongly influences the refractive index.
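The Cauchy-type dispersion just mentioned can be captured by fitting n(λ) = A + B/λ², which reduces to a linear least-squares fit in 1/λ². A minimal sketch with synthetic data; the coefficients below are placeholders, not values fitted in this work:

```python
import numpy as np

def fit_cauchy(wavelengths_nm, n_values):
    """Least-squares fit of the two-term Cauchy law n = A + B/lambda^2."""
    x = 1.0 / np.asarray(wavelengths_nm, dtype=float) ** 2
    B, A = np.polyfit(x, np.asarray(n_values, dtype=float), 1)
    return A, B  # B in nm^2 when wavelengths are given in nm

# Synthetic check: data generated with A = 1.90 and B = 4.0e4 nm^2.
lam = np.array([400.0, 500.0, 589.0, 700.0, 800.0])
n = 1.90 + 4.0e4 / lam**2
A, B = fit_cauchy(lam, n)
print(round(A, 3), round(B))  # recovers 1.9 and 40000
```

In practice the n(λ) points entering such a fit come from the Swanepoel analysis of the fringe envelopes.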
The lowest indices are obtained for N = 16 and N = 0.5 (tilted columns as shown in Figure 2a), with n_589 below 2.0. In contrast, the highest refractive indices are produced for WO3 films exhibiting N = 4 and 8 zigzags through the thickness of 1 µm. This optimized refractive index of WO3 thin films exhibiting a zigzag columnar architecture for a number of zigzags close to 4-8 agrees with former experimental and simulated optical properties of tungsten oxide nanostructured thin films [41]. As a result, the optical properties, and thus the film density, can be tuned and optimized by a simple adjustment of the number of zigzags (cf. Section 3.3 for correlation with the microstructure). Swanepoel's method was also used to determine the extinction and absorption coefficients of WO3 zigzag films as a function of the number of zigzags, and for a given wavelength (Figure 6). Both coefficients exhibit the same trend as the number of zigzags increases, i.e., a continuous reduction as N changes from 0.5 to 16. However, films containing the lowest number of zigzags (0.5 < N < 4) show the highest extinction and absorption coefficients, as well as the most significant drop vs. N. These two coefficients are connected to each other and directly represent how easily a volume of material can be penetrated by the light. As a result, these high coefficient values mean a strong attenuation of the light beam when crossing the zigzag structure. When the number of zigzags is reduced down to N = 0.5, WO3 films give rise to a tilted columnar architecture with a column broadening effect and an increase of surface roughness as the film thickness or the column angle increases [42]. Basically, the termination of the slanted nanocolumns at the film surface produces an unusual topography, which is characterized by a high surface roughness [43]. It is also important to notice that the reduction of the extinction and absorption coefficients is less pronounced when the number of zigzags is higher than 4.
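The two coefficients discussed above are indeed directly connected: the absorption coefficient α follows from the extinction coefficient k through α = 4πk/λ. A minimal sketch, with an illustrative k value rather than a measured one:

```python
import math

def absorption_coefficient(k, wavelength_cm):
    """Absorption coefficient alpha = 4*pi*k/lambda (cm^-1 for lambda in cm)."""
    return 4.0 * math.pi * k / wavelength_cm

# Illustrative: an extinction coefficient k = 0.01 at 589 nm (= 589e-7 cm)
# corresponds to alpha ~ 2.1e3 cm^-1.
alpha = absorption_coefficient(0.01, 589e-7)
print(round(alpha))
```

This is why both quantities in Figure 6 necessarily follow the same trend vs. the number of zigzags at a fixed wavelength.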
This smooth decrease for N > 4 also corresponds to the maximum of refractive index (Figure 6). A more voided structure is produced, and the films become more transparent.

Figure 6. Refractive index, extinction coefficient, and absorption coefficient at 589 nm of WO3 films vs. number of zigzags N. All films were sputter-deposited with a tilt angle α = 80° and a duty cycle dc = 80% of the pulsing period P. Fit lines are used to guide the eye.

As the deposition angle α rises from 0° to 85°, typical RMS (root-mean-square) roughnesses ranging from a few nm to more than 20 nm have been reported for GLAD thin films produced by evaporation or magnetron sputtering [44]. This high surface roughness favors the light scattering at the air/film interface and thus contributes to the high extinction and absorption coefficients. By increasing the number of zigzags, the length of a single zigzag reduces. Following the investigation of Backholm et al. [45], the RMS roughness significantly drops when the length scale (dimension of a single zigzag) is lower than 100 nm, and becomes below 2-3 nm as the length scale decreases down to a few nm.

Microstructure vs. Refractive Index

The refractive index of zigzag WO3 films prepared with three deposition angles (α = 75°, 80°, and 85°) has been systematically calculated at 589 nm and as a function of the number of zigzags (Figure 7). Whatever the deposition angle, all films exhibit the same evolution of refractive index vs.
N, i.e., a maximum value around N = 4. In addition, the refractive index is always lower than that of the WO3 film prepared by conventional sputtering (α = 0°). An increase of the deposition angle from 70° to 85° steadily shifts the indices to lower values for all numbers of zigzags. This trend has already been reported for WO3 films exhibiting a tilted columnar architecture [46]. It is mainly ascribed to the growth of a more porous structure as the deposition angle rises. The shadowing effect is emphasized for high deposition angles, which leads to higher amounts of voids in the films, and thus a decrease in the overall film density. For zigzags, a similar explanation can be suggested for this drop of refractive index vs. deposition angle. However, the number of zigzags also influences the optical properties and thus, the peculiar growth and geometry of the zigzag architecture has to be considered for understanding these optimized refractive indices close to N = 4.

The change in the growing direction at every 180° φ rotation disturbs the widening of the column cross-section and thus interrupts the shadowing effect from arm to arm. After the first growing stage, the subsequent column arms grow on the column apex and partially on the column sides previously located in the shadowing zone.
It somewhat fills the serrated zone, voids, and defects created by the preceding growth interval [47]. This partial filling of the defective growth, roughness, and porosities on a part of the columns becomes more and more significant when the number of zigzags increases (i.e., N > 2 in our study). The voided structure of a given column is reduced and the films tend to be denser. As the number of zigzags increases, the 180° φ rotation becomes more frequent and leads to a less and less distinguishable zigzag shape of the columnar structure. The as-deposited architecture looks like a field of columns perpendicular to the substrate surface with large spaces between columns (as shown in Figure 2f). By shortening the growth interval of each arm, the incident vapor cannot fully form a new arm and the structure gradually broadens. It is also interesting to note that the GLAD film porosity strongly depends on the thickness. As previously reported by Amassian et al. [48], a transition occurs from a high-coverage film to a porous film during the first growing stage (first monolayers) and above 1 nm of deposition. The shadowing phenomenon prevails over smoothing effects due to surface diffusion of atoms impinging on the columns. For the highest numbers of zigzags, the shadowing length reduces and, for each new growth direction, the formation of a complete new arm vanishes. A new column nucleates at the start of each growth interval and produces a nearly constant column width, preventing column extinction. This atypical growth of zigzag architectures also influences the shape and dimension of the resulting porous structure, as suggested by Hass et al. [34,35]. These authors proposed the occurrence of three types of pores. The largest pores (type I), with a width exceeding 0.3 µm, separate primary growth columns that are a few µm in width (intermediate columns of about 1 µm width exist within the primary columns). They are bounded by narrower type II pores that are ~0.1 µm wide.
Finally, type III pores of ~20 nm width exist between even finer growth columns (20-80 nm in width) present within the secondary growth columns. These type III pores are usually discontinuous. The same authors showed that the thermal conductivity of zigzag films exhibits a minimum value as a function of the number of zigzags, as also reported by Amaya et al. [49]. They assigned this minimum value to the type I pores, which have a longer wavelength and a greater inclination angle than type II and III pores. As a result, type I pores mainly rule the heat flow propagation through the zigzag film thickness. Assuming such an approach, an analogy can be suggested for explaining the maximum of refractive index that we systematically obtained in zigzag WO3 thin films for a number of zigzags around 4. For the smallest numbers of zigzags (N < 2), the type I pore wavelength is larger, corresponding to a highly porous structure, and thus a low refractive index (type II and III pores also contribute to lowering the refractive index). For the highest numbers of zigzags (N > 8), type I pores tend to be perpendicular to the substrate surface (the contribution from type II and III pores becomes negligible) with a pore amplitude larger than the column width, and so a reduction of the film refractive index.

Conclusions

WO3 thin films 1 µm thick exhibiting a zigzag columnar architecture were produced by magnetron reactive sputtering combining GLancing Angle Deposition (GLAD) and the Reactive Gas Pulsing Process (RGPP). The number of zigzags was systematically changed from 0.5 to 16, leading to a tunable zigzag architecture with adjustable arm lengths. For the lowest numbers of zigzags, a series of column arms with alternating growth direction was clearly produced, whereas a reduction of the growth interval between each 180° φ rotation (highest numbers of zigzags) led to single, vertically oriented columns.
Optical properties were investigated, especially the refractive index, extinction, and absorption coefficients in the visible range. Both coefficients exhibited a continuous decrease as a function of the number of zigzags. This was mainly assigned to the high surface roughness favoring the light scattering at the air/film interface, particularly when the number of zigzags changed from 0.5 to 4. WO3 zigzag films prepared with the highest deposition angle (α = 85°) gave rise to the lowest refractive indices because of the largest voided structure produced as the deposition angle tends to the most grazing incidence. Interestingly, the refractive index showed a maximum value as a function of the number of zigzags whatever the deposition angle (α = 70°, 80°, or 85°). A correlation was proposed between this optimized optical characteristic and the hierarchical pore structure, especially the width and the inclination of the biggest pores between zigzags as a function of their number. As a result, such a zigzag architecture appears as an original way to precisely tune the optical properties of metallic oxide thin films, and can certainly be extended to other properties.

Funding: This work has been supported by the Region Bourgogne Franche-Comté and by the EIPHI Graduate School (contract "ANR-17-EURE-0002").

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: All data are presented in the current manuscript.
Gravitational Wave Damping of Neutron Star Wobble

We calculate the effect of gravitational wave (gw) back-reaction on realistic neutron stars (NS's) undergoing torque-free precession. By 'realistic' we mean that the NS is treated as a mostly-fluid body with an elastic crust, as opposed to a rigid body. We find that gw's damp NS wobble on a timescale τ_θ ≈ 2 × 10⁵ yr [10⁻⁷/(∆I_d/I_0)]² (kHz/ν_s)⁴, where ν_s is the spin frequency and ∆I_d is the piece of the NS's inertia tensor that "follows" the crust's principal axis (as opposed to its spin axis). We give two different derivations of this result: one based solely on energy and angular momentum balance, and another obtained by adding the Burke-Thorne radiation reaction force to the Newtonian equations of motion. This problem was treated long ago by Bertotti and Anile (1973), but their claimed result is wrong. When we convert from their notation to ours, we find that their τ_θ is too short by a factor of order 10⁵ for typical cases of interest, and even has the wrong sign for ∆I_d negative. We show where their calculation went astray.

I. INTRODUCTION

This paper calculates the effect of gravitational wave (gw) back-reaction on the torque-free precession, or wobble, of realistic, spinning neutron stars (NS's). By 'realistic' we mean the NS is treated as a mostly-fluid body with an elastic crust, as opposed to a rigid body. (However, we do not include any superfluid effects in our analysis.) Freely precessing neutron stars are a possible source for the laser interferometer gw detectors (LIGO, VIRGO, and GEO under construction, TAMA already operational); it is the prospect of gravitational wave astronomy that motivated our study. Also, the first clear observation of free precession in a pulsar signal was reported very recently [1], making this investigation all the more timely.
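The damping-time scaling quoted in the abstract can be wrapped in a few lines for quick estimates. A minimal sketch taking the quoted prefactor at face value; the 300 Hz example is an illustrative assumption:

```python
def wobble_damping_time_yr(dId_over_I0, nu_s_khz):
    """GW wobble damping time, using the scaling quoted above:
    tau_theta ~ 2e5 yr * [1e-7/(dI_d/I_0)]^2 * (kHz/nu_s)^4."""
    return 2.0e5 * (1.0e-7 / dId_over_I0) ** 2 * (1.0 / nu_s_khz) ** 4

# Fiducial values reproduce the quoted 2e5 yr; a 300 Hz star with the
# same deformation damps far more slowly (~2.5e7 yr).
print(wobble_damping_time_yr(1.0e-7, 1.0))  # -> 200000.0
print(wobble_damping_time_yr(1.0e-7, 0.3))
```

The strong ν_s⁴ dependence is why only the fastest-spinning stars are interesting from the gw-damping point of view.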
The effect of gw back-reaction on wobbling, axisymmetric rigid bodies was first derived 27 years ago, in an impressively early calculation by Bertotti and Anile [2]. In the same paper, Bertotti and Anile [2] went on to calculate the effect of gw back-reaction on wobble for the more realistic case of an elastic NS. When cast into our notation, their claimed gw timescale is 5I₁c⁵/[2G(2πν_s)⁴ ∆I_Ω ∆I_d], where ∆I_Ω is the asymmetry in the moment of inertia due to centrifugal forces and ∆I_d is the asymmetry due to some other mechanism, such as strain in the solid crust. Taking ∆I_Ω to be (roughly) the asymmetry expected for a rotating fluid according to ∆I_Ω/I ≈ 0.3(ν_s/kHz)², we would then have a damping time of merely 0.6 yr (kHz/ν_s)⁶ [10⁻⁷/(∆I_d/I)] (10⁴⁵ g cm²/I). Despite the fundamental beauty of this problem and its potential astrophysical significance, their remarkable claim, that in realistic NS's gw's damp wobble with amazing efficiency, was apparently little known. (A citation index search showed that Bertotti and Anile [2] had been referenced by other authors only four times in the last 27 years.) We will show that the Bertotti and Anile result for elastic NS's is very wrong, however. For typical cases of interest, their gw timescale τ_θ is too short by a factor ∼10⁵! Moreover, their calculation even gives the wrong sign (exponential growth instead of damping) when ∆I_d is negative.¹ In contrast, we find that gw's always act to damp the wobble in realistic NS's, just as for rigid bodies. While in Nature the typical case will be ∆I_d positive, ∆I_d < 0 can also occur in principle.

¹ Actually, Bertotti and Anile [2] never claim in words that they find unstable growth of the wobble angle when ∆I_d < 0, but that is what is found if one just takes their formulae and converts from their notation to ours, as above. Moreover, we have repeated their (flawed) calculation, including their one crucial error, and seen that it does lead to a prediction of exponential wobble growth for ∆I_d negative. (The conversion from their notation to ours is simply (δ₁I − δ₂I)(cos²γ − ½ sin²γ) → ∆I_d and δ₂I → ∆I_Ω.)

We call attention to this case not because it is common, but because it highlights how much our result differs from Bertotti and Anile [2], and because, in fact, their implicit prediction of exponential wobble growth for this case provided our initial impetus to look more closely at this problem. The organization of this paper is as follows. In §2 we derive the gw damping timescale for rigid-body wobble, using the mass quadrupole expressions for the energy and angular momentum radiated to infinity. (This derivation is actually Exercise 16.13 in the textbook by Shapiro and Teukolsky [3].) We give another derivation of τ_θ in §3, this time by adding the Burke-Thorne radiation reaction force directly to the Newtonian equations of motion. This latter approach was how Bertotti and Anile [2] first calculated (correctly) the gw damping time for wobbling, rigid bodies. In §4 we review standard material on the torque-free precession of elastic bodies, in the absence of viscous terms or gw back-reaction. In §5 we derive the gw damping timescale τ_θ in the elastic case, using energy and angular momentum balance. In §6 we give a second derivation of τ_θ in the elastic case, using the Burke-Thorne radiation reaction force to evolve the elastic body's free precession. This was also the strategy of Bertotti and Anile [2], and we show where they went wrong. Briefly, they did not realize that in addition to torquing the NS, the radiation reaction force also perturbs the NS's shape (in particular, its inertia tensor). When solving for the evolution of the wobble angle, we show that the "perturbed shape" term in the equations of motion almost entirely cancels the gw torque term that they do include. (Of course, by definition there is no "perturbed shape" term in the rigid-body case, which is probably why they forgot this term when adapting that calculation to the elastic case.) In §7 we describe how to include the effects of a fluid core in the radiation reaction calculation.

Finally, in §8 we conclude by commenting briefly on the astrophysical implications of our result. We will work in cgs units.

II. RADIATION REACTION FOR A RIGID BODY: ENERGY AND ANGULAR MOMENTUM BALANCE

The derivation of the wobble damping rate for realistic NS's, using energy and angular momentum balance, is rather similar to the corresponding derivation for rigid bodies. Here we briefly review the solution to the rigid-body problem, as a warm-up for tackling the realistic case. Consider an axisymmetric rigid body with principal axes x̂₁, x̂₂, x̂₃ and principal moments of inertia I₁ = I₂ ≠ I₃. Let the body have angular momentum J, misaligned from x̂₃. Define the wobble angle θ by J · x̂₃ = J cos θ. It is a standard result from classical mechanics that (in the absence of external torques) the body axis x̂₃ precesses around J with (inertial frame) precession frequency φ̇ = J/I₁, with θ constant [4]. Together, the pair (θ, φ̇) completely specify the free precession (modulo a trivial constant of integration specifying φ at t = 0). We wish to calculate the evolution of these two parameters using the time-averaged fluxes (Ė, J̇). Straightforward application of the mass quadrupole formalism [6] gives

Ė = −(2G/5c⁵) ∆I² φ̇⁶ sin²θ (cos²θ + 16 sin²θ), (2.1)
J̇ = Ė/φ̇, (2.2)

where ∆I ≡ I₃ − I₁. It follows from differentiation of φ̇ = J/I₁ that

φ̈ = J̇/I₁. (2.3)

To calculate the rate of change of the wobble angle, rearrange

Ė = (∂E/∂J) J̇ + (∂E/∂θ) θ̇ (2.4)

to give

θ̇ = J̇ [φ̇ − ∂E/∂J] / (∂E/∂θ), (2.5)

where Eq. (2.2) has been used. The energy of the body is simply its kinetic energy:

E = (J²/2)(sin²θ/I₁ + cos²θ/I₃), (2.6)

and so

∂E/∂J = J (sin²θ/I₁ + cos²θ/I₃), (2.7)
∂E/∂θ = J² sin θ cos θ (1/I₁ − 1/I₃), (2.8)

which yield

θ̇ = −(2G/5c⁵) (∆I²/I₁) φ̇⁴ sin θ cos θ (cos²θ + 16 sin²θ). (2.9)

We can construct timescales on which the spin-down and alignment occur:

1/τ^rigid_φ̇ ≡ −φ̈/φ̇ = (2G/5c⁵) (∆I²/I₁) φ̇⁴ sin²θ (cos²θ + 16 sin²θ), (2.10)
1/τ^rigid_θ ≡ −θ̇/tan θ = (2G/5c⁵) (∆I²/I₁) φ̇⁴ cos²θ (16 sin²θ + cos²θ). (2.11)

Radiation reaction causes both φ̇ and sin θ to decrease, regardless of whether the body is oblate or prolate.
Note that in the limit of small wobble angle the inertial precession frequency remains almost constant (τ^rigid_φ̇ → ∞), while θ decreases exponentially on the timescale

τ^rigid_θ = 5c⁵ I₁ / (2G ∆I² φ̇⁴). (2.13)

In the limit of vanishingly small wobble angle the partial derivative on the lhs of Eq. (2.7) becomes what we conventionally call the 'spin frequency' Ω of the body [5]. Eq. (2.5) then shows that θ̇ is proportional to the difference between the inertial precession frequency φ̇ and the spin frequency Ω. This difference remains finite as θ → 0 according to φ̇ − Ω = (∆I/I₁) Ω [1 + O(θ²)]. Thus for a prolate body (∆I < 0), such as an American football, the body precesses slower than it spins, while for an oblate body the inertial precession frequency is higher than the spin frequency. Since the denominator in (2.5) is also proportional to ∆I, the wobble angle decreases regardless of the sign of this factor. This viewpoint will be useful when we consider the radiation reaction problem for an elastic body.

III. RADIATION REACTION FOR RIGID BODIES: LOCAL FORCE

We will now re-derive the spin-down and alignment timescales by adding the Burke-Thorne local radiation reaction force to the equations of motion. The Burke-Thorne radiation reaction potential at a point x is given by [6]:

Φ^RR(x, t) = (G/5c⁵) x_a x_b d⁵Ī_ab/dt⁵, (3.1)

where Ī_ab denotes the trace-reduced quadrupole moment tensor (written Ī_ab to distinguish it from the moment of inertia tensor I_ab):

Ī_ab = ∫ ρ (x_a x_b − (1/3) δ_ab r²) d³x. (3.2)

Note that this is related to the moment of inertia tensor according to

Ī_ab = −I_ab + (1/3) δ_ab I_cc, (3.3)

with the result that

Φ^RR(x, t) = −(G/5c⁵) x_a x_b d⁵I_ab/dt⁵, (3.4)

since the trace I_cc is constant for a rigid body. The radiation reaction force (on a particle of unit mass) is F^RR_a = −∂Φ^RR/∂x_a. The instantaneous (not time-averaged) torque on a body can easily be shown to be

T_a = −(2G/5c⁵) ε_abc Ī_bd d⁵Ī_cd/dt⁵. (3.5)

Making use of Eq. (3.4) it is straightforward to calculate this torque for the free precessional motion. We find (Eq. (3.6)) that the torque acts always in the plane containing the angular momentum and the symmetry axis x̂₃, and perpendicular to n_d, i.e., along the direction of n_⊥nd shown in Fig. 1. We will refer to this plane as the reference plane.
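Before turning to the torque formulation, the small-angle rigid-body damping rate 1/τ_θ = (2G/5c⁵)(∆I²/I₁)φ̇⁴ from §II can be evaluated for fiducial parameters, and the alignment and spin-down equations can be checked numerically to conserve φ̇ cos θ. A sketch; the fiducial numbers (I₁ = 10⁴⁵ g cm², ∆I/I₁ = 10⁻⁷, ν_s = 1 kHz) are illustrative assumptions:

```python
import math

G, C = 6.674e-8, 2.998e10          # cgs units
K = 2.0 * G / (5.0 * C**5)

def tau_theta_rigid_sec(I1, dI, phidot):
    """Small-angle damping time tau = 5 c^5 I1 / (2 G dI^2 phidot^4)."""
    return 1.0 / (K * dI**2 / I1 * phidot**4)

I1, dI = 1.0e45, 1.0e38            # fiducial: dI/I1 = 1e-7
phidot = 2.0 * math.pi * 1000.0    # 1 kHz spin
tau_yr = tau_theta_rigid_sec(I1, dI, phidot) / 3.156e7
print(f"{tau_yr:.2e} yr")          # ~2e6 yr for these fiducial values

# Numerical check (scaled units with K*dI^2/I1 = 1): the coupled equations
#   dphidot/dt = -phidot^5 sin^2(th) (cos^2 th + 16 sin^2 th)
#   dth/dt     = -phidot^4 sin(th) cos(th) (cos^2 th + 16 sin^2 th)
# conserve the product phidot*cos(th).
ph, th, dt = 1.0, 0.5, 1.0e-5
inv0 = ph * math.cos(th)
for _ in range(200000):
    fac = math.cos(th)**2 + 16.0 * math.sin(th)**2
    ph, th = (ph - dt * ph**5 * math.sin(th)**2 * fac,
              th - dt * ph**4 * math.sin(th) * math.cos(th) * fac)
print(abs(ph * math.cos(th) / inv0 - 1.0) < 1e-3)  # -> True
```

The conserved product is exactly the statement, made explicit below in the torque formulation, that the body tends to uniform rotation at φ̇₀ cos θ₀.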
Fig. 1. For the rigid body the gravitational radiation reaction torque T lies in the reference plane. It acts perpendicular to the symmetry axis, i.e., along the direction of the unit vector n_⊥nd.

The evolution equations can be calculated without going to the trouble of writing down Euler's equations. Differentiation of φ̇ = J/I₁ gives

φ̈ = J̇/I₁, (3.7)

and so

φ̈ = (T · n_J)/I₁. (3.8)

Define J_⊥nd as the component of the angular momentum perpendicular to the symmetry axis. Then differentiation of the trivial relation tan θ = J_⊥nd/(J · n_d) gives

θ̇ = T_⊥J/J, (3.10)

where T_⊥J is the component of the torque perpendicular to J. Equations (3.8) and (3.10) show that the action of the torque breaks down neatly into two parts. The component along J acts to change the inertial precession frequency φ̇, while the component perpendicular to J acts to change θ. Substitution of (3.6) into (3.8) and (3.10) then reproduces the spin-down and alignment of equations (2.3) and (2.9), so the two methods of calculation agree. As this torque formulation makes clear (by combining Eqs. (3.8) and (3.10)), the product φ̇ cos θ remains constant, so that if a body is set into free precession described by (θ₀, φ̇₀), it tends to a non-precessing motion about x̂₃ with (inertial frame) angular velocity φ̇ = φ̇₀ cos θ₀.

IV. TORQUE-FREE PRECESSION OF ELASTIC BODIES

We now review the theory of the free precession of an elastic body. This problem was first addressed in the context of the Earth's own motion. A rigorous treatment of the methods employed can be found in Munk and MacDonald [7]. The terrestrial analysis was extended to neutron stars by Pines and Shaham [8]. The energy loss due to gravitational waves was considered by Alpar and Pines [9]. Following the latter authors, we will model a star consisting of a centrifugal bulge and a single additional deformation bulge.
Alpar and Pines wrote an inertia tensor for the elastic body of the form

I_ab = I_0,S δ_ab + ∆I_Ω (n_Ω,a n_Ω,b − δ_ab/3) + ∆I_d (n_d,a n_d,b − δ_ab/3), (4.1)

where δ is the unit tensor diag[1, 1, 1], n_Ω is the unit vector along the star's angular velocity Ω, and n_d is the unit vector along the body's principal deformation axis (explained below). The I_0,S and ∆I_d pieces of I together represent the inertia tensor for the corresponding non-rotating star. The ∆I_d term is just the non-spherical piece of this tensor (approximated as axisymmetric). If the star were a perfect fluid, ∆I_d would vanish, but in real stars (and the Earth) ∆I_d is non-zero due to crustal shear stresses and magnetic fields. The term ∆I_Ω (> 0 and ∝ Ω² for small Ω) represents the increase in the star's moment of inertia (compared to the non-rotating case) due to centrifugal forces. Since the crust of a rotating NS will tend to "relax" towards its oblate shape, having ∆I_d > 0 is surely the typical case in Nature. (E.g., if one could slow the Earth down to zero angular velocity without cracking its crust, it would remain somewhat oblate: the crust's "relaxed, zero-strain" shape is oblate, and after centrifugal forces are removed, the stresses that build up in the crust will act to push it back towards that relaxed shape.) But a negative ∆I_d is also possible in principle. We say the deformation bulge is oblate when ∆I_d > 0 and prolate when ∆I_d < 0.

What is a typical magnitude for ∆I_d in real, spinning NS's? Let us assume ∆I_d is due primarily to crustal shear stresses (as opposed to stresses in a hypothetical solid core, extremely strong B-fields, or pinned superfluid vortices). Then for a relaxed crust (i.e., a crust whose reference ellipticity is very close to its actual ellipticity), we have ∆I_d = b ∆I_Ω, where Alpar and Pines [9] estimate b ∼ 10⁻⁵ for a primordial (cold catalyzed) crust. The maximum value for ∆I_d/I is therefore of order ∼10⁻⁵. The parameter b (which arises from internucleon Coulomb forces) scales like the average Z²/A of the crustal nuclei.
Since crusts of accreted matter (as in LMXB's) have smaller-Z nuclei [10], their b factor is correspondingly smaller, by a factor ∼2-3. Using ∆I_Ω/I ∼ 0.3(ν_s/kHz)², we would therefore estimate ∆I_d/I ∼ 10⁻⁷ for a NS with a relaxed, accreted crust and ν_s ∼ 300 Hz, while for the Crab one would expect ∆I_d/I ∼ 3 × 10⁻⁹ (again, assuming its crust is almost relaxed). For the freely precessing pulsar reported in Stairs et al. [1], where the body-frame precession period is ∼2 × 10⁸ times the rotation period, Eq. (4.15) below (valid for elastic bodies) yields ∆I_d/I = 5 × 10⁻⁹. For b = 10⁻⁵ this corresponds to a reference oblateness of 5 × 10⁻⁴. This is consistent with the star's crust having solidified when it was spinning at about 40 Hz, assuming that neither glitches nor plastic flow have modified its shape since. (When the effects of crust-core coupling are taken into account, giving Eq. (7.5), this initial frequency reduces to 12 Hz. See Jones [11] for a review of pulsar free precession observations.)

Precession occurs when n_d and n_Ω are not aligned. Below we describe the precessional motion when there is no damping. This analysis is quite general: it applies to any star whose inertia tensor is described by Eq. (4.1), independent of what causes the deformation bulge. In the case of several equally important sources of deformation along different axes, extra terms must be added to (4.1) and the analysis would become more complex. To proceed it is necessary to use equation (4.1) to form the angular momentum J of the body. However, as we are not modelling a rigid body, we must take care to allow for relative motion of one part with respect to another. Following [7], we will write the velocity of some point in the body as the sum of a rotational velocity with angular velocity Ω and a small velocity u relative to this rotating frame.
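The numerical estimates quoted earlier in this section chain together simply: the body-frame precession period gives ∆I_d/I directly, dividing by b gives the reference oblateness, and inverting ∆I_Ω/I ≈ 0.3(ν_s/kHz)² gives the spin frequency at which the crust would have solidified. A sketch reproducing the numbers quoted above for the Stairs et al. pulsar:

```python
import math

# Body-frame precession period ~ 2e8 rotation periods  =>  dI_d/I:
dId_over_I = 1.0 / 2.0e8
print(f"{dId_over_I:.1e}")            # -> 5.0e-09, as quoted

# Reference oblateness for b ~ 1e-5 (the Alpar & Pines estimate):
eps_ref = dId_over_I / 1.0e-5
print(f"{eps_ref:.1e}")               # -> 5.0e-04

# Spin frequency at crust solidification, from dI_Omega/I ~ 0.3 (nu/kHz)^2:
nu_khz = math.sqrt(eps_ref / 0.3)
print(f"{1000.0 * nu_khz:.0f} Hz")    # ~41 Hz, i.e. 'about 40 Hz'
```

The quoted reduction to 12 Hz with crust-core coupling requires the Eq. (7.5) correction and is not reproduced by this simple chain.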
We will call the frame that rotates at Ω the body frame, although it is only in the rigid-body limit that the body's shape is fixed with respect to this frame. In other words, the velocity of some particle making up the body is the sum of the body frame velocity Ω × r at that point r plus the velocity u of the point relative to the body frame. Then

J_a = I_ab Ω_b + h_a, (4.2)

where the possibly time-varying moment of inertia is defined in the usual way:

I_ab = ∫ ρ (r² δ_ab − x_a x_b) d³x, (4.3)

while h_a is the angular momentum of the body relative to this frame:

h_a = ∫ ρ (r × u)_a d³x. (4.4)

We will neglect the h_a term when constructing a free precessional motion, as it can be shown that h_a is small in a well-defined sense [11]. Therefore we will simply write

J_a = I_ab Ω_b. (4.5)

Having formulated the problem in this manner, it is straightforward to show that the free precession of an elastic body is similar to that of a rigid one. First write down the angular momentum using (4.1) and (4.5). Referring all of our tensors to the body frame, with the 3-axis along n_d:

J = [I_0,S + (2/3)∆I_Ω − (1/3)∆I_d] Ω + ∆I_d (n_d · Ω) n_d. (4.6)

This shows that J, Ω, and n_d are coplanar. As the angular momentum is constant, this plane must rotate about J. As in the rigid-body case, we will refer to this as the reference plane. See Figure 2. Taking the components of (4.6) we obtain:

J₁ = I₁ Ω₁, (4.7)
J₂ = I₁ Ω₂, (4.8)
J₃ = (I₁ + ∆I_d) Ω₃ ≡ I₃ Ω₃, (4.9)

where I₁ ≡ I_0,S + (2/3)∆I_Ω − (1/3)∆I_d. These equations show that despite the triaxiality of I, the angular momentum components themselves are structurally equivalent to those of a rigid symmetric top. The equations of motion of the body (i.e., Euler's equations) involve only the components of J and Ω. Therefore equations (4.7)-(4.9) show that the free precession of the triaxial body is formally equivalent to that of a rigid symmetric top. We can think of the elastic body as having an effective moment of inertia tensor diag[I₁, I₁, I₃]. Note that the effective oblateness I₃ − I₁ is equal to ∆I_d. Now introduce standard Euler angles to describe the body's orientation, with the polar axis along J.
Let θ and φ denote the polar and azimuthal coordinates of the deformation axis, while ψ represents a rotation about this axis. We refer to θ as the wobble angle. Taking the ratio of components J_1 and J_3 using (4.7) and (4.9) at an instant when Ω_2 = 0, we obtain Eq. (4.10), where γ denotes the angle between Ω and n_d. See Figure 2, which shows the reference plane, containing the deformation axis n_d, the angular velocity vector Ω and the fixed angular momentum J. The vectors n_d and Ω rotate around J at the inertial precession frequency φ̇. The terms 'oblate' and 'prolate' refer to the deformation bulge. We will label the angle between J and Ω as θ̂. This angle is much smaller than θ, as can be seen by linearising (4.10) in ∆I_Ω and ∆I_d to give θ̂ = (∆I_d/I_3) sin θ cos θ. Note that, according to our conventions, when the deformation bulge is oblate ∆I_d and θ̂ are positive, but when the deformation bulge is prolate ∆I_d and θ̂ are negative. We can decompose the angular velocity according to Ω = φ̇ n_J + ψ̇ n_d. Substituting this into equation (4.6) and resolving along n_J and n_d gives φ̇ and ψ̇, where J denotes the magnitude of the angular momentum. Note that when ∆I_Ω = 0 the above formulae reduce to the familiar rigid-body equations. Thus the motion is simple. As viewed from the inertial frame, the deformation axis rotates at a rate φ̇ in a cone of half-angle θ about the angular momentum vector. This angular velocity is sometimes called the inertial precession frequency. The centrifugal bulge also rotates around the angular momentum vector but, for oblate deformations, on the opposite side of J, making an angle θ̂ ≡ γ − θ with J. Superimposed upon this is a rotation about the deformation axis at a rate ψ̇, known as the body-frame precession frequency or sometimes simply the precession frequency. This frequency is negative for an oblate distortion and positive for a prolate one.
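A minimal numerical sketch of these small-angle relations. The expressions φ̇ ≈ J/I_1 and ψ̇ ≈ −(∆I_d/I_1) φ̇ cos θ are the familiar rigid-symmetric-top formulas to which the text says the elastic-body result reduces; the θ̂ formula is the linearised result quoted above. The specific numbers are illustrative:

```python
import math

def precession(J, I1, I3, dI_d, theta):
    """Small-wobble free-precession frequencies for the effective
    symmetric top diag[I1, I1, I3] with effective oblateness dI_d = I3 - I1.
    (Sketch: standard rigid-body relations, as appealed to in the text.)"""
    phi_dot = J / I1                                    # inertial precession frequency
    psi_dot = -(dI_d / I1) * phi_dot * math.cos(theta)  # body-frame precession frequency
    theta_hat = (dI_d / I3) * math.sin(theta) * math.cos(theta)  # angle (J, Omega)
    return phi_dot, psi_dot, theta_hat

# oblate bulge (dI_d > 0): psi_dot < 0, theta_hat > 0
print(precession(1.0, 1.0, 1.0 + 1e-6, 1e-6, 0.5))
# prolate bulge (dI_d < 0): psi_dot > 0, theta_hat < 0
print(precession(1.0, 1.0, 1.0 - 1e-6, -1e-6, 0.5))
```

Note that |ψ̇| ≪ φ̇ when |∆I_d| ≪ I_1, which is why the body-frame precession period can exceed the rotation period by the enormous factors quoted in the introduction.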
V. RADIATION REACTION FOR AN ELASTIC BODY: ENERGY AND ANGULAR MOMENTUM BALANCE. Here we derive the wobble damping time τ_θ for elastic bodies, based on energy and angular momentum balance. Once fully set up, the derivation is just a couple of lines, but to understand it, it is useful to carry along a simple, physical model for the deformed crust (however, our derivation will actually be completely general). Here is the model: take some non-rotating, spherical NS, and stretch a rubber band around some great circle on the crust. We shall refer to this great circle as the NS's equator. Obviously the effect of the rubber band is to make the NS slightly prolate (but still axisymmetric). To get an oblate shape, one can instead imagine sewing compressed springs into the surface of the crust at the equator. For definiteness, let the potential energy of the band (or springs) be V = (1/2) ε l², where l is its length. So ε is positive for the rubber band (prolate deformation, ∆I_d < 0) and negative for the springs (oblate deformation, ∆I_d > 0). Now give the NS angular momentum J about some axis that is not quite perpendicular to the equator. We now have our deformed, wobbling NS. We consider the equation of state of the star and the value of ε to be fixed once and for all, and consider how the energy of the system (star + band) varies as a function of its total angular momentum J and the wobble angle θ (the angle between J and the perpendicular to the equator); i.e., we consider E(J, θ). We will be concerned with small wobble angles, so let us expand E(J, θ) as a Taylor series in J and θ (Eq. (5.1)). Here E_0 is defined to be the energy of the (star + band) at zero J, and B, C, and F are expansion coefficients that in principle depend on the physical properties of the (star + band). Fortunately, we will soon see that there are simple relations between B, C, and F and previously defined physical parameters, such as ∆I_d.
Our ultimate goal is to obtain the two partial derivatives on the right-hand side of equation (2.5), where E now denotes the total energy. First, to see that no lower-order terms (such as J, θJ, θ², or θJ² terms) can appear in the expansion (5.1), note that the J = 0 configuration corresponds to the minimum of the potential energy of the (star + band) system. Displacements of the (star + band) are first order in J², so changes in the potential energy of the (star + band) are O(J⁴). Thus terms in E(J, θ) that are ∝ J² are kinetic-energy pieces. These terms with a J² in them are clearly just (1/2)(I_0^−1)_ab J_a J_b, where I_0^ab is defined to be the inertia tensor of the (star + band) at J = 0. (Corrections to the star's I^ab first enter the energy at O(J⁴).) We write I_0^ab = I_0,S δ^ab + ∆I_d (n_d^a n_d^b − δ^ab/3), where I_0,S represents the 'spherical part' of I_0^ab. Then (I_0^−1)^ab = (1/I_0,S)[δ^ab − (∆I_d/I_0,S)(n_d^a n_d^b − δ^ab/3)], where a term of O(∆I_d²) has been neglected. The kinetic-energy part of E is then, up to terms of O(∆I_d²) and O(J⁴), E_kin = J²/(2 I_0,S) − (∆I_d/(2 I_0,S²)) J² (cos²θ − 1/3), where we have used the small-wobble-angle result J_a n_d^a = J(1 − θ²/2). From this we immediately read off the values of B and F in expansion (5.1), and obtain the partial derivative (∂E/∂θ)_J = (∆I_d/I_0,S²) θ J². To compute the partial derivative in the numerator of equation (2.5), it is sufficient to consider the θ → 0 limit [5], in which (∂E/∂J)_θ = Ω, where Ω denotes the spin frequency in the axisymmetric limit; it is related to the inertial precession frequency by Eq. (5.8). The final physics inputs we need are the fluxes Ė and J̇, Eqs. (5.9) and (5.10), which follow from the quadrupole formalism in the same way as for the rigid body. The necessary pieces have now been gathered; substituting into equation (2.5) gives the alignment rate θ̇. This is simply the same damping rate as for a rigid body, with the replacement (∆I/I_1) → ε_d. The corresponding damping time is much longer than the timescale claimed by Bertotti and Anile [2], by a factor ∆I_Ω/∆I_d, which is typically ∼ 10^5 or higher. Finally, the spin-down rate φ̈ can be obtained in the same way as for a rigid body, i.e.
by differentiating φ̇ = J/I_1 and using equations (5.9) and (5.10). Strictly there will also be a term in İ_1, but this correction will be down by a factor of order (Ω/Ω_max)². We then obtain the same spin-down as for a rigid body, again with the replacement ∆I → ∆I_d. VI. RADIATION REACTION FOR AN ELASTIC BODY: LOCAL FORCE. We now give a second derivation of the wobble damping rate for an elastic star, by directly adding the gravitational-wave radiation reaction force to the Newtonian equations of motion. Besides being a satisfying consistency check on the calculation in the previous section, by doing this second derivation correctly we can show where Bertotti and Anile [2] went astray. As was the case for the rigid body, the Burke-Thorne potential will exert a torque on the spinning star. However, this is not the only effect of the radiation reaction force: it will also distort the shape of the NS and thus its moment of inertia. The equation describing the precession is then of the form of Eq. (6.1), where I_N denotes the Newtonian part of the moment of inertia tensor, δI_BT the perturbation in this tensor due to the Burke-Thorne force, and T the Burke-Thorne torque. It was the δI_BT terms that were not included by Bertotti and Anile. Fortunately, these can also be calculated explicitly, as we show below. A. Effect of Φ_RR on the NS's Shape. It is perhaps surprising that one can explicitly determine the effect of Φ_RR on the NS's moment of inertia, since the answer would seem to depend on the NS's mass and the details of its equation of state; i.e., one might worry that extra parameters must be specified even to make the problem well-defined. However, the point is that (from symmetry arguments) the perturbation ∆I_ij depends only on a single physical parameter, and this parameter already appears in our Newtonian equations of motion. That parameter is ∆I_Ω/Ω², the amount of oblateness caused "per unit centrifugal force".
The point is that both the centrifugal and radiation reaction forces have the very special property that they grow linearly with distance from the center of the star. This fact, coupled with symmetry arguments, is enough to determine ∆I_ij in terms of ∆I_Ω/Ω²; no new physical parameters have to be introduced. Let Φ_Λ be some external potential of the form Φ_Λ ≡ Λ_ab x_a x_b, where Λ_ab is some trace-free tensor. Allow this potential to act on the non-rotating (and so spherically symmetric) NS; it will induce a perturbation ∆I_ab in the NS's inertia tensor. Since the background is spherically symmetric, the only possibility (to first order in the perturbation) is that ∆I_ab = C Λ_ab, where C is some constant (i.e., independent of Λ_ab). We can determine C as follows: decompose the centrifugal potential into a spherically symmetric and a trace-free piece. The radiation reaction potential for the freely precessing elastic body can be found by substituting the radiation-reaction-free motion into equation (3.1), giving Eq. (6.3). The first term is the potential caused by the motion of the deformation bulge, the second by the centrifugal bulge. The differentiations of the unit vectors are straightforward. In the case where θ ≪ 1 we can approximate n_d ≈ n_J + θ n_⊥J and n_Ω ≈ n_J − θ̂ n_⊥J, where n_⊥J is the unit vector in the reference plane which lies perpendicular to J and points towards n_d. We then find an expression containing the combination (v̂_a n^J_b + n^J_a v̂_b) (Eq. (6.4)), in which v̂ denotes the unit vector n_J × n_⊥J. Using the prescription described above, these radiation reaction potentials can be converted immediately into perturbations of the moment of inertia tensor (Eq. (6.5)). It now remains to compute the torque T using equation (3.5). We obtain four terms, corresponding to the expansion of the product of I with its fifth time derivative. Again linearising with respect to θ, we obtain Eq. (6.6). Define ε_Ω ≡ ∆I_Ω/I_0,S and ε_d ≡ ∆I_d/I_0,S. Then the terms on the rhs of (6.6) stand in the ratio ε_d/ε_Ω : ε_d : 1 : ε_Ω.
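The constant C is fixed by the centrifugal case. The following bookkeeping is a sketch of that step (the normalization convention for Λ_ab is an assumption on our part; only the combination C = 2∆I_Ω/Ω² matters):

```latex
% Centrifugal potential per unit mass, split into spherical and trace-free pieces:
\Phi_{\rm cent} = -\tfrac{1}{2}\,|\boldsymbol{\Omega}\times\mathbf{x}|^{2}
  = \tfrac{1}{2}(\Omega_a x_a)^{2} - \tfrac{1}{2}\Omega^{2} r^{2}
  = \Lambda^{\rm cent}_{ab}\, x_a x_b \;-\; \tfrac{1}{3}\Omega^{2} r^{2},
\qquad
\Lambda^{\rm cent}_{ab} = \tfrac{1}{2}\!\left(\Omega_a \Omega_b
  - \tfrac{1}{3}\Omega^{2}\delta_{ab}\right).

% With Omega along the 3-axis,
\Lambda^{\rm cent}_{33} - \Lambda^{\rm cent}_{11} = \tfrac{1}{2}\Omega^{2},
% so applying Delta I_ab = C Lambda_ab to the centrifugal bulge gives
\Delta I_{33} - \Delta I_{11} = \tfrac{1}{2}\,C\,\Omega^{2} \equiv \Delta I_\Omega
\quad\Longrightarrow\quad
C = \frac{2\,\Delta I_\Omega}{\Omega^{2}} .
```

With C fixed this way, any other linear-in-x² potential (such as the Burke-Thorne terms above) maps to an inertia-tensor perturbation with no new parameters, exactly as claimed in the text.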
We are now in a position to write down the equation for d(I_N Ω)/dt. Using Eq. (6.5) and the Newtonian motion to compute d[(δI_BT)Ω]/dt, and neglecting terms of order θ², we find that Eq. (6.1) reduces to Eq. (6.7). We see that the last two terms on the rhs are cancelled by terms on the lhs. This leaves a rigid-body Newtonian problem, with the two remaining torque terms on the right-hand side. These terms stand in the ratio 1 : ε_Ω. In fact, the dominant term is the same as that obtained in the rigid-body case with the change ∆I → ∆I_d. We therefore find that the alignment rate as calculated using the local Burke-Thorne formalism agrees with the flux-at-infinity method. The previous force-based calculation of Bertotti and Anile [2] failed to include the deformation δI_BT, so the cancellations in equation (6.7) described above did not occur. Finally, it is easy to show that even when the approximations θ ≪ 1, ε_d ≪ 1 are not employed, the effective torques due to the δI_BT terms are still perpendicular to J, so the spin-down φ̈ obtained using this local formalism is necessarily the same as in the flux-at-infinity method. VII. ALLOWANCE FOR A LIQUID CORE. We have successfully described the effects of gravitational radiation reaction on an elastic precessing body. We now briefly describe how to extend this result to the realistic case where the star consists of an elastic shell (the crust) containing a liquid core. The Earth itself is just such a body, and the form of its free precession was considered long ago. We will base our treatment on that of Lamb [12], who considered a rigid shell containing an incompressible liquid of uniform density. To make the problem tractable, the motion of the fluid was taken to be one of uniform vorticity. We will assume that the ellipticity of the shell, and also the ellipticity of the cavity in which the fluid resides, are small.
Then the small-angle free precession of the combined system can be found by means of a normal-mode analysis of the equations of motion [12]. The key points are as follows. The fluid's angular velocity vector does not significantly participate in the free precession; instead it remains pointing along the system's total angular momentum vector. The shell precesses about this axis in a cone of constant half-angle. The fluid exerts a force on the shell such that the shell's body-frame precession frequency ψ̇ is increased in magnitude (Eq. (7.1)), where ∆I denotes the difference between the 1 and 3 principal moments of inertia of the whole body, not just the shell. We now wish to calculate the alignment rate of such a body due to gravitational radiation reaction. The averaged energy and angular momentum fluxes, as well as the instantaneous torque, depend only upon the orientation of the mass quadrupole of the body, and so are exactly the same as if the body were rigid, i.e. equations (2.1), (2.2) and (3.6) apply. Expressions for the kinetic energy and angular momentum of the body are given in Lamb [12]. These can be used to obtain the partial derivatives that appear in equation (2.5). Explicitly, we find

(∂E/∂J)_θ = Ω = φ̇ + ψ̇ (7.2)

and

(∂E/∂θ)_J = φ̇² θ ∆I. (7.3)

(See Jones [11] for a detailed derivation.) These lead to an alignment timescale a factor I_crust/I shorter than that of equation (2.13). This result is confirmed using the local torque formulation, where the alignment rate θ̇ is given by Eq. (7.4). In the realistic case where both crustal elasticity and core fluidity are taken into account, we can combine the above arguments as described by Smith and Dahlen [13]: we take the rigid result and put I → I_crust and ∆I → ∆I_d, to give Eq. (7.5). VIII. CONCLUSIONS. We have shown that the gravitational-wave damping time for wobble in realistic NSs has the same form as for rigid bodies, but with the replacement ∆I²/I_1 → ∆I_d²/I_crust.
This gives the alignment timescale of Eq. (8.1). For the Crab, taking ε_d ∼ 3 × 10^−9, this gives τ_θ ∼ 5 × 10^13 yr, much longer than the age of the universe. For an accreting NS with ε_d ∼ 10^−7 and ν_s ∼ 300 Hz, we estimate τ_θ ∼ 2 × 10^8 yr. Our basic conclusion, then, is that gravitational-wave backreaction is sufficiently weak that other sources of dissipation probably dominate. Unfortunately, even for the Earth the dissipation mechanisms are not well understood [7]. Early estimates of Chau and Henriksen [14], which considered dissipation within the neutron star crust, suggested that wobble would be damped in around 10^6 free precession periods, i.e. over a time interval of 10^6/(ε_d ν_s). A more recent study of Alpar and Sauls [15] argued that the dominant dissipation mechanism will be imperfect coupling between the crust and the superfluid core. They estimate that the free precession will be damped in (at most) 10^4 free precession periods. In contrast, according to equation (8.1), the gravitational-wave damping time is in excess of 10^8 (kHz/ν_s)³ free precession periods. On the basis of these estimates, it seems likely that internal damping will dominate over gravitational radiation reaction in all neutron stars of interest. Note, however, that while internal dissipation damps wobble for oblate deformations, we expect that internal dissipation causes the wobble angle to increase in the prolate (∆I_d < 0) case. A study of the gravitational wave detectability of realistic neutron stars undergoing free precession, including a discussion of other astrophysical mechanisms which might affect the evolution of the motion, will be presented elsewhere (Jones, Schutz and Andersson, in preparation).
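The competing damping channels can be compared directly in units of free-precession periods (a sketch using only the scalings quoted above; the 10^4-period internal-damping figure is the Alpar & Sauls upper estimate):

```python
def gw_damping_periods(nu_s_hz):
    # gw wobble damping time in free-precession periods,
    # from the text's scaling: > 1e8 * (kHz / nu_s)^3
    return 1e8 * (1000.0 / nu_s_hz) ** 3

internal_periods = 1e4   # Alpar & Sauls crust-core coupling estimate (at most)

for nu in (30.0, 300.0):  # roughly Crab-like and LMXB-like spin frequencies
    print(nu, gw_damping_periods(nu) / internal_periods)
```

Even at the fast, 300 Hz end, gravitational-wave damping is at least ∼10^5 times slower than the internal-dissipation estimate, which is the basic conclusion of this section.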
Modulation of Histone Deacetylases (HDACs) Expression in Patients with and without Systemic Lupus Erythematosus: Possible Drug Targets for Treatment. There is increasing evidence that epigenetic factors may play a role in the pathogenesis of Systemic Lupus Erythematosus (SLE). Both global and gene-specific methylation is known to occur in lupus patients, as well as changes in histone acetylation status. Histone acetylation is associated with active chromatin or activation of genes, whereas histone deacetylase (HDAC) activity is associated with silencing of genes. Therefore, HDACs have been targeted as potential therapeutic targets for a number of diseases, including lupus. The purpose of this study was to determine histone deacetylase (HDAC) expression in patients diagnosed with SLE compared to age-matched healthy controls. Quantitative real-time PCR expression levels of HDAC 1, HDAC 2 and HDAC 7 were investigated in peripheral blood mononuclear cells of African American and European American women. Our results showed that HDAC 1 expression is significantly (p < 0.0039) elevated in lupus patients compared to controls. HDAC 2 expression is also increased in lupus patients (p < 0.0427). However, HDAC 7 showed no significant difference (p < 0.4644) in expression in our SLE patients compared to their controls. Those lupus patients with an SLE disease activity index (SLEDAI) of 4 or greater showed lower expression of HDAC 1 (p < 0.0026) compared to those with modest disease and a SLEDAI of less than 4. However, lupus patients with a SLEDAI of 4 or greater showed increased expression of HDAC 2 (p < 0.053) when compared to those with a SLEDAI of less than 4. This observation was also noted for HDAC 7. Increased expression of HDAC 1 and 2 has been associated with induced kidney injury and induction of proinflammatory cytokines.
Introduction. The difficulties in designing an effective pharmacological therapy for Systemic Lupus Erythematosus (SLE) are due in part to the complexities of its pathophysiology. However, researchers have recently begun to look at the role chromatin modification plays in SLE. Chromatin modification is important in the regulation of genomic expression. One of the essential parts of chromatin structure is the histones. Histones are responsible for binding the nucleosome and provide the entry and exit sites to DNA. Like DNA, histones can also be subject to epigenetic events. A group of enzymes that have been shown to play a key role in histone modification are the histone deacetylases (HDACs). HDAC enzymes act on the amino-terminal tail of histones; they are able to regulate gene expression by modulating histone acetylation patterns [1]. There are 18 genes identified as HDACs, which can be further grouped into four classes based on their structural and functional capabilities. There has been increasing evidence that targeting certain classes of HDACs can provide therapeutic benefit to patients with SLE. For example, preclinical studies have shown that using the HDAC inhibitor (HDACi) Trichostatin A (TSA) can reduce anti-DNA autoantibody production.

HDAC 1, HDAC 2, and HDAC 7 mRNA expression analysis was conducted using a Bio-Rad IQ5 quantitative Real Time Polymerase Chain Reaction Detection System (BIO-RAD, Hercules, CA). GAPDH was used as a housekeeping gene and as an endogenous control. qRT-PCR conditions were as follows: 50 °C for 2 minutes, 95 °C for 10 minutes, then (95 °C for 10 seconds, 56 °C for 45 seconds, 72 °C for 30 seconds) × 30 cycles. Relative quantitations of HDAC 1, HDAC 2 and HDAC 7 mRNA expression were normalized to GAPDH and fold changes were calculated using the 2^−ΔΔCT method. Primers utilized for both the histone deacetylases and GAPDH are listed in Table 1.
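The 2^−ΔΔCT calculation is simple enough to state explicitly (a sketch with hypothetical Ct values, not the study's data; GAPDH plays the role of the reference gene, as described above):

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(case) - dCt(control)."""
    ddct = (ct_target_case - ct_ref_case) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values (illustrative only): target vs GAPDH reference
print(fold_change(24.0, 20.0, 26.0, 20.0))   # 4.0-fold higher in the case sample
```

Lower Ct means more template, so a case ΔCt two cycles below the control ΔCt corresponds to a 2² = 4-fold up-regulation.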
Such HDAC inhibition reduces anti-DNA autoantibody production [2]. In addition, suberoylanilide hydroxamic acid (SAHA), an HDAC inhibitor, in combination with TSA can reduce IL-6, IL-12, IFN-γ, and IL-10 production, leading to a decrease in the inflammatory response as well as decreases in proteinuria and glomerulonephritis in murine models [3,4]. Due to the ability of HDACi to reduce inflammation in murine models, researchers have postulated their use for patients to control inflammation such as that which occurs with SLE [5][6][7][8]. However, studies regarding HDAC expression levels in patients with SLE are lacking. Here we compare expression levels of HDAC 1, HDAC 2 and HDAC 7 among patients with SLE to their age, sex, and ethnicity matched controls.

Study populations: Patients participating in this study were part of the LUPUS study at the Brody School of Medicine at East Carolina University. This study consisted of a total of 224 participants. The participants represented in this study were ninety-six women diagnosed as having SLE based on SLE disease activity index (SLEDAI) scores and anti-dsDNA antibody analysis, and ninety-one controls that were age, sex, and ethnicity matched. No males were analyzed in the present study. Informed consent was obtained from all participants for blood samples to conduct our analysis. This project was granted IRB approval from both the US Food and Drug Administration and the East Carolina Brody School of Medicine.

Blood collection and isolation of PBMC: Blood samples were collected by venipuncture of the antecubital vein between 9:00 AM and 12:00 PM. In order to maintain a similar circadian pattern between participants with SLE and their matched controls, collections were conducted at the same time of day and on the same day of the week. Peripheral blood mononuclear cells were isolated from whole blood using PAXgene RNA Blood tubes at East Carolina University in Greenville, NC, placed on dry ice and stored at -80 °C until shipped to the National Center for Toxicological Research for analysis.

RNA isolation and quantitative real time PCR: RNA was extracted from peripheral blood mononuclear cells (PBMC) using a PAXgene RNA kit (QIAGEN, Valencia, CA). After extraction, all samples were tested for RNA integrity and concentration using a Bio-Rad Experion Automated Electrophoresis System (BIO-RAD, Hercules, CA). cDNA was created from the RNA extractions using a Clontech Advantage® RT-for-PCR Kit (Clontech, Mountain View, CA).

Statistical analysis: Statistical analyses were performed using GraphPad Prism Software Version 6.0 (San Diego, CA). A t-test was used to determine statistical significance; P < 0.05 was considered significant.

Results: In Figure 1 we compare mRNA expression of HDAC 1 from normal controls and lupus patients. Individual differences were noted among the patients. In addition, statistically significant differences in HDAC 1 mRNA expression were observed between lupus and non-lupus participants: HDAC 1 mRNA expression levels were significantly higher (p < 0.0039) in patients with SLE compared to controls. In Figure 2, we determined the mRNA expression levels of HDAC 2 from normal and lupus patients. Our analysis indicated that HDAC 2 expression levels were significantly higher (p < 0.0427) in our SLE patients compared to our controls. However, HDAC 7 mRNA expression levels among SLE and control patients were not significantly different (Figure 3 and Figure 4) (p < 0.4644). Our results provide evidence that histone deacetylases may be involved in the pathogenesis of SLE and that Class I HDACs should be further investigated as potential therapeutic targets. Furthermore, these results demonstrated that an increase in HDAC 2 (p < 0.053) and HDAC 7 (p < 0.0259) expression was observed in lupus patients with an SLE disease activity index (SLEDAI) of 4 or higher, when compared to patients with more modest disease (Figure 5 and Figure 6). However, a decrease in expression of HDAC 1 (p < 0.0026) was noted in lupus patients with higher SLEDAIs.

Discussion: Histone deacetylases (HDACs) are critical for the maintenance of gene and chromosome silencing. Furthermore, HDACs assist in chromatin modification and transcriptional regulation of an organism's genome. In the present study, we were interested in determining whether HDAC expression was altered in lupus patients as compared to non-lupus patients. Our results demonstrated that the mRNA expression levels of Class I HDACs among SLE patients compared to controls were significantly different. Specifically, HDAC 1 and 2 were significantly up-regulated in SLE patients compared to controls. However, HDAC 7, which is a member of the class II HDACs, did not show a significant difference in expression level between SLE and control patients. Of the histone deacetylases studied, HDAC 1 had significantly higher mRNA expression than HDAC 2 and 7. There is increasing evidence that over-expression of HDAC 1 is linked to various malignancies such as cancer and kidney damage [9]. In addition, studies have shown that an increase in HDAC 1 expression is linked to decreased survival rates [10,11]. This suggests that HDAC 1 expression could be a useful biomarker for disease progression in SLE patients. Furthermore, HDAC 1 may be a potential therapeutic target for pharmacological design. On the other hand, HDAC 2 showed a significant difference in expression between SLE and control patients; however, this difference was not as pronounced as for HDAC 1. Researchers have provided evidence that HDAC 2 expression levels can serve as a possible biomarker for survival, especially in cases of oral cancer [12]. However, in cancer models, HDAC 2 has been shown to play an anti-apoptotic role [13]. With one of the hallmarks of SLE being apoptotic complications, HDAC 2 targeting could prove a potential avenue for therapeutic analysis. This is further underscored by the fact that this study demonstrated an increase in expression of HDAC 2 in the SLE population. This suggests that HDAC 2 should be further investigated in SLE.

Acknowledgement: Parts of this work were funded by a grant to Dr. Lyn-Cook from the FDA Office of Women's Health.

(Figure legend: Although no significant difference was noted between lupus and non-lupus in expression of HDAC 7, lupus patients with a SLEDAI of 4 or greater showed an increased expression of HDAC 7 (p < 0.0259) when compared to those with a SLEDAI of less than 4.)
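The group comparison described in the Statistical analysis section can be sketched in a few lines (stdlib-only; the data values are invented for illustration, and the pooled-variance Student form is an assumption, since the paper does not state which t-test variant was used):

```python
import math

def two_sample_t(a, b):
    """Student's two-sample t statistic with pooled variance
    (equal-variance form; assumed, not stated in the paper)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1.0 / na + 1.0 / nb))

# Hypothetical fold-change values (illustrative only, not the study's data):
lupus   = [2.1, 3.4, 1.8, 2.9, 2.5]
control = [1.0, 1.2, 0.9, 1.1, 1.3]
t = two_sample_t(lupus, control)
print(t)   # a large positive t suggests up-regulation in the lupus group
```

The resulting t statistic would then be compared against the t distribution with n_a + n_b − 2 degrees of freedom at the study's p < 0.05 threshold.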
Comparative analysis for renal stereotactic body radiotherapy using Cyberknife, VMAT and proton therapy based treatment planning. Abstract. Purpose: We conducted this dosimetric analysis to evaluate the feasibility of a multi-center stereotactic body radiation therapy (SBRT) trial for renal cell carcinoma (RCC) using different SBRT platforms. Materials/methods: The computed tomography (CT) simulation images of 10 patients with unilateral RCC previously treated on a Phase 1 trial at Institution 1 were anonymized and shared with Institution 2 after IRB approval. Treatment plans were generated on five different platforms, aiming for a total dose of 48 Gy in three fractions. These platforms included: Cyberknife and volumetric modulated arc therapy (VMAT) at institution 1, and Cyberknife, VMAT, and pencil beam scanning (PBS) proton therapy at institution 2. Dose constraints were based on the approved Phase 1 trial. Results: Compared to Cyberknife, VMAT and PBS plans provided overall equivalent or superior coverage of the target volume, while limiting dose to the remaining kidney, contralateral kidney, liver, spinal cord, and bowel. Conclusion: This dosimetric study supports the feasibility of a multi-center trial for renal SBRT using PBS, VMAT and Cyberknife. | INTRODUCTION. With an incidence of 62,700 new cases in 2016, kidney and renal pelvic cancers account for around 4% of newly diagnosed cancers in the USA. 1 Renal cell carcinoma (RCC) is the predominant and most lethal histology, accounting for about 87% of these malignancies. 2,3 Historically, RCC has been labeled a "radio-resistant tumor" and surgical nephrectomy was considered the cornerstone of treatment for RCC. A gradual shift in RCC treatment modalities began in the 1990s with the introduction of laparoscopic nephrectomy, high-intensity focused ultrasound, cryoablation, radiofrequency ablation, and tyrosine kinase inhibitors, but radiation remained rarely used.
2 Two main factors contributed to the underutilization of radiotherapy in treating RCC: the high metastatic potential of the cancer, and an inability to safely deliver high-dose curative-intent radiation to the primary tumor due to the anatomic proximity of the kidneys to other radiosensitive structures such as small bowel. However, the successful use of both image-guided conventional radiotherapy and, more recently, stereotactic radiosurgery in the local treatment of extracranial and intracranial RCC metastases, respectively, challenged the perceived role of radiation therapy in the management of RCC. While previous literature suggested that RCC is radioresistant to small fraction sizes, 4 the higher dose per fraction delivered through stereotactic body radiation therapy (SBRT) can achieve promising rates of local control and acceptable toxicity. 5 SBRT allows for the accurate delivery of high-dose radiation to specific extracranial targets while potentially avoiding toxic doses to adjacent structures. The use of SBRT in the treatment of local RCC was first reported by Qian et al., 5 who achieved a local control rate of 93% at a mean follow-up of 12 months. 6 A few years later, we reported our experience of a Phase I trial of SBRT using the Cyberknife platform, emphasizing its safety and efficacy in non-surgical RCC treatment. 7 SBRT can also be delivered with other platforms, including volumetric modulated arc therapy (VMAT) or pencil beam scanning (PBS) proton therapy, with each field covering the target uniformly. Dosimetric differences between these platforms have not been well studied in RCC and remain a major barrier to the implementation of large multi-institutional trials. Therefore, we conducted this study to assess the dosimetric feasibility of using non-robotic platforms for delivering curative-intent renal SBRT as a precursor for a future multi-institutional trial.
2.A | Patient selection: An institutional review board-approved phase I dose-escalation trial of SBRT using Cyberknife for primary treatment of non-surgical patients with localized RCC (NCT00458484) was initiated at our facility (Institution 1) in June 2006. The primary tumor was deemed resectable by an experienced urologic oncologist, but patients were referred to this phase I trial due to underlying medical conditions prohibiting surgical excision, such as a low probability of tolerating general anesthesia, the surgery itself, or the postoperative recovery period. 7 At the time of the diagnostic biopsies, at least three fiducial markers were placed within and around the renal mass. 7 Within 1 week after fiducial insertion, a computed tomography (CT) simulation was acquired. Patients were treated to the primary tumor plus 0-3 mm margins with radiotherapy doses of 24, 32, 40, and 48 Gy in four fractions. Inclusion and exclusion criteria, radiation technique, dosimetric planning, and initial results were previously reported. 7 The institutional review board also approved the current dosimetric study. Among 19 patients with unilateral RCC treated according to the phase I trial with 48 Gy in four fractions, ten patients were randomly selected for this study. CT simulation images were then anonymized and shared with Institution 2. 2.B | Treatment planning and dosimetric variables: Using the anonymized CT images, treatment planning was performed on five different platforms with a prescription dose to the planning target volume (PTV) of 48 Gy in three fractions. 2.C | Statistical analysis: For each of the seven variables defined above (V_100%, V_14Gy, D_1cc-Bowel, D_1cc-Stomach, D_0.3cc-Cord, D_5%-Contralateral Kidney, V_17Gy-Liver), we calculated the mean and the standard deviation (SD) across all 10 patients for each treatment planning platform.
For statistical testing, V100% across the different platforms was considered paired because the plans were generated on the same CT images. We therefore performed a two-tailed paired t-test with a 95% confidence interval to compare the V100%, D0.3cc-Cord, and D1cc-Bowel of the Institution 1 Cyberknife plans to those of the other platforms and to assess target dose conformity. We used the Bonferroni correction to account for multiple comparisons. D1cc-Bowel and D0.3cc-Cord in each institution were represented in value plots. All statistical analysis was performed using Minitab® version 17.3.1 (Minitab Inc., State College, PA). Table 2 shows the mean and the SD of all seven variables.
3 | RESULTS
Tumor coverage was excellent while also sparing the ipsilateral kidney. The V100% was greater than or equal to 97.4% for all platforms, and the V14Gy ranged between 45.6% (VMAT, Institution 1) and 65.1% (Cyberknife, Institution 2). Mean V100% was lowest for Cyberknife at Institution 1. The D0.3cc-Cord constraint was satisfied for all five platforms [Fig. 2(a) and Table 2]. For several cases, the D1cc-Bowel constraint was not achieved [Fig. 2(b)]. The mean D1cc-Bowel satisfied the dose constraint only for VMAT at Institution 2, while the other platforms had slightly higher means, ranging from 1.76 Gy (Cyberknife, Institution 1) to 5.14 Gy (Cyberknife, Institution 2) above the dose constraint (Table 2). Table 3 shows the P-values and the 95% confidence intervals of the paired t-tests for V100%, D0.3cc-Cord, and D1cc-Bowel. Using the Bonferroni correction to account for multiple comparisons, the P-value for V100% was statisti-
The potential OAR toxicity associated with SBRT remains a limiting factor for adequate dose delivery. While VMAT at Institution 1 was associated with a higher D0.3cc-Cord, VMAT and PBS planning at Institution 2 resulted in a lower D0.3cc-Cord.
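The paired analysis described in the statistics section (two-tailed paired t-test on the same ten CT plans, with a Bonferroni-adjusted significance threshold) can be sketched as follows. The coverage numbers below are hypothetical, not the trial's data, and the comparison count is illustrative:

```python
import math

def paired_t_statistic(x, y):
    """t statistic for a two-tailed paired t-test (df = n - 1)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    sd_d = math.sqrt(sum((v - mean_d) ** 2 for v in d) / (n - 1))
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical V100% coverage (%) for the same 10 CTs on two platforms.
cyberknife = [97.4, 98.1, 97.9, 98.5, 97.6, 98.0, 97.8, 98.3, 97.5, 98.2]
vmat       = [99.0, 99.2, 98.8, 99.5, 99.1, 98.9, 99.3, 99.0, 98.7, 99.4]

t = paired_t_statistic(cyberknife, vmat)   # negative: first platform lower

# Bonferroni correction: with m pairwise comparisons against the reference
# platform, test each at alpha/m instead of alpha.
m = 4                       # e.g. reference plan vs. four other plans
alpha_adjusted = 0.05 / m   # each comparison tested at 0.0125
```

The t statistic is then compared against the Student-t critical value at the adjusted alpha (or a statistics library is used to obtain the exact P-value).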
With the OAR constraints used in our phase I trial and in this study, no grade 3 or 4 toxicities were reported.7 These constraints were developed initially for the Institution 1 protocol prior to the initiation of the phase I trial. Since then, the dose constraints of the International Radiosurgery Oncology Consortium for Kidney (IROCK) have been adopted internationally to help guide uniform practice.9 These constraints are likely conservative and were strictly met for all organs except D1cc-Bowel. In this feasibility study, no specific constraint optimization algorithm was used, and the PTV expansion can be relaxed in cases where bowel toxicity is a concern. Regardless, the D1cc-Bowel values showed better overall dose sparing.12 In the phase I protocol, image acquisition, target localization, and alignment correction were repeated during treatment delivery at intervals of 30-60 s.7 To limit respiratory motion, 4D-CT images can be acquired with abdominal compression, and plans could therefore incorporate an ITV. Contrary to gated radiotherapy, using an ITV would increase the volume of normal tissue irradiated. Given the large kidney size (3 × 6 × 12 cm³), the relatively small respiratory-induced motion may mitigate the risk of non-gated treatment delivery,13 especially since near-complete sparing of a substantial volume of the kidney is usually associated with compensatory preservation of renal function.14 Compared to Cyberknife, VMAT and PBS have the advantage of decreased treatment times,15 which decreases the risk of intrafraction movement, increases the number of patients treated per day, and provides more comfort to the patient.15 Finally, differences in planning techniques between institutions could have been a confounder, and thus this dosimetric study should be interpreted as an initial proof of concept that needs further support in multi-institutional settings.
CONFLICT OF INTEREST
None.
TABLE 3. Paired t-test for V100%, D0.3cc-Cord, and D1cc-Bowel.
SiC MOSFETs for Offshore Wind Applications
Space and weight are critical factors for offshore wind applications during the construction, operation, and maintenance phases. The superior material properties of silicon carbide enable the development of power devices capable of switching fast as well as handling high power. This paper therefore performs a quantitative evaluation of the total converter loss and efficiency at different switching frequencies in order to observe the potential performance gains that SiC MOSFETs can bring over Si IGBTs in such applications. When simulating the detailed converter losses in a three-phase, two-level topology, the turn-on and turn-off switching energy losses obtained from laboratory measurements and the conduction losses acquired from the datasheet are used as a look-up table input. Additionally, the simulated results are compared with analytical and numerical solutions. In conclusion, this analysis gives an insight into how the SiC MOSFET outperforms the Si IGBT over the whole switching frequency range, with the advantages becoming more visible at higher frequencies.
Introduction
The potential power produced from wind is directly proportional to the cube of the wind speed [1]. Compared to land, water has less surface roughness, so the average wind speed is higher over open water, and consequently the power potential is higher. Moreover, offshore wind is stronger and steadier, whereas onshore wind is disrupted by hills and buildings, making it more turbulent. Besides, offshore wind turbines face fewer environmental constraints than onshore turbines. These are some of the major driving forces for harvesting offshore wind energy [2]. For a long time, silicon (Si)-based power devices have dominated power electronics and power system applications.
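The cube-law relation quoted at the start of the Introduction can be made concrete with the standard wind-power expression P = ½ ρ A v³ Cp. The rotor diameter, air density, and power coefficient below are illustrative assumptions, not values from the paper:

```python
import math

def wind_power(v, rotor_diameter, rho=1.225, cp=0.45):
    """Extracted power (W): P = 0.5 * rho * A * v^3 * Cp,
    with swept area A = pi * (D/2)^2. rho and cp are assumed values."""
    area = math.pi * (rotor_diameter / 2) ** 2
    return 0.5 * rho * area * v ** 3 * cp

# Cube law: doubling the wind speed gives 2^3 = 8x the power for the same rotor.
p_low  = wind_power(6.0, 120.0)    # hypothetical 120 m rotor at 6 m/s
p_high = wind_power(12.0, 120.0)   # same rotor at 12 m/s
```

This is why even a modest increase in average wind speed over open water translates into a disproportionately large gain in energy yield.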
However, these devices are confronting some fundamental limits in performance, such as breakdown voltage, operating temperature, and frequency, due to the inherent limitations of the material properties. Recent literature on silicon carbide (SiC) has demonstrated the significant potential of SiC devices to fulfil such demands and requirements. Table 1 shows a comparison of the material properties of 4H-SiC with Si [3]. The column "factor" in Table 1 indicates the ratio of 4H-SiC versus Si, where most of the material properties of SiC are superior to those of Si. For example, the bandgap energy is about 3 times higher in SiC than in Si, which translates into switching devices with a higher operating temperature; the breakdown field intensity is about 10 times higher in SiC, which can lead to devices with a higher breakdown voltage and still the same conduction loss; higher thermal conductivity means faster heat dissipation, which results in higher power density; and likewise, higher drift velocity enables faster transportation of carriers, so that faster switching of devices can be achieved. In offshore wind applications, space and weight savings are of paramount importance, as these factors directly influence the cost and size of the mechanical design of the tower/nacelle. The outstanding material properties of SiC can fulfil this demand because a power electronic conversion system with these devices will be compact, efficient, and thermally stable, and accordingly can be easily mounted in the nacelle of a wind turbine. At present, most installed offshore wind turbines are based on relatively low voltage, < 690 V [4,13]. On the other hand, the share of wind power is likely to increase in the future as per the forecast [5]; therefore, a shift towards higher voltage is necessary to increase the power density of the system. For instance, for the same power, a higher voltage means a smaller current, which reduces both cable size and power loss.
In addition, high voltage SiC devices lead to simpler converter topologies, resulting in a high power density of the system. SiC is recognised as a potential technology for wind power applications in [6,7]. Reference [8] compared the performance of a SiC MOSFET with a Si IGBT, where the IGBT was of the non-punch-through (NPT) type, optimised for the conduction loss. Similarly, in [9], a performance comparison between a SiC MOSFET and a punch-through (PT) type Si IGBT (optimised for the switching loss) was investigated. Note that an IGBT is a bipolar component; hence, the carrier lifetime can be optimised as per the application, i.e., a trade-off between switching loss (dv/dt) and conduction loss can be made. This paper covers the detailed loss comparison at varying load current as an extension to the work presented in [9]. Most importantly, the loss in a back-to-back converter for offshore wind applications is simulated using the experimentally measured data as a look-up table input in Section 4. Further, the simulation results are verified against numerical and analytical calculations. Finally, Section 5 summarises the major conclusions.
Methodology and measurement setup
A standard double pulse test methodology is used for evaluating the stresses, such as dv/dt, di/dt, and switching energy loss, in the device under test (DUT). An equivalent circuit with a hard-switched arrangement is shown in Fig. 1 a), where the current paths during the turn-on and turn-off processes are routed by blue and green dotted lines. A low-inductance hardware setup is displayed in Fig. 1 b). The switching current (I_d) is measured by a shunt resistor, SSDN-414-01 (400 MHz, 10 mΩ) from T&M Research. In a similar manner, the switching voltage (V_ds) is measured by a high voltage differential probe (THDPO200, 200 MHz). The chosen measuring equipment proved good enough to track the transient waveforms of the selected devices [10].
An isolated gate driver with an adjustable output voltage is used for driving the SiC MOSFET, where the gate voltage is set to 20 V for turn-on and −5 V for turn-off. The same gate driver is used for driving the Si IGBT with a small modification to achieve the required gate voltage (±15 V). Table 2 shows the key electrical parameters of the SiC MOSFET versus the Si IGBT taken at 25 °C and 125 °C from the manufacturers' datasheets [11,12]. R_ds/R_ce is the on-state resistance of the MOSFET/IGBT, V_CE0 is the on-state knee voltage, and R_d is the resistance of the free-wheeling diode. The column "difference" lists the percentage increase (indicated by a + sign) or decrease (by a − sign) of the corresponding on-state parameters from 25 °C to 125 °C. An inductor with a single-layer winding is used as the load in order to ensure minimum stray capacitance (measured to be 10 pF using an impedance analyser, E4990) so that the true switching characteristics of the DUT are reflected. Table 2. Key electrical parameters of SiC MOSFET versus Si IGBT module [11,12]. Table 3. A summary of measurements of DUTs at V_ds = 600 V, I_d = 300 A, and T_j = 25 °C. Table 3 summarises a sample measurement of the SiC MOSFET versus the Si IGBT at V_ds/V_ce of 600 V, I_d/I_c of 300 A, and T_j of 25 °C. Figs. 2 a)-d) exemplify the turn-on and turn-off transients of the SiC MOSFET and the Si IGBT for the summarised cases. For the selected values of gate resistance, the turn-off (E_off), turn-on (E_on), and total (E_tot) losses are plotted against varying load current in Figs. 2 e) and f), and will be used as a look-up table input for the simulation of converter losses in Section 4.
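The switching energy values extracted from the double pulse test are the time integral of the instantaneous power v(t)·i(t) over the switching transient. A minimal sketch, using idealised linear voltage/current ramps in place of the real scope captures (the 600 V / 300 A / 100 ns figures are only illustrative):

```python
def switching_energy(t, v, i):
    """E = integral of v(t)*i(t) dt, trapezoidal rule over sampled waveforms."""
    p = [vk * ik for vk, ik in zip(v, i)]
    return sum(0.5 * (p[k] + p[k + 1]) * (t[k + 1] - t[k])
               for k in range(len(t) - 1))

# Idealised turn-on: voltage falls 600 V -> 0 while current rises 0 -> 300 A
# over 100 ns (linear crossover; real waveforms come from the scope capture).
n = 1001
t = [k * 100e-9 / (n - 1) for k in range(n)]
v = [600.0 * (1 - k / (n - 1)) for k in range(n)]
i = [300.0 * (k / (n - 1)) for k in range(n)]
e_on = switching_energy(t, v, i)   # ~3 mJ for these idealised ramps
```

For the triangular power pulse assumed here the closed form is E = V·I·T/6 = 600·300·100 ns/6 = 3 mJ, which the numerical integral reproduces; measured waveforms simply replace the synthetic arrays.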
Selection of the converter topology and simulation of losses
The most popular and commercialised voltage source converter (VSC) topologies in offshore wind applications are mainly two: the first is the three-phase, two-level type and the second is the three-phase, three-level neutral point diode clamped (NPC) type [4,13]. The former topology is employed for low voltage and the latter for high voltage applications. These converters are mounted in a back-to-back configuration such that both share the same dc-link, as illustrated in Fig. 3. One converter acts as a rectifier (connected on the generator side) and the other acts as an inverter (connected on the grid side). When IGBTs/MOSFETs with anti-parallel diodes are used, the converter allows bidirectional flow of power. In this paper, the three-phase, two-level topology shown in Fig. 3 is chosen, where the main purpose is to study the power losses in the back-to-back converter and to compare all-Si and all-SiC devices. Using MATLAB Simulink, the switching loss obtained from the laboratory measurements and the conduction loss from the datasheet are used as look-up tables or polynomial functions based on curve fitting for simulating the total converter loss. The loss is simulated for space vector PWM (SVPWM) [14], a dc-link voltage (V_dc) of 760 V, a modulation index (m) of 1, and a load current (I_orms) of 300 A. The converter output voltage (V_orms) is about 460 V (√3/(2·√2)·V_dc·m) and the power rating is approximately 240 kW.
Figure 3. Schematic diagram of a three-phase, two-level back-to-back converter configuration for low voltage offshore wind applications. The current in one of the top switches and diodes of the grid-side inverter is denoted by I_T1 and I_D1, respectively.
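The quoted output voltage and power rating follow directly from the dc-link voltage and modulation index; a quick check of the √3/(2·√2) relation:

```python
import math

V_dc = 760.0     # dc-link voltage (V), from the simulation setup
m = 1.0          # modulation index (SVPWM)
I_orms = 300.0   # rms load current (A)

# Line-to-line rms output voltage of the two-level SVPWM inverter.
V_orms = math.sqrt(3) / (2 * math.sqrt(2)) * V_dc * m   # ~465 V

# Three-phase power rating at unity displacement factor (assumed here).
S = math.sqrt(3) * V_orms * I_orms                       # ~242 kW
```

This reproduces the paper's rounded figures of about 460 V and approximately 240 kW.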
Numerical verification of the simulation results
In the numerical verification method, the turn-on, turn-off, and total switching power losses of IGBT T1 (P_sw-on-T1, P_sw-off-T1, and P_sw-tot-T1, respectively) are computed by counting the number of switching events during a fundamental cycle of the output. Table 4 lists the readings of the currents and the corresponding energy losses from Figs. 5 a) and b), respectively; note that Fig. 5 b) is derived from Fig. 2. The sum of E_off is 38.19 mJ and P_sw-off-T1 is 2.90 W, and thus P_sw-tot-T1 is 13.03 W, as listed in Table 5 in the row named "Numerical". As the percentage differences between the numerically calculated and simulated losses, with reference to the numerical losses, are below 4 %, as indicated by the row "Error" in Table 5, the simulation results have reasonable accuracy.
Table 4. Readings of currents and energy losses (refer to Figs. 5 a and b).
Analytical verification of the simulation results
The switching loss of an IGBT/MOSFET can be estimated using analytical approximations as long as the energy loss during switching is linearly dependent on the collector current, as given by Equation 1 [15]. Compared to the simulated results, as presented in Table 5, the error was found to be in the range of 5 % - 10 %. Moreover, considering the sinusoidal dependency of the duty cycles versus time, the on-state power losses of the IGBT (P_con-tr-T1) and the diode (P_con-diode-D1) can be calculated using Equation 2 and Equation 3, respectively [16,17]. In these equations, where ± is present, the upper sign applies to the inverter mode (motor operation) and the lower sign to the rectifier mode (generator operation). For a MOSFET, the first term in Equation 2 is zero because it does not possess a knee voltage. For clarity, the symbols used in Equations 1-3 are listed in Table 6.
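Equations 2 and 3 are not reproduced in this excerpt; the sketch below assumes the common textbook form of the sinusoidal-PWM conduction-loss expressions (a knee-voltage term plus an ohmic term, each carrying a ± m·cosφ factor), consistent with the sign convention described above. The device values here are purely illustrative, not the datasheet numbers from Table 2:

```python
import math

def conduction_loss(V0, R, I_peak, m, cos_phi, device="transistor", mode="inverter"):
    """Average conduction loss of one switch or diode carrying a sinusoidal
    output current of peak I_peak under sinusoidal PWM duty cycles.

    Assumed textbook form (upper sign for a transistor in inverter mode):
      P = V0*I*(1/(2*pi) +/- m*cos_phi/8) + R*I^2*(1/8 +/- m*cos_phi/(3*pi))
    The sign flips between transistor and diode, and again between inverter
    and rectifier mode. For a MOSFET set V0 = 0 (no knee voltage).
    """
    sign = 1.0 if (device == "transistor") == (mode == "inverter") else -1.0
    p_knee  = V0 * I_peak * (1 / (2 * math.pi) + sign * m * cos_phi / 8)
    p_ohmic = R * I_peak ** 2 * (1 / 8 + sign * m * cos_phi / (3 * math.pi))
    return p_knee + p_ohmic

I_peak = math.sqrt(2) * 300.0   # peak of a 300 A rms sinusoid
# Illustrative on-state values (hypothetical, not from the paper):
p_igbt   = conduction_loss(V0=0.9, R=2.0e-3, I_peak=I_peak, m=1.0, cos_phi=0.9)
p_mosfet = conduction_loss(V0=0.0, R=2.0e-3, I_peak=I_peak, m=1.0, cos_phi=0.9)
```

With equal on-resistances, the IGBT's knee-voltage term is exactly the extra loss the MOSFET avoids at this operating point.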
The differences between the analytically calculated and simulated conduction losses, relative to the analytical values, are found to be below 3 %.
Evaluation of inverter power loss at different switching frequencies
The detailed loss breakdown at various switching frequencies (1 kHz to 50 kHz) is shown in Fig. 6. The chosen IGBT is a PT type, optimised for fast switching over the conduction loss. The legends in the bar chart are: turn-on switching loss (P_sw-on), turn-off switching loss (P_sw-off), diode recovery loss (P_rec), conduction loss in a transistor (P_con-tr), and conduction loss in a diode (P_con-diode). The SiC MOSFET helps to reduce the switching loss, which is the dominant part of the total loss in an IGBT inverter, particularly P_sw-on. An inverter with a SiC MOSFET can switch at a higher frequency than one with a Si IGBT for almost the same total power loss: an example is indicated in the bar chart, where the frequency is about 6 times higher in SiC than in Si for the same total inverter power loss of about 4 kW. Simulation results show that the conduction loss in the Si IGBT inverter at 25 °C is higher by a factor of 2 than in the SiC MOSFET inverter. Furthermore, the results reveal that the total conduction loss (P_con) is approximately equal to the total switching loss (P_sw) at about 15 kHz and 25 kHz for the all-Si and all-SiC inverters, respectively; i.e., in the low frequency region the conduction loss is the dominant part of the total inverter loss. It should be pointed out that the turn-off power losses of the selected devices are almost equal because the PT IGBT switches off as fast as the SiC MOSFET. The all-Si inverter has approximately 3.3 times higher P_sw than the all-SiC inverter, the major share being the combined loss of P_sw-on and P_rec.
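The frequency dependence described above (conduction loss essentially flat, switching loss linear in f_sw) can be sketched with illustrative numbers. The crossover where P_sw equals P_con is then simply P_con divided by the switching loss per kHz; the 1.0 kW and 40 W/kHz figures below are hypothetical, chosen only so that the crossover lands at the 25 kHz the paper reports for the all-SiC case:

```python
# Conduction loss is frequency-independent; switching loss scales linearly
# with f_sw because each cycle dissipates a fixed energy E_on + E_off + E_rec.

def total_loss(f_khz, p_con_w=1000.0, p_sw_per_khz_w=40.0):
    """Total inverter loss (W) vs. switching frequency (kHz) under the
    linear model P_tot = P_con + f_sw * (E_tot per cycle)."""
    return p_con_w + p_sw_per_khz_w * f_khz

# Crossover frequency where P_sw == P_con:
f_cross = 1000.0 / 40.0   # 25 kHz with these illustrative numbers
```

Below f_cross conduction dominates the total loss, above it switching dominates, which is exactly the regime distinction drawn in the text.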
Therefore, in order to use the Si IGBT at higher switching frequencies, a practical solution would be to use a SiC diode as the anti-parallel diode instead of a Si diode (a hybrid solution), as this leads to a reduction in P_rec and, subsequently, in its influence on P_sw-on. Nonetheless, the SiC MOSFET is a better solution than the PT IGBT over the full range of switching frequencies, unlike the NPT IGBT discussed in previous work, where the losses were comparable at lower frequencies (≤ 3 kHz) [8].
Evaluation of back-to-back converter efficiency at different switching frequencies
A comparison of the total converter efficiency of a back-to-back converter using all-SiC devices with one using all-Si devices is depicted in Fig. 7. The converter with the SiC MOSFET shows 0.86 % and 5.04 % higher efficiency at 1 kHz and 50 kHz, respectively, over its Si IGBT counterpart, as marked in Fig. 7. The simulated switching losses of the back-to-back converters are equal, but the conduction losses are not. For instance, in the rectifier mode, the simulated P_con-tr is lower and P_con-diode is higher by a factor of 10 compared with those in the inverter mode. Hence, the SiC MOSFET offers lower on-state and switching losses than the Si IGBT. Moreover, the MOSFET structure possesses an intrinsic diode whose switching performance is almost like that of a SiC Schottky diode, and thereby the need for an additional anti-parallel diode can be eliminated; there is no such possibility in the IGBT structure. The VSC with high voltage SiC MOSFETs (when available in the ≈ 10 kV range) will be the most attractive candidate for high voltage direct current connections to offshore wind farms, as efficiency becomes more important. The comparison of SiC MOSFET and Si IGBT efficiencies in a three-phase back-to-back converter illustrates that, at all switching frequencies, the higher efficiency SiC MOSFETs provide a performance advantage over their Si IGBT counterparts.
At switching frequencies of 1 kHz, 3 kHz (practical in today's offshore applications), and 50 kHz, the converter with the SiC MOSFET shows 0.86 %, 1.05 %, and 5.04 % higher efficiency than with the Si IGBT.
Conclusion
In conclusion, the simulation revealed that the solution with the SiC MOSFET results in lower losses than that with the Si IGBT over the whole switching frequency range, the advantages of SiC being more pronounced at higher frequencies. For instance, the simulation showed that the back-to-back converter with the SiC MOSFET is 0.86 % and 5.04 % more efficient at 1 kHz and 50 kHz, respectively, than with the Si IGBT. Furthermore, it is also illustrated that, for the same output power, the inverter switching frequency can be increased by approximately 6 times with the SiC MOSFET compared with the Si IGBT for a similar total power loss. Besides, the conduction loss in the Si-based converter is 2 times and 1.56 times higher than in the SiC-based converter at 25 °C and 125 °C, respectively. Thus, the reduction in loss can be utilised in a number of ways to optimise the circuit design, such as increasing efficiency or reducing the cooling requirement. Especially in a grid-connected offshore wind system, increasing the operating frequency allows the size of passive components, namely the filter and transformer, to be reduced, resulting in a higher power density of the system, which is a critical factor.
Impact of clerkship attachments on students' attitude toward pharmaceutical care in Ethiopia
Objective: The study objective is to investigate the impact of mandatory clinical clerkship courses on 5th-year pharmacy students' attitudes and perceived barriers toward providing pharmaceutical care (PC).
Methods: A cross-sectional survey was conducted among 5th-year pharmacy students undertaking mandatory clinical clerkship at the University of Gondar, Ethiopia. A pharmaceutical care attitudes survey (PCAS) questionnaire was used to assess attitudes (14 items), commonly identified drug-related problems during clerkships (1 item), and perceived barriers (12 items) toward the provision of PC. Statistical analysis was conducted on the retrieved data.
Results: Of the total of 69 clerkship students, 65 participated and completed the survey (94.2% response rate). Overall, 74.45% of participants expressed a positive attitude toward PC provision. Almost all respondents agreed that the primary responsibility of pharmacists in the healthcare setting is to prevent and solve medication-related problems (98.5%), that the practice of PC is valuable (89.3%), and that the PC movement will improve patient health (95.4%). Unnecessary drug therapy (43%), drug–drug interactions (33%), and non-adherence to medications (33%) were the most common drug-related problems identified in the wards. Highly perceived barriers to PC provision included the lack of a workplace for counseling in the pharmacy (75.4%), a poor image of the pharmacist's role in the wards (67.7%), and inadequate technology in the pharmacy (64.6%). Lack of access to a patient's medical record in the pharmacy had a significant association (P<0.05) with PC practice, performance of PC during clerkship, provision of PC as clinical pharmacists, and Ethiopian pharmacists benefiting from PC.
Conclusion: Ethiopian clinical pharmacy students have a good attitude toward PC.
Efforts should be targeted toward reducing these drug therapy issues and aiding the integration of PC provision with pharmacy practice.
Introduction
The roles of the pharmacist in clinical pharmacy practice have widened from providing traditional compounding and dispensing services to patient-oriented care, such as the optimization of medication therapy and the promotion of health, wellness, and disease prevention through pharmaceutical care (PC).1 PC is an evidence-based practice that ensures the rational use of drug therapy for diseases and the prevention of drug-related problems.2 Several studies have shown that PC services can promote optimal drug therapy, save lives, and enhance a patient's quality of life.3,4 Clinical pharmacy practice is well recognized and practiced in developed countries such as the United States, Australia, Brazil, and many European countries.5 Developing countries such as India, Pakistan, and countries in the Middle East have changed their curricula to emphasize the need to provide pharmacists with more clinical knowledge in addition to industrial knowledge.6 In Ethiopia, a new clinical pharmacy program was established in 2008 with the objective of training patient-centered pharmacy practitioners by extending the 4-year undergraduate pharmacy program to a 5-year clinical pharmacy program with clerkship.7 In March 2009, the School of Pharmacy of Jimma University launched the nation's first graduate program for training both clinical pharmacy practitioners and faculty members for the new undergraduate clinical pharmacy program, as part of the National Harmonized Modular Curriculum program for the Bachelor of Pharmacy degree.8,9 Subsequently, the University of Gondar School of Pharmacy (UoG-SP) adopted the 5-year clinical pharmacy program (10 semesters, 300 credit hours), comprising 4 years of academic study and 1 year of clinical clerkship.
These clerkship courses are conducted in various departments for a total of 43 credit hours and include ambulatory care (7 credit hours), pediatrics (7 credit hours), psychiatry (3 credit hours), drug information service (3 credit hours), hospital pharmacy (5 credit hours), internal medicine (7 credit hours), surgery (3 credit hours), community pharmacy (5 credit hours), and gynecology, obstetrics, and family planning (3 credit hours). The clinical clerkship starts only after the 4th-year students take elective courses in areas such as diabetes, HIV/AIDS, and pharmaceutical manufacturing, with ongoing continuous assessment combined with 6 months of academic research. The main focus of these clerkships is to teach student pharmacists about PC: to ensure the optimal use of medicines by patients, to identify and resolve drug-related problems through patient information retrieval and assessment, to develop patient-specific pharmacotherapy care plans, to provide patient education and training, and to collaborate with other healthcare teams in various wards. Clinical clerkship involves active experiential rotations of pharmacy students through various clinical wards and provides the opportunity to assist in PC services at various pharmacy practice sites under the supervision of expert preceptors, guiding them to build patient-pharmacist interactions and to gain positive working experience with interdisciplinary healthcare teams in the clinical decision-making process. To date, there have been no studies evaluating the attitudes of clinical pharmacy students undertaking mandatory clinical clerkship courses in Ethiopia. Hence, this study determined the impact of the mandatory clinical clerkship courses on 5th-year pharmacy students' attitudes and perceived barriers toward providing PC.
Methods
This cross-sectional survey was conducted at UoG-SP.
All 5th-year clinical pharmacy students who underwent the 1-year mandatory clinical clerkship courses at UoG-SP were approached to participate in the study. Ethical approval was obtained from the institutional review board of UoG-SP.
Assessment tool
A self-administered survey was used in the study. The investigators designed the study survey based on the standard pharmaceutical care attitudes survey (PCAS). The PCAS is a validated instrument that measures students' attitudes toward PC. It was developed in the United States and has been used in previous studies conducted in Nigeria, the Kingdom of Saudi Arabia, Qatar, Pakistan, and the United States.[10-16] The draft survey was distributed to ten faculty members at UoG-SP to assess its readability and content validity, and was further pretested among four randomly selected pharmacy students at UoG-SP for clarity, relevance, and acceptability. Refinements were made as required to facilitate better comprehension and to organize the questions before distributing the final survey to the study population. The 5th-year students who had finished the mandatory clinical clerkship and completed the course were requested to complete the questionnaire. No financial incentives were provided to the study participants, who were encouraged to participate voluntarily.
Content of the study tool
The final structured survey questionnaire was composed of four sections with a total of 32 items, comprising pharmacy students' sociodemographic characteristics (5 items), pharmacy students' attitudes toward PC based on the PCAS survey (14 items), commonly identified drug-related problems during clerkships (1 item), and pharmacy students' perceived barriers toward the provision of PC (12 items). Sociodemographic characteristics included sex, age, religion, marital status, and social habits.
To assess the students' attitudes toward PC, the 14-item questionnaire (PCAS) with a 5-point Likert scale ranging from "strongly agree = 5" to "strongly disagree = 1" was utilized. The PCAS scores of each statement (range 1-15) were summed. Scores greater than nine were interpreted as a positive attitude, eight as neutral, and 1-7 as a negative attitude.17 The Likert scale measured the extent of student agreement with statements regarding PC. These statements were adopted from previous studies using the PCAS survey.[10-16] Eleven were positively worded, two were negatively worded, and one was related to motivation. The two negative items (6 and 13) were reverse scored during the analysis so that higher scores reflected more positive attitudes toward PC. One question related to the practicality of commonly identified drug-related problems observed in the wards and asked participants to select one to three options out of ten. Ten items were included to identify the perceived barriers that would prevent clerkship students from providing PC to patients, by means of a 3-point Likert scale (high, medium, and low). The Likert scale indicated the extent to which each of the listed barriers would impede the provision of PC by the students in the future. The barriers included inadequate drug information resources, lack of access to patient medical records in the pharmacy, lack of therapeutics knowledge, lack of understanding of PC, inadequate training in PC, time constraints, and other barriers. Data from the completed surveys were first entered in Microsoft Excel 2007 and then analyzed using the Statistical Package for the Social Sciences, Version 22 (Chicago, IL, USA). Sociodemographic characteristics were summarized with descriptive statistics and inferential analysis.
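The reverse scoring of the negatively worded items described above can be sketched as follows. The function and the example respondent are illustrative (the paper does not publish its scoring code); on a 5-point scale, a raw response r on a reversed item contributes 6 − r, so strongly agreeing with a negative statement counts as the least favourable response:

```python
def score_pcas(responses, reverse_items=(6, 13), scale_max=5):
    """Sum Likert responses, reverse-coding negatively worded items.

    `responses` maps item number (1-14) -> raw score (1-scale_max).
    Reversed items contribute (scale_max + 1 - raw) so that higher totals
    always reflect a more positive attitude toward PC.
    """
    total = 0
    for item, raw in responses.items():
        total += (scale_max + 1 - raw) if item in reverse_items else raw
    return total

# Hypothetical respondent: agrees (4) with every positively worded item,
# but strongly agrees (5) with the two negative items 6 and 13.
answers = {i: 4 for i in range(1, 15)}
answers[6] = 5
answers[13] = 5
total = score_pcas(answers)   # 12*4 + 2*(6-5) = 50
```

Without reverse coding this respondent would score 56 and look uniformly positive; the corrected total of 50 properly penalises agreement with the negative statements.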
The mean, standard deviation, median, interquartile range, and percentage were computed for all PCAS statements. Correlations between barrier statements were determined using multiple logistic regression. In all statistical analyses, a P-value of <0.05 was considered statistically significant.
Results
A total of 69 clerkship students were approached, of whom 65 agreed to participate and completed the survey. The overall response rate was 94.2%.
Students' attitude toward PC
The mean ± standard deviation of the PCAS scores representing pharmacy students' attitudes toward providing PC are tabulated in Table 1. Overall, 74.45% of clerkship students had a very positive attitude toward PC provision. Almost all respondents agreed that the primary responsibility of pharmacists in the healthcare setting is to prevent and solve medication-related problems (98.5%), that the practice of PC is valuable (89.3%), and that the PC movement will improve patient health (95.4%). Further, 73.8% of the participants agreed that the PC movement will benefit pharmacists in Ethiopia. However, 80% of respondents believed that the provision of PC takes too much time and effort. Of the 14 PCAS statements, items 6 and 13 were reverse scored, and only 3.1% and 10.8% of clerkship students strongly disagreed with these statements, respectively.
Most commonly identified drug-related problems
The drug-related problems most commonly observed by the students during their clerkship in the wards included unnecessary drug therapy (43%), drug-drug interactions (33%), non-adherence to medications (33%), additional need for drug therapy (15%), and non-compliance to medications (17%) (Figure 1).
Students' perceived barriers to PC provision Barriers that students perceived as highly preventing them from providing PC included lack of a workplace for counseling in the pharmacy (75.4%), a poor image of the pharmacist's role in wards (67.7%), inadequate technology in the pharmacy (64.6%), inadequate drug information resources in the pharmacy (53.8%), lack of access to patient medical records in the pharmacy (50.8%), and inadequate training in PC (50.8%) (Table 2). The P-values for each barrier statement are summarized in Table 3. A significant positive correlation was noted between the barriers. Lack of access to patient medical records in the pharmacy was found to have a significant association with lack of therapeutic knowledge (0.000), lack of understanding of PC (0.002), inadequate technology (0.000), and a poor image of pharmacy (0.0001). In addition, significant associations were found between lack of therapeutic knowledge and the statements lack of access to medical records (0.000), lack of understanding of PC (0.001), inadequate technology (0.000), a poor image of the pharmacist's role in wards (0.000), and inability to deal with a different sex (0.004) (Table 3). Discussion This study is the first to investigate several aspects of the attitudes and barriers experienced by Ethiopian pharmacy students with respect to PC provision during clerkships. It showed that the clerkship courses taught to final-year pharmacy students in Ethiopia made their attitudes toward PC highly positive. PC provision in the medical wards is a multifaceted process that involves identifying, preventing, and resolving drug therapy problems. Some of the most common types of drug therapy problems observed by the pharmacy students during the course of the clerkship were unnecessary drug therapy (43%), followed by drug-drug interactions (33%) and non-adherence (33%).
The most common barrier highlighted by the clerkship students in our study as impeding the provision of PC was the lack of space for counseling in the pharmacy (75.4%). The percentage of students citing this barrier was much higher than that observed in a study from Qatar (44%). 14 Some of the reasons for this perceived barrier might be a lack of infrastructure, smaller facilities, and patient overload. Another perceived barrier preventing the provision of PC in our study was the poor image of the pharmacist in Ethiopian hospitals. This is unfortunate, since several studies have demonstrated the effectiveness of pharmacist-delivered PC services in improving clinical, humanistic, and economic outcomes. 17,19 The development of PC necessitates a positive attitude among the public. The students also identified inadequate technology in pharmacies as an impediment to delivering PC. This finding is consistent with previous studies conducted in community pharmacies in the People's Republic of China and Northern Ireland. 20,21 The study confirmed that Ethiopian pharmacies have limited access to electronic resources and limited resources in general. The implementation of PC in Ethiopia will require the availability of technological support in pharmacies along with the resources that facilitate patient care, such as electronic medical records, electronic decision support, and computerized patient-order entry. The data from our study suggest that participants who had more practical experience had a more positive attitude toward PC. It is also possible that, since this was the first batch of pharmacy students undertaking mandatory clinical clerkships in Ethiopia, the general attitude toward providing PC was positive. More research is needed to evaluate how the knowledge acquired during clerkship training translates into actual practice in Ethiopia once the pharmacy students graduate.
As clinical pharmacy is in its nascent stages in Ethiopia, PC is not yet offered at pharmacy practice sites. If pharmacy students are exposed to high-level training during their clerkships at experiential training sites, they might be more likely to assume an active role in offering PC during their professional practice. Recognition of the value of pharmacy is paramount in preventing and managing drug therapy problems. Students in our study showed good skills in identifying and managing drug therapy problems. The clerkship experience will be helpful to them in applying their theoretical skills to providing patient-centered care. These findings are consistent with a study conducted by Rovers et al 22 that showed that experiential training helped clerkship students identify drug therapy problems when using a guided interview process to interview patients. Although this study showed a high degree of positive attitude among the final-year students toward the implementation of PC, it is important for these same students, upon graduation, to want to implement PC services in hospitals and other practice sites. This may help to boost their professional image in the workplace and improve the care delivery process in hospitals. Engaging clinical pharmacists in the entire patient care process, from initial assessment through documentation to follow-up evaluation in various patient wards, can make a significant difference in improving patient care. Clinical pharmacists can aid in designing and developing PC through individual patient care plans and treatment interventions. Future research may include interventional studies evaluating the extent of team collaboration and communication support for PC. Limitations We used a standardized PCAS questionnaire in a cross-sectional study of the first batch of clinical clerkship students to reduce selection bias.
Open-ended questions were limited in the questionnaire, and the study was pilot tested. Further, the attitudes and barriers were tested using different questions to reduce information bias. However, the study is not without limitations. It was conducted in a single institution, and the results might not be generalizable to other pharmacy schools in Ethiopia. In addition, the data presented here are self-reported; some respondents may provide more extreme responses than others, owing to their motivations and beliefs, and the data might be subject to recall bias. Conclusion Ethiopian clinical pharmacy students have a good attitude toward PC. Unnecessary drug therapy, drug interactions, and medication non-adherence were some of the drug therapy issues that pharmacists encountered while providing PC. Efforts should be targeted toward reducing these drug therapy issues and aiding the integration of PC provision into pharmacy practice.
Dietary Black Seed Effects on Growth Performance, Proximate Composition, Antioxidant and Histo-Biochemical Parameters of a Culturable Fish, Rohu (Labeo rohita) Simple Summary Stress-related losses are of major concern in aquaculture practices. Black seed is a medicinal plant species widely used as a natural antioxidant and hepatic-nephric protector. Rohu is a commercially valuable culturable fish species. The present study was undertaken to assess the effects of dietary black seed on the growth performance and antioxidant status of rohu. Fingerlings were fed on diets containing 0.0%, 1.0% and 2.5% black seed for 28 days. The results showed that rohu fed on black seed supplemented diets had an increased growth rate. Moreover, black seed supplementation improved the muscle protein contents and antioxidant status, as indicated by decreased lipid peroxidation and increased antioxidant enzyme levels in the liver, kidney, gills and brain of rohu. The black seed fed rohu showed decreased levels of key hepatic-nephric functional marker enzymes. The histo-architecture of the liver and kidney remained unchanged following black seed supplementation. Black seed is cheap and locally available in Pakistan. On the basis of the present study results, 2.5% black seed supplementation is suggested in the rohu diet to increase its growth and avoid oxidative stress related losses. The results of the present study will be useful for nutritionists, aquaculturists and researchers in formulating aqua feeds. Abstract This feeding trial was conducted to investigate the effects of dietary black seed (Nigella sativa) supplementation on the growth performance, muscle proximate composition, antioxidant and histo-biochemical parameters of rohu (Labeo rohita). Fingerlings (8.503 ± 0.009 g) were fed on 0.0%, 1% and 2.5% black seed supplemented diets for 28 days. Fish sampling was done on the 7th, 14th, 21st and 28th day of the experiment.
The results of the present study indicated that black seed supplementation significantly increased the growth performance and muscle protein contents of rohu over un-supplemented ones. Lipid peroxidation levels significantly decreased in all the studied tissues (liver, gills, kidney and brain) of black seed fed rohu, whereas the activities of the antioxidant enzymes (catalase, glutathione-S-transferase, glutathione peroxidase and reduced glutathione) were increased in all the studied tissues of black seed supplemented rohu on each sampling day. The hepatic-nephric marker enzyme levels were decreased in black seed fed rohu. The present study showed that the tested black seed levels are safe for rohu. Black seed is cheaply available in local markets of Pakistan; therefore, based on the results of the present study, it is suggested that black seed has the potential to be used as a natural growth promoter and antioxidant in the diet of rohu. Introduction Aquaculture is the fastest-growing animal food producing sector, and its products are a valuable source of animal protein, omega-3 polyunsaturated fatty acids, vitamins and essential micronutrients. Globally, demand for aquaculture products is rapidly increasing, which has led to the intensification of aquaculture practices [1]. Intensification of the aquaculture industry has resulted in a stressful environment, which is responsible for immunosuppression, oxidative stress and an increased risk of infectious diseases in farmed fish [2]. Although several growth promoters, synthetic hormones and chemotherapeutics have been used for enhancing fish yield and for the treatment of diseases [3] (pp. 118-127), their continuous use has posed serious adverse effects on fish, the environment and consumer health [4].
Currently, consumers' demand for safe and quality farmed fish is increasing, and to meet these demands, researchers have intensified efforts to develop safe fish feed additives/supplements to substitute the traditionally used synthetic hormones and chemotherapeutic agents [5]. The supplementation of medicinal plants and their derivatives in aqua feed has attracted a lot of attention globally; therefore, it has become a subject of active scientific investigation [6]. There are more than 60 different medicinal plant species reported to be used in the aquaculture industry [7][8][9]. These plant species possess several bioactive constituents and nutrients that make them potent pharmacological and therapeutic agents [10]. Being biodegradable, cheaper, easily available and eco-friendly, medicinal plants are a promising substitute for the synthetic hormones and antibiotics that were traditionally used in the aquaculture industry [11]. Black seed (Nigella sativa) is a medicinal plant belonging to the family Ranunculaceae, and it has been known as far back as 1400 years ago [12]. Due to its numerous therapeutic properties, black seed is widely cultivated and used in different regions of the world [13,14]. The pharmacological properties of this plant are mainly attributed to its seeds, which have several bioactive constituents such as thymoquinone, dithymoquinone, thymol, nigellicine and nigellidine [15]. Rohu is cultured under a semi-intensive polyculture system and is highly prestigious among other culturable carp species; therefore, it is in high demand as a food fish in Pakistan [29]. N. sativa is a native medicinal plant species, cultivated in almost all the provinces of Pakistan, and accordingly, it is cheaply available in local markets [30][31][32]. Therefore, the present study was undertaken to investigate the effects of dietary N. sativa supplementation on the growth performance, proximate composition, antioxidant and histo-biochemical parameters of rohu fingerlings.
Fish Culture and Diets Preparation Healthy rohu fingerlings were procured from a government fish seed hatchery in Lahore and acclimatized under laboratory conditions for two weeks prior to the feeding trial. Experimental fish (8.503 ± 0.009 g) were randomly distributed into three groups and kept in well-aerated glass aquaria filled with 50 L of water (60 fingerlings/group). The experiment was done in triplicate and lasted for 28 days. The commercially available floating carp feed, prepared using soybean meal and plant protein products, grain and cereal products, yeast, DCP, vitamins and trace minerals, was purchased from Oryza Organics, Pvt. Ltd., Lahore, Pakistan, and used as the basal diet for the present study. Black seed was purchased from a local shop in Lahore and ground into powder form. Three diets containing 0.0%, 1% and 2.5% black seed were prepared by mixing black seed powder into the basal diet and fed to group 1, 2 and 3 fish, respectively. The concentrations of black seed used in this study were selected following Bektas et al. [33]. Fish were fed twice a day at the rate of 5% of their wet body weight, and the ration size was adjusted on a weekly basis after each sampling. Three-quarters of the aquaria water was siphoned daily and replaced with clean, well-aerated water. Water Quality Parameters During the experimental period, water quality parameters such as dissolved oxygen, temperature and pH were monitored on a daily basis using portable meters (YSI EcoSense DO200A, Yellow Springs Incorporated, Yellow Springs, OH, USA; ADWA, AD1020 pH/mv/ISE). The average water dissolved oxygen, temperature and pH were 5.6 mg L−1, 27 °C and 7.8, respectively. Fish Sampling Fish sampling was done on a weekly basis (7th, 14th, 21st and 28th day). On each sampling day, 15 fingerlings/group (5 from each replicate) were harvested using a hand net. The fish were anesthetized using clove oil and their lengths and weights were recorded prior to dissection.
The liver, kidney, gills and brain of the experimental fish were excised, cleaned of extraneous tissue in phosphate buffer solution and then immediately snap frozen in liquid nitrogen. In the laboratory, all collected tissues were transferred to an ultra-low freezer (−86 °C) until used for further analysis. Fish dorsal muscles were stored at −20 °C until used for proximate composition analysis. Preparation of Tissue Homogenates The tissue homogenates were prepared in 0.1 M sodium phosphate buffer (pH 7.4) using a tissue homogenizer (Scilogex, Cromwell Avenue, Rocky Hill, CT, USA). One portion of each tissue homogenate was stored at −20 °C and used for assessing lipid peroxidation; the remaining portion of the homogenates was centrifuged at 4 °C at 13,500 rpm for 30 min (Allegro 64A centrifuge) to obtain the post mitochondrial supernatant (PMS). The PMS was stored in clean, labeled Eppendorf tubes at −20 °C until used for further biochemical assays. Proximate Composition Analysis The basal feed and N. sativa seeds used in the present study were analyzed for their proximate composition [34]. Fish dorsal muscles were also analyzed for their proximate composition (% moisture, fat, ash and protein contents) [34]. Oxidative Stress and Antioxidant Defense Markers Assessment The oxidative stress and antioxidant defense enzyme activities were estimated in the liver, kidney, gills and brain of the fish. The lipid peroxidation level was estimated following the protocol of Faheem and Lone [35]. The reaction mixture (1 mL of 10% tissue homogenate, 1 mL of 10% trichloroacetic acid and 1 mL of 0.67% thiobarbituric acid) was incubated in a boiling water bath for 45 min and cooled to room temperature, followed by centrifugation (2500× g) at 4 °C for 10 min. The absorbance of the supernatant was recorded at 532 nm and values were expressed as nmol of thiobarbituric acid reactive substances (TBARS) formed per gram of tissue.
The biochemical analyses for estimating the tissue antioxidant enzyme activities, catalase (CAT), glutathione-S-transferase (GST) and glutathione peroxidase (GPx), were performed using the PMS [36]. CAT activity was measured using 1 mL of 10% PMS with equal volumes of 0.1 M sodium phosphate buffer and 0.09 M H2O2. The decrease in sample absorbance was recorded every 30 s at 240 nm, and CAT activity was expressed as nmol H2O2 consumed/min/mg of protein. For tissue GST estimation, the change in absorbance of the reaction mixture (0.1 M sodium phosphate buffer, 1 mM 1-chloro-2,4-dinitrobenzene (CDNB), 1 mM reduced glutathione and 10% PMS) was recorded at 340 nm and values were expressed as nmol of CDNB conjugates formed/min/mg of protein. The glutathione peroxidase (GPx) activity was estimated by recording the change in absorbance of the reaction mixture (0.1 M sodium phosphate buffer, 10 mM EDTA, sodium azide, 1 mM reduced glutathione, 2 mM NADPH, 0.09 M H2O2, 1 IU/mL glutathione reductase and 10% PMS) at 340 nm, with values expressed as nmol of NADPH oxidized/min/mg of protein. The reduced glutathione (GSH) levels were determined by recording the absorbance of the reaction mixture (1 mL of 0.1 M phosphate buffer, 1 mL of 10 mM DTNB and 1 mL of PMS) at 412 nm, with values expressed as nmol GSH/gram of tissue. The protein contents were estimated using Bradford reagent with bovine serum albumin as the standard [36]. Liver and Kidney Functioning Tests The hepatic-nephric enzyme levels, namely alkaline phosphatase (ALP), alanine and aspartate aminotransferase (ALT and AST), urea and creatinine, were determined using liver and kidney tissue homogenates, respectively. Randox colorimetric kits (UK) were used for all these biochemical analyses, and the optical density was read using a UV-visible spectrophotometer (Hitachi U-2000) following the manufacturer's instructions.
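As an illustration of how the catalase readings above translate into the reported units (nmol H2O2 consumed/min/mg of protein), the following sketch back-calculates activity from the decrease in A240. The molar extinction coefficient of H2O2 at 240 nm (43.6 M⁻¹ cm⁻¹) and the 1 cm light path are assumptions taken from common spectrophotometric practice, not values stated in the text.

```python
# Illustrative back-calculation of catalase activity from the A240 readings
# described above. The extinction coefficient and path length are assumed
# literature values, not parameters reported in this study.

EPSILON_H2O2 = 43.6e-3   # absorbance units per mM H2O2 per cm at 240 nm (assumed)
PATH_CM = 1.0            # cuvette light path (assumed)

def catalase_activity(delta_abs_per_min, protein_mg_per_ml, assay_vol_ml, sample_vol_ml):
    """Return CAT activity in nmol H2O2 consumed / min / mg protein.

    delta_abs_per_min : decrease in A240 per minute
    protein_mg_per_ml : protein concentration of the PMS (Bradford assay)
    assay_vol_ml      : total reaction volume in the cuvette
    sample_vol_ml     : volume of PMS added to the reaction
    """
    # mM of H2O2 consumed per minute in the cuvette (Beer-Lambert)
    mM_per_min = delta_abs_per_min / (EPSILON_H2O2 * PATH_CM)
    # nmol consumed per minute in the whole reaction volume:
    # mM * mL = umol * 1e-3 -> multiply by 1000 to get nmol
    nmol_per_min = mM_per_min * assay_vol_ml * 1000.0
    # normalize by mg of protein contributed by the sample
    return nmol_per_min / (protein_mg_per_ml * sample_vol_ml)
```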
Histological Preparations At the end of the experiment, a small portion of the liver and kidney was excised and preserved in 10% formaldehyde solution until used for histological preparations [34]. Data Analysis The obtained data are presented as mean ± S.E.M (standard error of mean). The data were checked for normality and homogeneity of variance by performing Kolmogorov-Smirnov and Levene tests, respectively. The data were then analyzed by ANOVA followed by Tukey's honest significant difference test to check for significant differences between means (p < 0.05). All statistical analyses were performed in GraphPad Prism-8.1.0 (325). The data obtained from the histological studies were not subjected to any statistical analysis but were instead visually examined to find any potential difference between the black seed supplemented and un-supplemented groups. Ethical Statement The study was carried out in accordance with the principles of the Basel Declaration and recommendations in the proceedings of the meeting of the Departmental Doctoral Program Committee, Zoology, University of the Punjab, Lahore, Pakistan. The protocol was approved by the Punjab University Advanced Studies and Research Board via letter no: D/7566/Acad. Growth Performance The effects of black seed supplemented diets on the growth performance of rohu fingerlings are presented in Table 1. The results indicated that dietary inclusion of different levels of black seed had positive effects on the growth rate of rohu throughout the study period. A significant increase in %WG and SGR was found for rohu fed black seed supplemented diets in comparison with un-supplemented ones on each sampling day. The results also indicated that group-3 rohu (fed the 2.5% black seed supplemented diet) had statistically higher %ADWG and PER when compared with the other groups throughout the study period. The CF and HSI of black seed fed rohu were similar to those of un-supplemented ones.
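The group-comparison workflow described in the Data Analysis section above (normality and variance checks, one-way ANOVA, Tukey's HSD at p < 0.05) can be sketched with scipy stand-ins for the GraphPad Prism analysis. The data below are fabricated placeholders, and `scipy.stats.tukey_hsd` (available in recent SciPy versions) stands in for Prism's post hoc test.

```python
# Sketch of the statistical workflow described above, using scipy as a
# stand-in for GraphPad Prism. All group values are fabricated examples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {                                    # e.g. %WG per fish, illustrative
    "0.0% black seed": rng.normal(50, 5, 15),
    "1.0% black seed": rng.normal(60, 5, 15),
    "2.5% black seed": rng.normal(70, 5, 15),
}

# Normality (Kolmogorov-Smirnov on z-scored data) and homogeneity of variance
for name, x in groups.items():
    print(name, "KS p =", round(stats.kstest(stats.zscore(x), "norm").pvalue, 3))
print("Levene p =", round(stats.levene(*groups.values()).pvalue, 3))

# One-way ANOVA followed by Tukey's honest significant difference test
F, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {F:.2f}, p = {p:.3g}")
res = stats.tukey_hsd(*groups.values())       # pairwise group comparisons
print(res)
```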
The fish survival rate was 100% for all the studied groups (Table 1). Values with an * in the same row are statistically different (* p < 0.05, ** p < 0.01, *** p < 0.001 and **** p < 0.0001), where: Percentage Weight Gain, %WG = 100 × (final body weight − initial body weight)/initial body weight; Specific Growth Rate, SGR = 100 × ln(final body weight/initial body weight)/days of the experiment; Average Daily Weight Gain, ADWG = (final body weight − initial body weight)/days of the experiment; Protein Efficiency Ratio, PER = weight gain/protein intake; Condition Factor, CF = 100 × body weight (g)/[body length (cm)]³; Hepatosomatic Index, HSI = 100 × liver weight/body weight; and Percentage Survival Rate, %SR = 100 × final number of fish/initial number of fish. Proximate Composition of Black Seed, Basal Diet and Dorsal Muscles of Rohu The proximate composition of N. sativa seeds showed 24.7% crude protein, 32.5% crude fat, 4.1% ash and 5.8% moisture contents. The basal diet (0.0% black seed) showed a composition of 12.81% moisture, 20.37% ash, 5.5% fat and 26.88% crude protein. The experimental diet (basal diet + 1.0% black seed) showed a composition of 12.75% moisture, 20.42% ash, 5.6% fat and 26.89% crude protein. The experimental diet (basal diet + 2.5% black seed) showed 12.71% moisture, 20.39% ash, 5.5% fat and 26.91% crude protein. The results of the proximate composition analysis revealed that dietary black seed supplementation did not statistically affect the moisture and fat contents in the muscle tissue of rohu on any sampling day; however, the ash contents were found to be significantly increased following black seed supplementation on the 21st and 28th days of sampling. The crude protein levels in the muscle tissue of rohu were found to be significantly increased following dietary black seed supplementation in comparison with the control at each sampling point (Table 2).
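The growth indices defined in the table footnote above transcribe directly into code. The example numbers below are illustrative, not taken from the study.

```python
# The growth indices defined above, transcribed directly into code.
import math

def growth_indices(w0, w1, days, protein_intake, length_cm, liver_g, n0, n1):
    """w0/w1: initial/final body weight (g); n0/n1: initial/final fish counts."""
    return {
        "%WG":  100 * (w1 - w0) / w0,                 # percentage weight gain
        "SGR":  100 * math.log(w1 / w0) / days,       # specific growth rate
        "ADWG": (w1 - w0) / days,                     # average daily weight gain
        "PER":  (w1 - w0) / protein_intake,           # protein efficiency ratio
        "CF":   100 * w1 / length_cm ** 3,            # condition factor
        "HSI":  100 * liver_g / w1,                   # hepatosomatic index
        "%SR":  100 * n1 / n0,                        # percentage survival rate
    }

# e.g. a fingerling growing from 8.5 g to 17.0 g over 28 days (made-up values):
idx = growth_indices(8.5, 17.0, 28, 3.4, 10.0, 0.3, 60, 60)
```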
Antioxidant Status The effects of dietary black seed supplementation on the oxidative stress and antioxidant enzyme activities in all the studied tissues of rohu are shown in Figures 1-5. The black seed supplemented rohu showed decreased lipid peroxidation levels in all the studied tissues (liver, kidney, gills and brain) when compared with un-supplemented ones (Figure 1A-D). The antioxidant enzymes (CAT, GST and GPx) were found to be increased in all the studied tissues of rohu fed black seed supplemented diets when compared with rohu fed the basal diet throughout the study period (Figures 2-4). Furthermore, GSH levels were elevated in all the studied tissues of rohu fed black seed supplemented diets in comparison with the control (Figure 5A-D). Liver and Kidney Histo-Biochemical Parameters The results indicated decreased hepatic ALP, AST and ALT levels in rohu fed black-seed-supplemented diets in comparison with rohu fed the basal diet. Moreover, dietary black seed supplementation decreased urea and creatinine levels (Table 3). Among the tested concentrations, 2.5% black seed supplementation was found to be the most effective. The histo-architecture of the liver (hepatocytes, sinusoids and pancreatic tissue) and kidney (glomerulus, Bowman's space, renal tubules and hematopoietic tissue) of rohu fed black seed supplemented diets was similar to that of the control group (Figure 6A-F). The results of the present study showed a higher protein efficiency ratio for black seed supplemented rohu on each sampling day. In agreement with the present results, Oreochromis niloticus fed black seed meal supplemented diets showed a higher protein efficiency ratio [47]. The results of the present study also demonstrated that the inclusion of black seed in rohu diets did not alter its hepatosomatic index. Likewise, the hepatosomatic index of Salmo caspius fed a 3% garlic supplemented diet remained unchanged [48].
In the present study, muscle protein contents were increased in rohu fed black seed supplemented diets. The body composition of Oncorhynchus mykiss fed diets supplemented with black seed oil (1% and 1.3%) demonstrated increased protein contents, thus improving its nutritional value [45]. These results are consistent with our findings. Dicentrarchus labrax fed a 1% Thymus vulgaris supplemented diet showed higher protein contents in its fillets [38]. Black seeds are a rich source of several essential amino acids (methionine, threonine, lysine, arginine, glutamic acid, leucine, tyrosine and proline) and proteins (16-19.9%) [49] (pp. 1314-1315). Thus, their inclusion in the diet improved the muscle protein contents in rohu. Several environmental stressors are responsible for the production of intracellular reactive oxygen species causing oxidative stress in fish, which is indicated by the loss of its antioxidant enzyme activities [36]. There are several types of synthetic antioxidants used as feed additives [50] (pp. 1652-1657); however, the accumulation of synthetic antioxidant and metabolite residues in flesh has increased consumers' concerns regarding the consumption of farmed fish [51]. Therefore, researchers are focusing on plant-based natural antioxidants to replace traditional synthetic antioxidants in fish feed [10]. Medicinal plants are a potential candidate for use as natural antioxidants in aqua feed. There are significant reports regarding the use of medicinal plants as natural antioxidants in animal nutrition [52] (pp. 89-100), although fewer in aquaculture practices [9]. Lipid peroxidation is an extensively used biomarker to assess oxidative-stress-related damage in animal tissues [35]. In our study, dietary black seed supplementation decreased lipid peroxidation levels in all the studied tissues of rohu.
Sparus aurata fed Trigonella foenum-graecum supplemented diets for three weeks showed decreased lipid peroxidation levels in their muscles [42]. Thymoquinone (30-80%) is a major bioactive compound present in black seed, which has the potential to inhibit iron-dependent lipid peroxidation (Fenton reaction) in a concentration-dependent manner [53]. Enzymatic and non-enzymatic antioxidants such as CAT, GST, GPx and GSH act to maintain the lowest level of reactive oxygen species (ROS) in cells and are therefore an essential component of an organism's defense response [54]. The activities of antioxidant enzymes differ among tissues and were found to be higher in those tissues that have higher oxidative potential [34]. The results of the present study revealed increased CAT, GST, GPx and GSH levels in all the studied tissues of rohu fed black seed supplemented diets. Dietary Ocimum gratissimum supplementation increased serum CAT and GST activities in Clarias gariepinus [40]. Dietary Curcuma longa supplementation for two months increased hepatic GPx and GSH levels in Ctenopharyngodon idella [55]. The increased activities of these antioxidant enzymes are attributed to the bioactive phytochemical components found in black seed (O-cymene; 2-isopropylidene-5-methylhex-4-enal; limonen-6-ol, pivalate; longifolene; phenol,4-methoxy-2,3,6-trimethyl; l-(+)-ascorbic acid 2,6-dihexadecanoate and 1-heptatriacotanol), which have free radical scavenging properties, making black seed a potent antioxidant agent [56]. The liver is an important organ with a high metabolic rate. Phosphatases and transaminases are vital for assessing liver function, as they regulate several metabolic processes involving the synthesis and deamination of amino acids during fluctuating energy demands under different nutritional, physiological and environmental situations [57]. The results of the present study demonstrated decreased hepatic ALP, AST and ALT levels in rohu fed black seed supplemented diets.
Dietary Allium sativum (2.5% and 5%) supplementation for six weeks decreased serum ALP and AST levels in Cyprinus carpio [58]. Dietary inclusion of Aloe vera (0.5%, 1%, 2% and 4%) decreased serum AST and ALT levels in GIFT tilapia [39]. These reports are in agreement with our results. The hepatoprotective action of black seed has been largely attributed to its thymoquinone content, which protects the liver from injury through different mechanisms such as increased activity of antioxidant enzymes and inhibition of iron-dependent lipid peroxidation and lipogenesis in the hepatocytes [59]. Nitrogenous products such as urea, uric acid and creatinine are useful indicators for evaluating the state of the kidney and gills of fish. Among these, creatinine is the most important, as it represents more than 50% of the nitrogenous waste excreted through the fish kidney [60]. In the present study, the nephroprotective effects of black seed were elucidated by decreased creatinine and urea levels in rohu fed black seed supplemented diets over un-supplemented ones. The inclusion of black seed in the diet of Lates calcarifer decreased its serum creatinine level [24]. Herbal (Thymus vulgaris and Rosmarinus officinalis) supplemented diets fed for 45 days decreased serum urea and creatinine levels in Dicentrarchus labrax juveniles [38]. The hepatic-nephric beneficial effects of black seed were further confirmed by the regular liver and kidney histoarchitecture of rohu fed supplemented diets, similar to that of un-supplemented ones. The bioactive ingredients (carvacrol, α-tocopherol, thymoquinone and nigellicine) abundantly found in black seed are reported to scavenge ROS, thereby maintaining normal histoarchitecture and metabolic enzyme levels [61]. Conclusions In conclusion, the results of the present study give new insight into the use of black seed as a natural growth promoter, antioxidant and hepatic-nephric protector in the aqua feed of rohu.
The results of the present study elucidated that all the studied levels of black seed (1% and 2.5%) are safe and have positive effects on the growth performance, antioxidant and histo-biochemical parameters of rohu. Black seed is suggested as a potential candidate for use as a feed additive in intensive fish culture practices to avoid stress-related losses and ultimately to enhance overall fish production. Funding: This research work was partially supported by Chiang Mai University. The APC was covered by CMU. Institutional Review Board Statement: The study was carried out in accordance with the principles of the Basel Declaration and recommendations in the proceedings of the meeting of the Departmental Doctoral Program Committee, Zoology, University of the Punjab, Lahore, Pakistan. The protocol was approved by the Punjab University Advanced Studies and Research Board via letter no: D/7566/Acad.
Cerebral blood volume and oxygen supply uniformly increase following various intrathoracic pressure strains Intrathoracic pressure (ITP) swings challenge many physiological systems. The responses of cerebral hemodynamics to different ITP swings are still less well known due to the complexity of the cerebral circulation and methodological limitations. Using frequency-domain near-infrared spectroscopy and echocardiography, we measured changes in cerebral, muscular and cardiac hemodynamics in five graded respiratory maneuvers (RM): breath holding (BH), moderate and strong Valsalva maneuvers (mVM/sVM) with 20 and 40 cmH2O increments in ITP, and moderate and strong Mueller maneuvers (mMM/sMM) with 20 and 40 cmH2O decrements in ITP, controlled by esophageal manometry. We found that cerebral blood volume (CBV) remains relatively constant during the strains, while it increases during the recoveries together with increased oxygen supply. By contrast, changes in muscular blood volume (MBV) are mainly controlled by systemic changes. The graded changes of ITP during the maneuvers predict the changes of MBV but not CBV. Changes in left ventricular stroke volume and heart rate correlate with MBV but not with CBV. These results suggest that the increased CBV after the ITP strains is brain specific, indicating cerebral vasodilatation. Within the strains, cerebral oxygen saturation only decreases in sVM, indicating that a strong increment, rather than a decrement, in ITP may be more challenging for the brain. Moderate and strong intrathoracic pressure (ITP) swings frequently occur voluntarily and involuntarily in daily life (e.g., defecation, diving, heavy weightlifting and coughing), and during positive airway pressure therapies in patients (e.g., patients with sleep apnea or respiratory failure in critical care). ITP swings challenge many physiological systems [1][2][3][4][5]. The hemodynamic consequences of different ITPs are of major interest for scientists and clinicians.
Currently, our knowledge mainly comes from systemic and peripheral hemodynamics (e.g., heart rate (HR), left ventricular stroke volume (LVSV), blood pressure, peripheral vascular resistance 1, 2, 6-8 ). How ITP changes influence cerebral hemodynamics (CH), which is regulated by cerebral autoregulation (CA), is still less well known. CA is a complex physiological process counteracting changes in cerebral perfusion pressure (CPP) to maintain cerebral blood flow (CBF) and oxygenation. CPP is the difference between mean arterial pressure (MAP) and intracranial pressure (ICP), and is usually dominated by MAP because ICP is much smaller than MAP. Thus, normally the function of CA can be checked by measuring changes in CBF under the manipulation of MAP 9 . However, this regular approach of examining CA may not work when ITP changes, because strong ITP swings are directly transmitted to ICP and considerably influence CPP 3,10 . Changes in cardiac output and central blood volume (BV) also influence the function of CA 11 . These factors make it demanding to investigate cerebral hemodynamics during ITP swings. Methodological limitations are another issue hindering the study of dynamic cerebral autoregulation and hemodynamics under different ITPs. Functional magnetic resonance imaging (fMRI) cannot be integrated with echocardiography, so its application to this topic is limited as it cannot explore the interactions between CH and systemic hemodynamics. Transcranial Doppler (TCD) can only assess blood flow velocity, and its reliability rests on the assumption of an unchanging vessel diameter 12 , which is no longer valid under hypercapnia/hypoxia. Moreover, keeping a precise location of the TCD probe during measurement is usually difficult 9,11,13 . Results Changes in CBV and cerebral NIRS signals.
The linear mixed model (LMM, random slope model, considering that the NIRS signals are normalized to the baseline) fitted slopes of CBV changes versus time in BH, mMM, sMM, mVM and sVM are 0.21, 0.24, 0.3, 0.19 and 0.5, and they are statistically significant (p-values are < 0.0001, 0.003, 0.0025, 0.043 and 0.0011, respectively), suggesting that CBV is generally monotonically increasing in each RM protocol. CBV mainly increases in the recovery phase and is not significantly different from its baseline within the strains (paired t-tests, Fig. 1). After false discovery rate (FDR) control, the increment of CBV during the recovery phase is still significant in BH and tends to reach statistical significance in mMM, sMM, mVM and sVM (p-values increase to 0.05, 0.1, 0.1 and 0.1, respectively). The results of the LMMs and t-tests suggest that the changing patterns of CBV should hold despite the slight increase of p-values after FDR, i.e., CBV slowly but steadily increases after ITP strains. The stable CBV within the strain may indicate the function of CA maintaining adequate blood and oxygen supply. To test this hypothesis we use paired t-tests to compare the changes of cerebral StO2 in the stable last 5 s of RM to its baseline. A significant decrement is only observed in sVM (Fig. 2). To explore the changes in other hemodynamics associated with increasing cerebral perfusion post strain, we compare their mean changes within the maneuver to the ones within the 15 s recovery period (paired t-tests, Fig. 3). Cerebral OI and/or HbO2 significantly increase in the recovery phase, except for sMM and mVM in which the increments tend to reach statistical significance. HHb shows no significant change. Changes in MBV.
The main effect of changes in MBV shows only non-significant trends (the fitted LMM slopes of MBV versus time in BH, mMM, sMM, mVM and sVM are −0.05, 0.15, 0.04, −0.18 and −0.49; p-values are 0.429, 0.149, 0.639, 0.129 and 0.075), due to the non-monotonic changes of MBV over the whole RM protocol (Fig. 1). MBV significantly increases within the strain and decreases to the baseline level in the recovery period in VM (paired t-tests, Fig. 1(e,f)). A significant decrement is found within the strain of sMM ( Fig. 1(c)) and in the recovery phase of BH ( Fig. 1(a)). Within the strain of mMM, MBV decreases in the first 5 s of RM and tends to reach statistical significance (p = 0.068). After FDR control these changes are still significant, except for the decrement of BV after BH (p-value increases to 0.1, which implies a trend towards significance). The relationship between changes in CBV, MBV and ITP. The changes of CBV and MBV are significantly different in all RMs (p-values of the categorical variable 'tissue' fitted by LMM in all RMs are smaller than 0.0001). Paired t-tests find significant differences between CBV and MBV within the recovery phase of BH, mVM and sVM, and within the strain of sMM, mVM and sVM (Fig. 1). These differences are still significant after FDR control. The mean changes in ITP are 1.16 ± 6.71, 22.58 ± 7.93, 43.65 ± 8.54, −28.83 ± 11.9, −52.49 ± 12.9 (mean ± standard deviation) cmH2O in BH, mVM, sVM, mMM and sMM, respectively. The relatively large standard deviations suggest large individual differences, but the changes in ITP can be clearly graded into 5 levels. Our LMM with random slope and random intercept still allows us to quantify the dynamic relation between ITP and CBV (MBV) in the 5 RMs, because it takes the individual differences into account by including a random slope for ITP.
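For readers without mixed-model software at hand, the random-slope analysis used here can be approximated by a two-stage summary-statistics procedure: fit an ordinary least-squares slope per subject, then test whether the mean slope differs from zero. This is only an illustrative approximation with synthetic data, not the authors' actual LMM (which was fitted in R); the subject count and 0.2 Hz sampling mirror the study, but the slope and noise values below are made up:

```python
import numpy as np

def subject_slopes(t, y_by_subject):
    """OLS slope of y on t for each subject (np.polyfit, degree 1)."""
    return np.array([np.polyfit(t, y, 1)[0] for y in y_by_subject])

def one_sample_t(slopes):
    """t statistic for H0: mean slope = 0 (one-sample t-test)."""
    n = len(slopes)
    return slopes.mean() / (slopes.std(ddof=1) / np.sqrt(n))

# Synthetic example: 11 subjects, 35 s of signal sampled every 5 s,
# each subject's CBV drifting with an individual slope around 0.2.
rng = np.random.default_rng(0)
t = np.arange(0.0, 35.0, 5.0)
true_slopes = 0.2 + 0.05 * rng.standard_normal(11)
cbv = [s * t + 0.1 * rng.standard_normal(t.size) for s in true_slopes]

slopes = subject_slopes(t, cbv)
print(slopes.mean(), one_sample_t(slopes))  # mean slope near 0.2, large t
```

A full random-slope LMM additionally pools information across subjects when estimating each subject's slope; this two-stage shortcut only conveys the idea of per-subject slopes varying around a population mean.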
Our model shows that the changes in ITP can predict the changes in MBV (fitted slope is 0.113, p < 0.001), but they cannot predict the changes in CBV (fitted slope is 0.0005, p = 0.965). These results suggest that the changes in MBV during the strains can be partially explained by changes in ITP, but CBV seems to be independent of ITP strains. LVSV and HR changes. One-way repeated ANOVA gives a significant main effect of changes in LVSV in mVM (F(3,30) = 6.858, p = 0.007) and sVM (F(3,30) = 20.112, p < 0.0001). After BH the mean LVSV falls in the first 5 s of recovery and then increases (Fig. 4). In mVM and sVM, LVSV significantly decreases within the last 5 s of the strains and increases post strains. The decrement of LVSV is more profound in sVM compared to mVM (LVSV: 59.15 ± 4.04 ml vs. 75.99 ± 5.6 ml, p = 0.0005). A significant increase in LVSV during the recovery phase is found in mVM, sVM and sMM. A significant main effect of changes in HR is only found in sVM (F(3,30) = 6.28, p = 0.012). HR only significantly increases within the last 5 s of the strain in sVM (Fig. 4). During recovery HR increases in BH but decreases in mVM and sVM. Table 1 below summarizes the directions of change of CBV, MBV, ITP, LVSV and HR in each RM protocol. There are correlations between MBV and LVSV (r = −0.16, p = 0.04), and between MBV and HR (r = 0.20, p = 0.01). Neither LVSV (r = 0.03, p = 0.71) nor HR (r = −0.08, p = 0.28) correlates with CBV. Discussion We characterize changes in cerebral and peripheral hemodynamics by measuring cerebral and muscular blood volume (CBV and MBV) together with cardiac stroke volume (i.e., LVSV) and heart rate (HR) within the strain and in the recovery phase of 5 graded respiratory maneuvers (RMs) in the same subjects. The changes in intrathoracic pressure (ITP) induced by the RMs are carefully monitored by esophageal manometry.
We find that despite the various changes of ITP within the RMs, CBV is stable compared to its baseline while MBV changes in the same direction as ITP. Cerebral StO2 only decreases within the strong Valsalva maneuver. In the recovery phase, CBV increases after the release of the strain in each RM protocol despite opposing peripheral parameters associated with the strain, whereas MBV still changes in the same direction as ITP except in breath holding and the strong Mueller maneuver. These results suggest that (1) the CBV measured with FDMD NIRS truly reflects BV from the brain and not from peripheral tissues; (2) intact cerebral autoregulation is able to cope with the various ITPs within RMs and is capable of maintaining adequate cerebral perfusion and oxygenation, with a special challenge posed by the strong Valsalva maneuver; and (3) increasing CBV is a common phenomenon following the release of different degrees of strain in each RM and is therefore controlled by local mechanisms that increase the cerebral oxygen supply independently of changes in systemic parameters and ITP. Are our NIRS results reliable? This is the first question we need to answer before further discussion. Superficial contamination is well known in NIRS 26,27 , raising the question whether any further conclusions can be reliably drawn from our results. Currently most of the available NIRS devices are based on the CW technique, in which hemodynamics is calculated with the MBLL. They are contaminated by superficial tissues because light travels a much longer distance than the source-detector distance d due to scattering, but the technique is unable to measure the light travelling path-length in the deep tissue (i.e., it is unable to distinguish the absorption of superficial and deep tissues) 15 . The MBLL uses a differential path length factor to account for the increased light travelling pathway due to scattering, whose values are taken from the literature (i.e., fixed values) but actually vary between subjects 15 .
This differential path length factor method also has the problem of inducing cross-talk between the measured HbO2 and HHb 15 . Spatially resolved spectroscopy (SRS) enables CW NIRS to measure the absolute value of StO2 in brain and muscle via a multi-distance design and by differentiating the light attenuation A with respect to d (i.e., ∂A/∂d) 34 . The ∂A/∂d becomes linear and the photons detected by each channel probe approximately the same volume, given that the SRS NIRS device has more than two different d and a small channel separation (i.e., the distances between detectors or light sources are small enough for the differential ∂A/∂d to be meaningful). Thus the superficial influence can be subtracted and the hemodynamics calculated with the SRS method is more sensitive to deep tissues 15,34 . In reference 34 the introduced device (NIRO-300 from Hamamatsu Photonics KK, Japan) has three different d with a separation of 1 mm, and its reliability in measuring cerebral StO2 has been validated in patients undergoing carotid endarterectomy 35 . Nevertheless the parameters calculated with the MBLL in SRS NIRS measurements are still vulnerable to superficial interference 26,35 . Cerebral StO2 measured by SRS NIRS devices with only two different source-detector distances and a larger channel separation (e.g., more than 1 cm) is also contaminated by extracranial tissues 27 , probably because of the nonlinearity of ∂A/∂d (i.e., the linearity cannot be assessed with only two source-detector distances) and because the light probes different superficial volumes. Therefore, we suggest that CW NIRS signals calculated with either the MBLL or SRS with two source-detector distances and a large channel separation should be interpreted cautiously; the ones measured with an SRS method like NIRO-300 (i.e., more than two source-detector distances, and a small channel separation) should be more sensitive to deeper tissues.
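The parenthetical point — that linearity of attenuation versus distance cannot be assessed with only two source-detector distances — is easy to demonstrate numerically: any two points fit a straight line perfectly, so the R² of a two-distance fit carries no information, whereas three or more distances expose curvature. A small numpy sketch with a deliberately non-linear, made-up attenuation curve:

```python
import numpy as np

def r_squared(x, y):
    """R^2 of a straight-line least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - np.sum(resid ** 2) / np.sum((y - np.mean(y)) ** 2)

# A deliberately curved (non-linear) attenuation profile, A(d) = d^2.
d4 = np.array([2.0, 2.5, 3.0, 3.5])  # four distances (cm), as in FDMD probes
a4 = d4 ** 2
d2 = d4[[0, -1]]                     # only two of the distances
a2 = a4[[0, -1]]

print(r_squared(d2, a2))  # ~1.0: two points always lie on a line
print(r_squared(d4, a4))  # noticeably below 1: the curvature is exposed
```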
We use FDMD NIRS, a different technique that is superior to CW NIRS because it can measure the absorption and reduced scattering of the measured tissue and effectively suppress superficial contamination (see Method Section 4.3). Its reliability in measuring cerebral hemodynamics has been well validated in previous studies [30][31][32] . We find that CBV and MBV change differently in our experiment, suggesting that they are measured from different sources. Comparing our results with those of previous studies, we find that in VM our CBV changes disagree with the ones calculated with the MBLL 23, 24, 26 but agree with the ones calculated with SRS in NIRO-300 recordings 26 . The changes in our cerebral StO2 during VM also fit the ones measured with NIRO-200 (an updated version of NIRO-300) 24 . We believe our results of cerebral hemodynamics measured by FDMD NIRS are reliable. Increasing CBV after the strains of all RMs conflicts with our second hypothesis, which may indicate a new characteristic of dynamic CA or be regulated by other mechanisms. This observational study does not allow us to clarify the underlying physiological mechanisms. But our results plausibly suggest brain-specific mechanisms, as we do not find this phenomenon in the muscular hemodynamics (left biceps brachii muscle). The increased CBV can be reasonably explained by a residual vasodilatation, as the regulation of CA is not instantaneous, or reflect the intrinsic characteristics of cerebral hyperemic responses 3,24,36 . (Table 1. Summary of the changes in ITP, CBV, MBV, LVSV and HR within the strains and recovery phases.) Several other cerebral mechanisms (e.g., neurovascular coupling 37 , cerebrovascular CO2 reactivity 38 , astrocytic vasodilating pathway 39 ) should also be considered as potential candidates calling for future studies.
Our results further suggest that the changes in CBV are not dominated by systemic factors including ITP swings, LVSV and HR, because CBV changes homogeneously while the systemic parameters change heterogeneously, in partly opposing directions, among the different RMs. In our results, neither LVSV nor HR correlates with the CBV change, and the relatively stable CBV within the strains cannot be predicted by the ITP swings. One may argue that the increased CBV could also indicate an inability of CA to suppress the increased MAP and maintain a relatively constant cerebral perfusion after the strains (i.e., that the increased CBV is induced by increased MAP). The change in MAP may influence the CBV, but we do not think this is the major factor inducing the increased CBV, although we do not measure MAP. First, recent studies showed that the ability of CA to damp a rise in MAP is greater (approximately twice) than its ability to withstand a reduction in MAP 40 . Second, previous studies reported that the MAP after VM and MM increases very quickly (within several seconds) and returns to baseline levels in about 10 seconds 3, 41, 42 . By contrast, we find that the CBV increases slowly for more than 10 seconds and does not return to the baseline level even 20 seconds after the release of the strain (Fig. 1). Third, Zhang et al. found that the increment in CBF after the VM strain still exists even after the overshoot of blood pressure is abolished by the ganglionic blocker trimethaphan in healthy subjects 3 , suggesting that the increasing MAP does not dominate the increasing cerebral perfusion post VM strains. The uniformly increased CBV post strains can bring more oxygen supply to the brain, as we find that cerebral HbO2 and OI increase. A hypothesis on the functional implication of the increased CBV and vasodilatation is that it may facilitate the washout of the brain metabolites accumulated during the strains.
A recent study 43 showed that inspiration is the major driving force for cerebrospinal fluid (CSF) flux, which can clear metabolites from the brain. Cerebral perfusion may influence metabolite clearance, as a recent animal study found that chronic cerebral hypoperfusion can compromise the clearance of perivascular amyloid beta, thus impairing microvascular function 44 . Therefore, whether the increased CBV is associated with an increased CSF flux driven by the restored inspiration is an interesting topic for researchers in the field of neurology. A strong increment in ITP is more challenging for the brain. First, cerebral StO2 significantly decreases within sVM but remains stable within sMM. Given the relatively constant CBV within sVM, the decline of cerebral StO2 is due to increased oxygen extraction. The decreased cerebral StO2 can partly explain why people sometimes experience syncope during strong VM. Second, MBV increases within sVM, which can be explained by the pooling of blood in the venous system of the muscle. However, this venous pooling effect is unlikely to exist in the brain as no change is found in CBV. The venous pooling may compromise CA as more blood is retained in muscular veins. Third, LVSV is much smaller in the later phase of sVM compared to sMM (59.15 ± 4.04 ml vs. 83.24 ± 8.7 ml). Although HR significantly increases within sVM, this may still be insufficient to compensate for the reduced systemic blood supply. ITP is a major factor controlling changes in MBV, as they change in the same direction except in the recovery phases of BH and sMM. Our LMM analysis also suggests that the change in ITP in the strain is a strong predictor of the change in MBV. We only find relatively weak correlations between LVSV and MBV (r = −0.16, p = 0.04), and between HR and MBV (r = 0.20, p = 0.01). ITP may control MBV via mechanically changing the stresses of the muscular vessel walls or by inducing sympathetic activity in the vessels, thus leading to vasodilatation/vasoconstriction.
The increased MBV within VM suggests that blood is retained in the muscle, so venous return should decrease. The reduced venous return can decrease preload and thus LVSV decreases. Our data show that the higher the ITP (mVM vs. sVM), the more profound the increment of MBV and the decrement of LVSV. These results suggest that the degree of impaired venous return may be related to the degree of increment in strain. The increased LVSV after VM could be explained by increasing venous return and vasoconstriction in peripheral tissues, because the increment in MBV within the RMs could generate a high vasoconstriction tone. Our study suggests that caution is warranted when using MM to imitate obstructive sleep apnea events, as the CH changes differently. Decreasing CBV after obstructive sleep apnea events was reported in previous NIRS studies 20,45 , conflicting with our results of increasing CBV after MM. Considering the similar changes in ITP, we hypothesize that the hypoxia and/or hypercapnia occurring during obstructive sleep apnea events may account for the differences. Cerebral StO2 does not change in the stable phase of MM compared to baseline, indicating no hypoxia within MM, while decreased cerebral StO2 has been repeatedly found during obstructive sleep apnea events 20, 21 . We do not measure CO2, but it is likely to accumulate during MM because exhaling is blocked. Another mechanism contributing to the differences is the arousal system. In obstructive sleep apnea the termination of an apnea event is usually associated with cortical arousal, which can independently increase sympathetic nerve activity and accentuate vasoconstriction 46 . There are several limitations in our study. First, we do not measure CO2, which limits our ability to discuss the role of CO2 reactivity in regulating CH.
A recent study in healthy people using pseudo-continuous arterial spin labeling MRI showed that under constant CO2, ITP strain alone can induce local perfusion changes in somatosensory/motor cortices but not in the prefrontal cortex 47 . Our results may suggest that perfusion in the forehead may still be stable during ITP strains even with mild hypercapnia. Based on the multi-methodology approach introduced in this study, future studies adding CO2 measurement may further improve our knowledge of the dynamic changes in cerebral autoregulation and cerebral circulation under various ITPs. Second, our echocardiography is only performed in the later phase of the strain. One of the major limitations of echocardiography in this research field is obtaining clear cardiac signals without the movement artifacts induced by the initiation of Valsalva and Mueller maneuvers and the hyperventilation after the release of the ITP strains. Future studies measuring LVSV and HR throughout the whole strain may provide more information for interpreting the changes of cerebral and muscular NIRS signals. Third, we do not measure MAP. The lack of a quantitative correlation between the observed hemodynamics and objective measurements of MAP limits our ability to specify the physiological origin of the observed hemodynamics. It is suggested that finger photo-plethysmography (Finapres), based on the volume clamp technology, is the only unsupervised method for continuous non-invasive blood pressure measurement 48 . But this technology suffers from accuracy and reproducibility problems [49][50][51][52][53] . The Finapres systolic blood pressure measurements fulfil neither the British Hypertension Society nor the Association for the Advancement of Medical Instrumentation standard criteria. Furthermore, very little is known about whether blood pressure measured with Finapres and with the intravascular method is equivalent for assessing dynamic CA.
A recent study reports a significant difference between these two methods in assessing dynamic CA in patients with acute brain injury, and the authors suggest that invasive arterial blood pressure monitoring should be preferred for dynamic CA assessment 54 . Since in our protocol the ITP swings challenge dynamic CA, we are uncertain about the accuracy of Finapres recordings in this scenario and we recommend cautious interpretation of future results assessed by Finapres. In addition, Finapres measurements were not feasible in our experimental design, as subjects needed to close their noses during the strains with their right hands, and the NIRS muscular sensor was fastened on their left biceps brachii muscle with a bandage, which hindered the calibration of Finapres using the upper arm cuff. Nevertheless, our study indicates that FDMD NIRS may provide an economic and easy way to assess CH under ITP swings. Our results probe the basic physiological connections between cerebral, muscular and cardiac hemodynamic changes under various ITP changes, which may provide insights into cerebral circulation and neurovascular coupling. Our results may also have clinical significance for patients with sleep apnea and for respiratory management in critical care. Methods This study was approved by the local ethical commission of Northwest Switzerland, and was in compliance with the declaration of Helsinki; all subjects gave their written informed consent to participate in the study. The methods were carried out in accordance with the relevant guidelines and regulations. Subjects. 11 healthy adults (m/f: 6/5; age: 37.3 ± 4.9 yrs; BMI: 23.1 kg/m² (20.8-25.6)) participated in this study. One male subject did the experiment twice within two weeks; the echocardiography failed in his first recording. None of the subjects had any sleep disorders, ischemic heart disease, chronic heart failure, cerebrovascular disorders, hypertension, diabetes, obesity, or any mental health problems. Protocol.
Every subject performed 5 RMs (each RM lasted 15 s and was followed by a 20 s recovery phase) in supine position: end-expiratory BH, post-inspiration mVM and sVM with ITP increased by 20 and 40 cmH2O from normal breathing (i.e., expiratory effort against a closed upper airway), and post-expiratory mMM and sMM with ITP decreased by 20 and 40 cmH2O from normal breathing (i.e., inspiratory effort against a closed upper airway). The subjects closed their mouths and needed to close their noses with the help of their right hands during the maneuvers. The interval between RMs was at least 3 minutes. An adult esophageal balloon catheter (CooperSurgical, Trumbull, CT, USA) was placed by nasal insertion into the esophagus to monitor changes in ITP. The output of the catheter was connected to XLTEK (Natus Neurology, Excel-Tech Ltd., WI, USA) via a dual airflow differential pressure transducer model PT2 Dual (BRAEBON Medical Corporation, Kanata, Ontario, Canada). We provided visual feedback during the entire strain, enabling the subjects to adjust and maintain the ITP. All subjects practiced performing the RMs before the official recordings. Frequency-domain multi-distance (FDMD) near-infrared spectroscopy (NIRS). FDMD NIRS (Imagent, ISS, Champaign IL, USA) measurements were conducted over the middle of the left forehead (i.e., in the middle of the area below the hairline and above the left brow) and the left biceps brachii muscle. The principle of FDMD NIRS has been well introduced 29,55 . The major absorbers of near-infrared light in human tissues are HbO2 and HHb. In the Imagent system, the light emitters (8 laser diodes, 4 at 690 nm wavelength and 4 at 830 nm wavelength, coupled into 4 sources) were modulated at 110 MHz, and the light can penetrate the measured tissues to a depth of approximately 3-4 cm. The back-scattered light from the tissues was picked up by a 3-mm-diameter optical fiber bundle connected to a photomultiplier tube detector.
To yield a multi-distance measurement, the four coupled light sources were aligned and placed at 2 cm, 2.5 cm, 3 cm and 3.5 cm from the detecting optical fiber bundle. The light intensity (I_DC), modulation amplitude (I_AC) and phase measured at the different distances vary linearly with distance (the linearity was monitored by the R² of the fitted linear regression model). Therefore, by submitting the measured I_DC, I_AC and phase to linear regression we can obtain the following equations 29, 32, 55: ln(r²·I_AC) = S_AC·r + C_AC (1), ln(r²·I_DC) = S_DC·r + C_DC (2), Phase = S_phase·r + C_phase (3), where r is the known source-detector distance, S_AC, S_DC and S_phase are the slopes, and C_AC, C_DC, C_phase are the intercepts. The superficial influences in each channel can be attributed to the intercepts and the residuals of the linear regression. The cerebral hemodynamic parameters were calculated from the slopes, i.e., by combining any two of these three slopes we can estimate the absorption and reduced scattering coefficients of the measured tissue and then further calculate HbO2 and HHb 29 . Total hemoglobin, reflecting BV changes, was then calculated as the sum of HbO2 and HHb, and the ratio of HbO2 over total hemoglobin provided an index for changes of StO2. Normally S_AC and S_phase are chosen, considering that I_AC and phase are less contaminated by environmental light. The sample rate of our FDMD NIRS recording was set to 10.4 Hz. Before the start of every measurement, the NIRS device was calibrated on optical phantom blocks. Echocardiography. Transthoracic echocardiography (Vivid7, GE Healthcare, USA) was performed during the last stable 5 s of RM and in the first 10 s of the recovery period to measure LVSV and HR. A 10-s baseline measurement was performed at rest. LVSV was assessed using pulsed-wave Doppler sampled in the left ventricular outflow tract (LVOT). It was calculated automatically with the default formula (LVOT_Diam)² × 0.785 × LVOT_VTI, where LVOT_Diam is the LVOT diameter and LVOT_VTI is the LVOT velocity-time integral.
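The default stroke-volume formula is the standard LVOT calculation: 0.785 ≈ π/4, so (LVOT_Diam)² × 0.785 is the circular cross-sectional area of the outflow tract, and multiplying by the velocity-time integral gives the ejected volume per beat. A worked example with hypothetical measurements (illustrative values, not from this study):

```python
import math

# Hypothetical LVOT measurements (illustrative values, not from this study).
lvot_diam = 2.0   # LVOT diameter, cm
lvot_vti = 20.0   # LVOT velocity-time integral, cm

# LVSV = (LVOT_Diam)^2 * 0.785 * LVOT_VTI; 0.785 ~ pi/4, the factor that
# turns a diameter squared into the area of a circle.
lvsv = lvot_diam ** 2 * 0.785 * lvot_vti
print(round(lvsv, 1), "ml")              # 62.8 ml
assert abs(0.785 - math.pi / 4) < 0.001  # sanity check on the constant
```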
Data preprocessing. The mean LVSV and HR during the 10 s baseline were calculated as baseline values for the multiple comparisons of LVSV and HR changes within maneuvers and in the recovery phase. The LVSV and HR during the last 5 s of the maneuver were averaged, and the first 10 s of recovery was segmented into two phases (i.e., first 5 s recovery and second 5 s recovery) within which the mean LVSV and HR were calculated. A similar averaging and segmentation was used for characterizing LVSV dynamics in a study of a 15-second MM by Bradley et al. 6 . Because we chose a fixed 5 s time window to segment the 15 s continuous LVSV and HR recordings, it frequently happened that one heart beat pulse was shared by two consecutive segments; in this case, those pulses were excluded from the averaging. The mean value of the ITP in the last 5 s of the strain period was calculated in each RM. Since the subjects needed to adjust and maintain the ITP level with the help of visual feedback, the ITP values of the first few seconds of the strain were usually unstable; we therefore treated the values of the last 5 s within the strain as a more stable and reliable recording indicating the level of the ITP strain. NIRS signals were averaged every 1 s (i.e., down-sampled to 1 Hz) to improve the signal-to-noise ratio. The reliability of an FDMD NIRS measurement depends on the linearity of the raw optical signals over the distances, i.e., the linear dependence R² of equations (1) and (3) should be very close to 1 18,29,56 . We checked the R² of the regression fits and the p-values of the linearly fitted absorption and reduced scattering coefficients. The raw optical data were discarded if the R² was smaller than 0.97 in either modulation amplitude or phase shift 18,56 .
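The linearity screening described above — discarding raw optical data whose multi-distance fit has R² below 0.97 in either modulation amplitude or phase — can be sketched as follows. The distances match the probe geometry described in the Methods; the optical values are synthetic, and in the frequency-domain multi-distance method the quantity expected to be linear in r is ln(r²·I_AC) (and, separately, the phase):

```python
import numpy as np

def fit_r2(r, y):
    """Linear fit of y against source-detector distance r; return slope, R^2."""
    slope, intercept = np.polyfit(r, y, 1)
    resid = y - (slope * r + intercept)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - np.mean(y)) ** 2)
    return slope, r2

def passes_qc(r, ln_r2_iac, phase, threshold=0.97):
    """Keep a sample only if BOTH amplitude and phase fits reach the R^2 threshold."""
    _, r2_ac = fit_r2(r, ln_r2_iac)
    _, r2_ph = fit_r2(r, phase)
    return bool(r2_ac >= threshold and r2_ph >= threshold)

# Synthetic signals at the four distances used in the study (cm).
r = np.array([2.0, 2.5, 3.0, 3.5])
good_ac = -1.2 * r + 5.0 + 0.001 * np.array([1, -1, 1, -1])  # nearly linear
good_ph = 0.4 * r + 0.1 + 0.001 * np.array([-1, 1, -1, 1])
bad_ac = good_ac.copy()
bad_ac[1] += 1.0  # e.g., a movement artifact corrupting one distance

print(passes_qc(r, good_ac, good_ph))  # True
print(passes_qc(r, bad_ac, good_ph))   # False
```

The slopes returned by `fit_r2` are the S_AC and S_phase from which the study's absorption and reduced scattering coefficients are then derived (per the references cited in the Methods); only the quality-control step is sketched here.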
The data were then further averaged every 5 s to down-sample to 0.2 Hz, in order to be better comparable with the cardiac and ITP recordings (i.e., the echocardiography data were down-sampled and averaged every 5 s, and the mean ITP of the last 5 s of the strain was treated as the stable ITP value achieved by the subject). In each RM, the mean value of the 5 s of BV recordings before the start of the strain was calculated as the baseline and subtracted from the following recordings. The relative BV changes were then normalized to their baselines in each RM in order to make a valid comparison on the group level. The same normalization was also applied to cerebral StO2, HbO2, HHb and OI. Statistical analysis. Data are expressed as mean ± standard error (SE) after preprocessing. A linear mixed model (LMM; more specifically, an LMM with random slope) was used to fit the changes of CBV and MBV with time (continuous variable) as the explanatory variable. The LMM was applied to each type of RM separately. An LMM was also used to investigate whether the changes of CBV and MBV differed significantly in the same RM, by adding a categorical variable 'tissue' (i.e., brain or muscle) and merging CBV and MBV into the model as the dependent variable. An LMM (with random slope and random intercept) was also used to check whether the changes in ITP manipulated by the 5 RMs could predict the corresponding changes in CBV and MBV. The explanatory variable was the mean ITP value of the last stable 5 s within the strains of the 5 RMs (i.e., the changes of ITP in the 5 RMs were taken together as the explanatory variable, representing the graded changes of ITP), and the dependent variables were the changes of CBV and MBV in the last 5 s of the strains, respectively. We did not consider the ITP changes during the recovery period, because the strong hyperventilation following the release of the strain can induce movement artifacts in the ITP recordings.
We compared the mean values of the BV changes of every 5 s to their baseline with paired t-tests. The same tests were also used to assess whether BV changed significantly in recovery compared to the last 5 s within the RMs, and to compare whether the changes in paired CBV and MBV at the same time point were different. One-way repeated-measures ANOVA was performed to test the main effect of changes in LVSV. Degrees of freedom were corrected using the Greenhouse-Geisser (or Huynh-Feldt) correction if the Greenhouse-Geisser estimate of sphericity was smaller (or larger) than 0.75, but the original degrees of freedom are reported for the sake of readability. Fisher's least significant difference (LSD) post hoc test was performed for pairwise comparisons of the LVSV data. The same one-way repeated-measures ANOVA and post hoc tests were also applied to the HR changes. We present the results of the multiple comparisons without any correction, whereas the false discovery rate (FDR) of the multiple comparisons was controlled by the Benjamini-Hochberg procedure 57 , and our interpretation of the results relies on the ones after FDR control (i.e., p-values adjusted using the FDR method). The Pearson product-moment correlation coefficient was computed to assess the relationship between CBV (MBV) and LVSV, and between CBV (MBV) and HR, respectively. The statistical significance level was p < 0.05. All signal preprocessing was carried out in MATLAB (The MathWorks, Inc., Natick, MA, USA). All statistical analyses were performed using R. Data availability. The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
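The Benjamini-Hochberg procedure cited above controls the FDR by ranking the m p-values and rejecting all hypotheses up to the largest rank i with p_(i) ≤ (i/m)·q; equivalently, adjusted p-values are the running minimum, taken from the largest rank downwards, of m·p_(i)/i. A minimal numpy sketch (the authors used R; this is an independent illustration):

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                          # ranks, smallest p first
    ranked = p[order] * m / np.arange(1, m + 1)    # m * p_(i) / i
    # Enforce monotonicity: running minimum from the largest rank down.
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.minimum(adjusted, 1.0)
    out = np.empty(m)
    out[order] = adjusted                          # restore input order
    return out

print(bh_adjust([0.01, 0.02, 0.03, 0.2]))
```

For four p-values 0.01, 0.02, 0.03 and 0.2, the first three all adjust to 0.04 (each is the running minimum of 0.04, 0.04, 0.04) and the last stays at 0.2, illustrating how the step-up rule can make a block of small p-values share one adjusted value.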
The MUK five protocol: a phase II randomised, controlled, parallel group, multi-centre trial of carfilzomib, cyclophosphamide and dexamethasone (CCD) vs. cyclophosphamide, bortezomib (Velcade) and dexamethasone (CVD) for first relapse and primary refractory multiple myeloma Background Multiple myeloma is a plasma cell tumour with an annual incidence in the UK of approximately 40–50 per million i.e. about 4500 new cases per annum. The triple combination cyclophosphamide, bortezomib (Velcade®) and dexamethasone (CVD) is an effective regimen at relapse and has emerged in recent years as the standard therapy at first relapse in the UK. Carfilzomib has good activity as a single agent in the relapsed setting, and it is expected that efficacy will be improved when used in combination with dexamethasone and cyclophosphamide. Methods MUK Five is a phase II open label, randomised, controlled, parallel group, multi-centre trial that will compare the activity of carfilzomib, cyclophosphamide and dexamethasone (CCD) with that of CVD, given over an equivalent treatment period (24 weeks), in participants with multiple myeloma at first relapse, or refractory to no more than 1 line of treatment. In addition, the study also aims to assess the utility of a maintenance schedule of carfilzomib in these participants. The primary objective of the trial is to assess whether CCD provides non-inferior activity in terms of ≥ VGPR rates at 24 weeks, and whether the addition of maintenance treatment with carfilzomib to CCD provides superior activity in terms of progression-free survival, as compared to CCD with no maintenance. Secondary objectives include comparing toxicity profiles, further summarizing and comparing the activity of the different treatment arms and analysis of the effect of each treatment arm on minimal residual disease status. 
Discussion The development of carfilzomib offers the opportunity to further explore the anti-tumour efficacy of proteasome inhibition and, based on the available evidence, it is important and timely to obtain data on the activity, toxicity and tolerability of this drug. In contrast to ongoing phase III trials, this phase II trial has a unique subset of participants diagnosed with multiple myeloma at first relapse or refractory to no more than 1 line of treatment and will also evaluate the utility of maintenance with carfilzomib for up to 18 months and investigate minimal residual disease status to provide information on depth of response and the prognostic impact thereof. Trial registration The trial is registered under ISRCTN17354232, December 2012. Keywords: First relapse multiple myeloma, Primary refractory multiple myeloma, Carfilzomib, Phase II Background Multiple myeloma (MM) is a plasma cell tumour with an annual incidence in the UK of approximately 40-50 per million i.e. about 4500 new cases per annum [1].
For younger fitter patients the current standard of care is induction therapy typically using a novel agent-based regimen consolidated with high-dose melphalan and peripheral blood stem cell rescue (termed autologous stem cell transplantation, ASCT). For older less fit patients, frontline regimens include a novel agent along with steroids and an alkylating agent, but without consolidation with ASCT. With these regimens, the majority of patients will enter a plateau phase lasting some 3-5 years, however patients will relapse and require further anti-myeloma therapy. Current standard treatment at first relapse in the UK is the use of bortezomib (Velcade®), commonly with dexamethasone [2]. Increasingly, a third agent is added, either an alkylating agent such as cyclophosphamide (CVD), an anthracycline, doxorubicin (PAD) or thalidomide. The triple combination CVD is an effective regimen at relapse, producing response rates of up to 70 % [3][4][5][6] and has emerged in recent years as the standard therapy at first relapse in the UK, with dose adjustments tailored to age and performance status. Up to 8 cycles are administered, although many patients have their treatment withdrawn before completing 8 cycles as a consequence of neurotoxicity, more frequently seen with the intravenous (IV) mode of delivery of bortezomib. Recently, a phase 3 study comparing intravenous (IV) with subcutaneous (SC) mode of delivery has reported equivalent efficacy with significantly reduced neurotoxicity [7]. The publication of these results has led to widespread changeover from the IV to the SC use of bortezomib. The development of carfilzomib, an irreversible epoxyketone inhibitor of the proteasome, offers the opportunity to further explore the anti-tumour efficacy of proteasome inhibition, particularly as some patients have disease that does not respond to bortezomib, or develop resistance after initial response. 
Phase I and II studies have shown carfilzomib monotherapy can be safely administered and produces encouraging response rates. Similarly, when given in combination with lenalidomide and low dose dexamethasone, carfilzomib is well tolerated, and there are no significant overlapping toxicities [8][9][10][11][12][13]. The Myeloma UK (MUK) study, MUK five, has been developed to further explore carfilzomib combination therapy in the relapsed or primary refractory setting. Experience with bortezomib confirms improved efficacy with good tolerability when used in combination with dexamethasone and cyclophosphamide (CVD). Carfilzomib has good activity as a single agent in the relapsed setting, and it is expected that efficacy will be improved when used in combination with dexamethasone and a third agent. Hence it is important and timely to obtain data on the activity, toxicity and tolerability of Carfilzomib in combination with cyclophosphamide and dexamethasone (CCD) in the first relapse setting. This triplet regimen has recently been reported to be a well tolerated and active regimen in the frontline setting in older patients not suitable for ASCT [14]. The study has been developed through the Myeloma UK (MUK) Early Phase Clinical Trials Network, an innovative collaboration which brings together clinical specialists and researchers, the pharmaceutical industry and NHS regulatory bodies to conduct a prioritised and strategic portfolio of myeloma clinical trials. Study aims This study will compare the activity of CCD with that of the current standard therapy at relapse, CVD, given over an equivalent treatment period (24 weeks), in participants with multiple myeloma at first relapse, or refractory to no more than 1 line of treatment. In addition, the study also aims to assess the utility of a maintenance schedule of carfilzomib in these participants. 
Primary objective To assess whether CCD provides non-inferior activity with regard to the short-term outcome measure of ≥ VGPR rates at 24 weeks, and whether the addition of maintenance treatment with carfilzomib to CCD provides superior activity in terms of the longer-term outcome measure of progression-free survival (PFS), as compared to CCD with no maintenance. Secondary objectives
- To compare the toxicity profile of CCD with that of CVD, overall and specifically with respect to peripheral neuropathy
- To explore the non-inferiority of CCD without maintenance as compared to CVD for the longer-term outcome measure of PFS
- To further summarise the activity of CCD as compared to CVD with regard to: complete response at 24 weeks; overall response at 24 weeks and within 12 months; maximum response within 12 months; maximum response overall; time to maximum response; duration of response; overall survival; time to next treatment
- To determine the proportion of participants with a negative minimal residual disease (MRD) status at the end of the initial treatment phase (24 weeks of treatment) in both the CVD and CCD arms and to correlate MRD-negativity at the end of the initial treatment phase (24 weeks) with PFS for all participants
- To assess the effect of maintenance carfilzomib on MRD status at 6 months and at 12 months post second randomisation
- To correlate treatment outcomes (complete response CR, overall response rate ORR) and PFS with genetic subgroups
- To summarise treatment compliance.
Study design The MUK five trial is designed as a phase II randomised, controlled, parallel group, multi-centre clinical trial for participants with symptomatic myeloma at first relapse, or refractory to not more than 1 line of therapy. The trial has received national research ethics approval from the NHS National Research Ethics Service London, Fulham (REC Number: 12/LO/1078).
A total of 300 participants will be randomised on a 2:1 basis to either 6 cycles of CCD or 8 cycles of CVD (both equivalent to 24 weeks of therapy), with follow-up to disease progression. Participants in the CCD arm who, at the end of the initial 6 cycles of CCD, do not have evidence of disease progression will be randomised to receive maintenance therapy with carfilzomib or to receive no further treatment. Participants in the CVD arm will not receive maintenance therapy (Fig. 1). In order to compare the regimens with regard to activity, the trial has been designed to incorporate two co-primary endpoints: response and progression-free survival. This allows the trial to assess the activity of the two regimens within a fixed period of 24 weeks of treatment, i.e. not incorporating the maintenance phase in the CCD arm, and to compare the activity of the whole CCD regimen with and without maintenance therapy, and the whole CCD regimen without maintenance with the CVD regimen, by evaluating the longer term endpoint of PFS. Sample size The sample size was calculated according to each of the co-primary endpoints. ≥ Very good partial response (VGPR) The primary endpoint analysis is based on a non-inferiority comparison of CCD vs. CVD. Clinical discussion taking into account recent data anticipates a ≥ VGPR rate of approximately 30-40 % with CVD at first relapse [4,15,16]. Considerable discussion was given to whether this comparison should be performed as a superiority or non-inferiority comparison, since CCD is expected to improve ≥ VGPR rates by up to 10 %. In this setting, however, a phase II superiority trial to detect this small improvement would likely be unfeasible. A non-inferiority design is therefore used, assuming a ≥ VGPR rate of 35 % with CVD, and a ≥ VGPR rate of 45 % with CCD, with a non-inferiority margin of −5 %.
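The design assumptions just stated, together with the PFS parameters given below, can be checked against textbook approximations: a normal-approximation sample-size formula for non-inferiority of two proportions, and Schoenfeld's event formula for the log-rank test. These are sketches, not the protocol's actual calculations; the proportions formula reproduces the protocol's 194 + 97 = 291 participants exactly, while the Schoenfeld approximation gives roughly 113 events against the protocol's quoted 109, the difference presumably reflecting the designers' own software and rounding.

```python
from math import ceil, log
from statistics import NormalDist

_z = NormalDist().inv_cdf    # standard normal quantile function

def ni_sample_size(p_test, p_ref, margin, alpha=0.05, power=0.80, ratio=2):
    """Per-arm sizes for non-inferiority of two proportions with ratio:1
    allocation (normal approximation, one-sided alpha)."""
    effect = (p_test - p_ref) - margin          # distance from the margin
    var_unit = p_test * (1 - p_test) / ratio + p_ref * (1 - p_ref)
    n_ref = ceil(var_unit * ((_z(1 - alpha) + _z(power)) / effect) ** 2)
    return ratio * n_ref, n_ref

def schoenfeld_events(hr, alpha_two_sided=0.2, power=0.80, alloc=0.5):
    """Total events for a two-sided log-rank test to detect hazard ratio hr
    (Schoenfeld approximation; alloc = fraction allocated to one arm)."""
    z_sum = _z(1 - alpha_two_sided / 2) + _z(power)
    return ceil(z_sum ** 2 / (alloc * (1 - alloc) * log(hr) ** 2))

# >=VGPR endpoint: 45 % with CCD vs. 35 % with CVD, margin -5 %, 2:1 allocation,
# 80 % power, one-sided 5 % significance level.
n_ccd, n_cvd = ni_sample_size(p_test=0.45, p_ref=0.35, margin=-0.05)
# PFS endpoint: HR 0.67, two-sided alpha 0.2, 80 % power, 1:1 maintenance split.
events = schoenfeld_events(hr=0.67)
```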
Randomising participants on a 2:1 basis in favour of CCD, a total of 291 participants (CCD = 194, CVD = 97) are required to exclude a difference of −5 % from the 90 % confidence interval with 80 % power (1-sided 5 % significance level). Progression-free survival The primary PFS comparison will be focused on those participants undergoing maintenance randomisation in the CCD arm, to compare maintenance with carfilzomib vs. no maintenance. It is assumed that median PFS with CVD with no maintenance will be approximately 14 months from the time of initial treatment [17]. It is hypothesised that a similar median PFS will be observed with CCD therapy with no maintenance. It is also anticipated that approximately 80 % of participants randomised to receive CCD therapy will be progression-free at the end of 24 weeks of initial treatment [4]. With 80 % power, a 2-sided alpha of 0.2, and assuming follow-up of at least 18 months for all participants from the time of maintenance randomisation, 70 participants per arm (109 events) are required to detect a hazard ratio of 0.67, corresponding to an increase in median PFS of 6 months with Carfilzomib maintenance therapy, from the time of maintenance randomisation. Taking the required sample size for the response rate endpoint it is therefore expected that approximately 160 participants will be eligible for maintenance randomisation, providing a sufficient sample size to assess the maintenance therapy comparison, allowing approximately 10 % dropout. Recruitment Process The trial is expected to take up to 36 months to complete recruitment. Participants will be recruited from NHS hospitals throughout the UK which are approved research sites within the Myeloma UK Early Phase Clinical Trial Network. Participants will be approached during standard clinic visits for management of their disease and will be provided with verbal and written details, in the form of a participant information sheet. 
Following information provision, participants will have as long as they need to consider participation (normally a minimum of 24 h) and will be given the opportunity to discuss the study with their family and other healthcare professionals before they are asked whether they would be willing to take part in the study. Participants who satisfy all the study inclusion criteria and none of the exclusion criteria listed in Table 1 will be invited to provide written informed consent. (Participants previously immunofixation negative who are now immunofixation positive need to demonstrate a greater than 5 g/l absolute increase in paraprotein to be eligible for inclusion.)
[Table 1 excerpt, truncated in the source: participants must have measurable disease as defined by one or more of the following criteria (assessed within 21 days prior to randomisation), e.g. serum paraprotein ≥5 g/L (for IgA participants whose disease can only be reliably measured by serum quantitative immunoglobulin ...)]
Randomisation Written informed consent for entry into the trial must be obtained and eligibility must be confirmed prior to randomisation. Randomisation will be administered by telephone by the University of Leeds Clinical Trials Research Unit (CTRU), using an automated 24 h telephone system. Participants will be randomised on a 2:1 basis to either the CCD or CVD trial arm. A computer generated minimisation program that incorporates a random element will be used to ensure treatment groups are well-balanced for the following characteristics: β2 microglobulin at trial entry (<3.5, 3.5-5.5, >5.5); prior bortezomib (Velcade®) treatment (y/n); prior ASCT (y/n); timing of first relapse or primary refractory disease (primary refractory, first relapse: < 12 months, first relapse: ≥ 12 months).
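The protocol specifies only that a computer-generated minimisation program with a random element is used; the algorithm itself is not given. A common choice is Pocock-Simon minimisation, sketched below for a 1:1 allocation (as in the maintenance randomisation; the trial's 2:1 initial randomisation would need a weighted variant). The factor names and the `p_best = 0.8` assignment probability are illustrative assumptions, not protocol values.

```python
import random

def minimise(new_patient, history, arms=("CCD", "CVD"),
             factors=("b2m", "prior_bortezomib", "prior_asct", "relapse_timing"),
             p_best=0.8, rng=random):
    """Pocock-Simon minimisation with a random element (1:1 sketch).

    For each candidate arm, count already-randomised patients in that arm
    sharing the new patient's level of each stratification factor; the arm
    with the smaller imbalance score is chosen with probability p_best.
    Factor names here are illustrative, not the protocol's exact coding."""
    scores = {
        arm: sum(1 for pt in history for f in factors
                 if pt["arm"] == arm and pt[f] == new_patient[f])
        for arm in arms
    }
    best = min(arms, key=lambda a: scores[a])
    if scores[arms[0]] == scores[arms[1]]:
        return rng.choice(arms)                 # perfect balance: coin flip
    if rng.random() < p_best:
        return best                             # favour the balancing arm
    return rng.choice([a for a in arms if a != best])

# Deterministic check: after 3 identical CCD patients, a matching newcomer
# should go to CVD when the random element is switched off (p_best=1.0).
history = [{"arm": "CCD", "b2m": "<3.5", "prior_bortezomib": "y",
            "prior_asct": "y", "relapse_timing": "first<12"}] * 3
new = {"b2m": "<3.5", "prior_bortezomib": "y", "prior_asct": "y",
       "relapse_timing": "first<12"}
arm = minimise(new, history, p_best=1.0)
```

With `p_best` below 1 the assignment remains unpredictable to investigators while still steering allocations towards balance on the stratification factors.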
Participants on the CCD arm only, who have no evidence of progressive disease at the response assessment at the end of CCD therapy, will be randomised again to receive either carfilzomib maintenance or no maintenance treatment, provided they fulfil the eligibility criteria listed in Table 2. Participants will be randomised on a 1:1 basis to either carfilzomib maintenance or no maintenance. A computer generated minimisation program that incorporates a random element will be used to ensure treatment groups are well-balanced for the following characteristics: response category at the end of treatment with CCD (PR, MR or SD vs. VGPR or CR); prior ASCT (y/n). Intervention Participants randomised to CVD will receive the following regimen: bortezomib subcutaneous 1. Participants in the CCD arm, who are randomised to receive carfilzomib maintenance, will receive carfilzomib IV 36 mg/m² (days 1, 2, 15 and 16) for 6 months, after which the frequency of dosing will be reduced to carfilzomib IV 36 mg/m² (days 1 and 2) for a further 12 months. Each drug may be reduced due to toxicity. If no resolution of toxicity is seen after adjustment to the lowest dose level, treatment will be discontinued. Trial assessments Baseline investigations are to be performed within 14 days prior to randomisation, after written informed consent has been obtained. The required baseline assessments include a physical examination, medical history, ECOG performance status and ISS stage, as well as haematology, biochemistry, radiological and bone marrow sample assessments. Assessments during treatment, including a physical examination, safety and laboratory tests, will be performed at multiple time-points during cycles 1-6 for the CCD regimen and cycles 1-8 for the CVD regimen.
Response assessments will be carried out at the end of the treatment phase (6 months in each arm), according to International Myeloma Working Group (IMWG) criteria, and will include bone marrow examination for MRD using multi-parameter flow cytometry.
Table 1 MUK five study inclusion and exclusion criteria (Continued)
- Previous or concurrent malignancy within the past 3 years with the exception of a) adequately treated basal cell carcinoma, squamous cell skin cancer, or thyroid cancer; b) carcinoma in situ of the cervix or breast; c) prostate cancer of Gleason Grade 6 or less with stable prostate-specific antigen levels; or d) cancer considered cured by surgical resection or unlikely to impact survival during the duration of the study, such as localised transitional cell carcinoma of the bladder or benign tumours of the adrenal or pancreas
- Significant neuropathy (Grades 3-4, or Grade 2 with pain) within 14 days prior to randomisation
- Participants with haemorrhagic cystitis
- Any history or known hypersensitivity to any of the study medications or excipients
- Participants undergoing active treatment for infiltrative lung disease
- Contraindication to any of the required concomitant drugs or supportive treatments, including hypersensitivity to all anticoagulation and antiplatelet options, antiviral drugs, or intolerance to hydration due to pre-existing pulmonary or cardiac impairment
- Contraindication to a programme of oral or IV hydration
- Participants with pleural effusions requiring thoracentesis or ascites requiring paracentesis within 14 days prior to randomisation
- Any other clinically significant medical disease or condition that, in the Investigator's opinion, may interfere with protocol adherence or a participant's ability to give informed consent
During maintenance in the CCD arm, assessments will be performed on days 1, 2, 15 and 16 of each cycle (or days 1 and 2 if administering the reduced schedule) and will include a physical examination and laboratory tests. Assessments at 6 and 12 months will also include bone marrow sampling for MRD assessment. Assessments at the end of initial and maintenance treatment include a physical examination, laboratory tests and a bone marrow aspirate (post-initial treatment). Follow-up will be performed 4-weekly until disease progression, and will involve a physical examination and laboratory tests. Outcome measures The study has two co-primary endpoints: the proportion of participants achieving at least VGPR 24 weeks post initial randomisation, and progression-free survival. A key secondary endpoint is the proportion of participants experiencing ≥ grade 3 neuropathy or ≥ grade 2 neuropathy with pain during the initial treatment period (8 cycles of CVD or 6 cycles of CCD). All secondary endpoints are listed in Table 3. Statistical methods and analysis Participants will be grouped as follows for analysis:
(a) All participants randomised to the CVD arm in the initial randomisation
(b) All participants randomised to the CCD arm in the initial randomisation
(c) Participants in the CCD arm who were, at the maintenance randomisation, randomised to receive no maintenance therapy
(d) Participants in the CCD arm who were, at the maintenance randomisation, randomised to receive maintenance therapy
(e) Participants in the CCD arm who do not receive maintenance therapy. This includes participants who are, at the maintenance randomisation, randomised to receive no maintenance therapy, plus those who do not undergo maintenance randomisation either because of not being eligible or not being willing.
N.B. in order to avoid bias due to the imbalanced randomisation schedule, analyses may include all participants in the CCD arm, adjusting for the effect of maintenance therapy as appropriate to provide a comparator group representing CCD arm participants who do not receive maintenance therapy.
Primary endpoint analysis The primary endpoint analysis will include all participants who have received at least one full cycle of their allocated chemotherapy. Participants who received less than one full cycle, defined as missing more than 2 doses of either carfilzomib or bortezomib in cycle 1 and then stopping trial treatment, will be monitored and analysed separately as applicable. A non-inferiority analysis will compare groups (a) and (b) in terms of the proportion of participants achieving ≥ VGPR 24 weeks post initial randomisation, with a null hypothesis of inferiority of the CCD arm as compared to the CVD arm, and an alternative hypothesis of non-inferiority of CCD, using a pre-specified non-inferiority margin of −5 %. Further analysis using logistic regression will adjust for the minimisation factors (β2 microglobulin at trial entry, prior bortezomib treatment, prior autologous stem cell transplant, and timing of first relapse or primary refractory disease). Treatment and covariate estimates with corresponding standard errors, odds ratios, 90 and 95 % confidence intervals and p-values will be presented.
Table 2 Maintenance randomisation inclusion and exclusion criteria
Inclusion criteria for maintenance randomisation:
- Completed at least 24 weeks of CCD treatment in line with the protocol (must have received a minimum of 5 cycles and achieved a maximum response to initial therapy)
- No evidence of disease progression
- Adequate hepatic function, with ALT or AST <3 times the upper limit of normal and serum direct bilirubin ≤42.5 μmol/L (2.5 mg/100 ml) within 14 days prior to randomisation
- Absolute neutrophil count (ANC) ≥1.0 × 10⁹/L within 14 days prior to randomisation. Growth factor support received in the previous cycle of treatment is permissible.
- Haemoglobin ≥8 g/dL (80 g/L) within 14 days prior to randomisation (participants may be receiving red blood cell [RBC] transfusions in accordance with institutional guidelines)
- Platelet count ≥75 × 10⁹/L (≥50 × 10⁹/L if myeloma involvement in the bone marrow is >50 %) within 14 days prior to randomisation. Platelet support received in the previous cycle of treatment is permissible.
- Creatinine clearance (CrCl) ≥20 mL/min or plasma creatinine ≤120 μmol/L within 7 days prior to randomisation, either measured or calculated using a standard formula (e.g., Cockcroft and Gault)
- Female participants of child-bearing potential must have a negative pregnancy test prior to treatment and agree to use dual methods of contraception for the duration of the study and for 30 days following completion of study. Male participants must also agree to use a barrier method of contraception for the duration of the study and for 30 days following completion of study if sexually active with a female of child-bearing potential.
Exclusion criteria for maintenance randomisation:
- Uncontrolled hypertension or uncontrolled diabetes
- Any other clinically significant medical disease or condition that, in the Investigator's opinion, may interfere with protocol adherence
- Pregnant or lactating females
- Significant neuropathy (Grades 3-4, or Grade 2 with pain)
The main comparison for PFS will be a superiority analysis of groups (c) and (d). PFS curves will be calculated using the Kaplan-Meier method and the median progression-free survival estimates and progression-free survival estimates at 12 and 24 months with corresponding 80 % and 95 % confidence intervals will be presented by treatment group. A log-rank test, stratifying for the minimisation factors, will be used to compare PFS between treatment groups. Cox's proportional hazards model (if appropriate), adjusting for the minimisation factors, will also be used to compare PFS between the treatment groups.
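The Kaplan-Meier estimates referred to above follow the product-limit construction, which is compact enough to sketch (toy data; the trial's own analyses would use a dedicated survival package such as R's `survival`).

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates (product-limit form).

    times  : follow-up time for each participant
    events : 1 = progression/death observed, 0 = censored
    Returns (time, S(t)) pairs at each distinct event time."""
    data = sorted(zip(times, events))
    n = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < n:
        t = data[i][0]
        deaths = sum(1 for tt, ev in data[i:] if tt == t and ev == 1)
        ties = sum(1 for tt, _ in data[i:] if tt == t)
        if deaths:
            surv *= 1 - deaths / (n - i)      # n - i participants still at risk
            curve.append((t, surv))
        i += ties
    return curve

# Toy data: events at t = 1, 2, 4; censoring at t = 3, 5.
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

The censored observation at t = 3 leaves the curve unchanged but shrinks the risk set, so the drop at t = 4 is larger (from 0.6 to 0.3) than it would be without censoring.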
Treatment and covariate estimates, standard errors, hazard ratios, 80 % and 95 % confidence intervals, as well as p-values will be presented. An exploratory non-inferiority comparison will also be performed to compare groups (a) and (e). If the main comparison between (c) and (d) shows no evidence of improved PFS for either treatment group, the superiority analyses described above will also be performed for (a) vs (b). Analysis of response and PFS will be performed using the data recorded on the case report form, which will be centrally reviewed for quality assurance. Secondary endpoint analysis The main comparisons and analysis methods for each secondary endpoint are given in Table 3. Short-term endpoints (i.e. those evaluable at 24 weeks post randomisation / the end of initial treatment) generally compare groups (a) and (b). Longer-term endpoints generally compare groups (c) and (d), and (a) and (e). The majority of endpoints will be analysed according to the analysis population, including all participants who receive at least one cycle of their allocated chemotherapy, as with the primary endpoint analysis. The safety population will include all participants who receive at least one dose of any trial treatment and will be used for the toxicity and safety endpoints. Treatment compliance summaries will be produced for all participants according to the trial pathways. Exploratory analyses to investigate the association between clinical outcomes (ORR and PFS) and genetic subgroups will also be performed. Flow cytometry for MRD detection will be performed as reported by Rawstron et al. [18], with a 0.01 % limit of detection, assessing 500 000 cells incubated with 6-colour antibody combinations including CD138/CD38/CD45/CD19 with CD56/CD27 in all cases and CD81/CD117 in some cases, as required. All analyses are predefined in a statistical analysis plan prior to any analysis being undertaken.
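Returning to the primary endpoint: operationally, the non-inferiority conclusion comes down to whether the lower bound of the confidence interval for the ≥ VGPR rate difference clears the −5 % margin. Below is a minimal unadjusted Wald-interval sketch; the trial's actual analysis additionally adjusts for the minimisation factors via logistic regression, and the counts used here are hypothetical, chosen near the assumed 45 % vs. 35 % rates.

```python
from math import sqrt
from statistics import NormalDist

def non_inferior(x_test, n_test, x_ref, n_ref, margin=-0.05, conf=0.90):
    """Declare non-inferiority if the lower bound of the two-sided `conf`
    Wald interval for (p_test - p_ref) exceeds `margin`.

    A 90 % two-sided interval corresponds to the protocol's one-sided 5 %
    significance level."""
    p1, p2 = x_test / n_test, x_ref / n_ref
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    se = sqrt(p1 * (1 - p1) / n_test + p2 * (1 - p2) / n_ref)
    lower = (p1 - p2) - z * se
    return lower > margin, lower

# Hypothetical trial result: 87/194 responders on CCD, 34/97 on CVD.
ok, lower = non_inferior(x_test=87, n_test=194, x_ref=34, n_ref=97)
```

With these illustrative counts the observed difference is about +9.8 %, the lower 90 % bound sits just below zero, and non-inferiority against the −5 % margin is met even though superiority would not be.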
Frequency of analyses A Data Monitoring and Ethics Committee (DMEC) will be set up to independently review data on safety, activity, protocol adherence and recruitment. This committee, in light of these data, and any advice and evidence they wish to request, will, if necessary, report to the Trial Steering Committee (TSC) if there are any concerns regarding the activity or safety of the trial treatments. An interim analysis for inferiority will be performed once half of the planned number of patients has been recruited. This interim analysis will assess for inferiority of CCD, as compared to CVD, in terms of the response co-primary endpoint. If, at the time of this interim analysis, CCD is found to be significantly inferior as compared to CVD, the DMEC will report to the TSC with a recommendation of early trial closure. The analysis will be detailed in a DMEC Interim Statistical Analysis Plan. Final analysis is planned to take place in two stages. Stage 1 of the analysis is planned to take place after all participants have completed the 24 weeks of initial treatment and will include the short-term endpoints relating to the initial treatment period or 24 weeks post initial randomisation time-point. Stage 2 of the final analysis is planned to take place when all participants have completed their full follow-up. This analysis will consider all long-term endpoints not included in stage 1 of the analyses. Discussion Multiple myeloma is the 17th most common cancer in the UK, accounting for around 1 % of all new cancer cases. Myeloma incidence rates have increased overall in Great Britain since the mid-1970s [1]. The majority of patients will enter a plateau phase lasting some 3-5 years, with therapies based on thalidomide or alkylating agents. Patients will relapse and require further anti-myeloma therapy.
The use of bortezomib and lenalidomide has revolutionised the treatment of relapsed disease, with both drugs shown to improve response rates, progression-free and overall survival when compared to dexamethasone in randomised clinical trials in relapsed participants [2]. The triple combination of bortezomib, dexamethasone and an alkylating agent, such as cyclophosphamide (CVD), is an effective regimen at relapse. The development of carfilzomib offers the opportunity to further explore the anti-tumour efficacy of proteasome inhibition. In phase I and phase II clinical studies, carfilzomib demonstrated robust and durable efficacy and an acceptable safety and tolerability profile in participants with relapsed and/or refractory multiple myeloma [19]. Based on the available evidence, it is important and timely to obtain data on the activity, toxicity and tolerability of this drug. In contrast to ongoing phase III trials such as ASPIRE, FOCUS, ENDEAVOR and CLARION [19], this phase II trial has a unique subset of participants diagnosed with multiple myeloma at first relapse or refractory to no more than 1 line of treatment. The MUK five trial will also evaluate the utility of maintenance with carfilzomib for up to 18 months and investigate MRD status to provide information on depth of response and the prognostic impact thereof. The findings of this trial will inform healthcare professionals, patients, and their caregivers about the activity and safety profile of CCD in this setting.
Association of Air Pollution with Increased Incidence of Ventricular Tachyarrhythmias Recorded by Implanted Cardioverter Defibrillators Epidemiologic studies have demonstrated a consistent link between sudden cardiac deaths and particulate air pollution. We used implanted cardioverter defibrillator (ICD) records of ventricular tachyarrhythmias to assess the role of air pollution as a trigger of these potentially life-threatening events. The study cohort consisted of 203 cardiac patients with ICD devices in the Boston metropolitan area who were followed for an average of 3.1 years between 1995 and 2002. Fine particle mass and gaseous air pollution plus temperature and relative humidity were measured on almost all days, and black carbon, sulfate, and particle number on a subset of days. Date, time, and intracardiac electrograms of ICD-detected arrhythmias were downloaded at the patients’ regular follow-up visits (about every 3 months). Ventricular tachyarrhythmias were identified by electrophysiologist review. Risk of ventricular arrhythmias associated with air pollution was estimated with logistic regression, adjusting for season, temperature, relative humidity, day of the week, patient, and a recent prior arrhythmia. We found increased risks of ventricular arrhythmias associated with 2-day mean exposure for all air pollutants considered, although these associations were not statistically significant. We found statistically significant associations between air pollution and ventricular arrhythmias for episodes within 3 days of a previous arrhythmia. The associations of ventricular tachyarrhythmias with fine particle mass, carbon monoxide, nitrogen dioxide, and black carbon suggest a link with motor vehicle pollutants. The associations with sulfate suggest a link with stationary fossil fuel combustion sources. 
VOLUME 113 | NUMBER 6 | June 2005 • Environmental Health Perspectives | Research Article
A large number of epidemiologic studies have found an association between short-term episodes of increased particulate air pollution and cardiovascular morbidity and mortality (Brook et al. 2004). Respirable particulate matter has been specifically implicated in the triggering of myocardial infarction (D'Ippoliti et al. 2003; Peters et al. 2001), arrhythmias (Peters et al. 2000), decompensation of heart failure patients (Morris and Naumova 1998; Schwartz and Morris 1995; Wellenius et al., in press), and the exacerbation of myocardial ischemia (Pekkanen et al. 2002; Wellenius et al. 2003). Particulate-related changes in autonomic nervous system activity, as assessed by heart rate variability, have been observed in both experimental animal studies (Godleski et al. 2000) and human panel studies (Creason et al. 2001; Gold et al. 2000; Liao et al. 1999, 2004; Pope et al. 1999), suggesting sympathetic activation or vagal suppression after particulate air pollution exposure. Such changes in autonomic tone may increase the risk of ventricular arrhythmias in vulnerable patients (Huikuri et al. 2001). Ventricular tachyarrhythmias, primarily ventricular tachycardia and ventricular fibrillation, are common precursors to sudden cardiac death (Bayes de Luna et al. 1989; Myerburg et al. 1992). Implanted cardioverter defibrillators (ICDs) passively monitor for ventricular tachyarrhythmias that, if not terminated, could be life threatening. On detecting such an arrhythmia, the ICD can apply cardiac pacing or cardioverter shock to restore normal rhythms. The ICD also records the date and time of arrhythmias plus intracardiac electrograms immediately before and during these events. In a pilot study of 100 Boston area ICD patients with follow-up for up to 3 years, we found increased risk of an ICD therapeutic discharge on days after elevated air pollution concentrations (Peters et al. 2000).
In this pilot study, we did not collect data on patient characteristics or medication. However, we did find stronger air pollution associations among patients with frequent ICD discharges. This study was designed to confirm the pilot study observations. In a larger sample of ICD patients in Boston with longer follow-up, we identified ventricular tachyarrhythmias by review of ICD-recorded electrograms. We assessed the association between community air pollution and ventricular tachyarrhythmias using time-series methods. We also evaluated modification of the air pollution association by patient medical conditions, antiarrhythmic medications, and recent arrhythmias. Follow-up excluded the period immediately following implantation, periods when the patient was a hospital inpatient, and periods between clinical visits when the patient was not followed up at the New England Medical Center. Subjects who died or who were lost to follow-up were censored at their last clinical follow-up. The intracardiac electrograms for each ICD-detected arrhythmia were reviewed by an electrophysiologist (M.S.L.) blinded to air pollution levels. Ventricular tachyarrhythmias were identified based on atrial-ventricular dyssynchrony, onset interval, stability, morphology of the tachycardia, and response to therapy. We excluded arrhythmias originating outside the ventricle (e.g., atrial tachycardia, atrial fibrillation, atrial flutter, sinus tachycardia) and noise or oversensing events. An episode day was defined as one or more ventricular arrhythmic events on a given calendar day. Data collection and preliminary analyses have been described previously (Dockery et al., in press). Air pollution. Ambient concentrations of gaseous air pollutants were measured by the Massachusetts Department of Environmental Protection between 1995 and 2002 at six sites for ozone, nitrogen dioxide, and/or sulfur dioxide and four sites for carbon monoxide in the Boston metropolitan area.
We calculated the average air pollution concentration across the reporting monitoring stations for each hour, accounting for differences in the annual mean and standard deviation of each monitor (Schwartz 2000). Sulfate (SO4) was measured by ion chromatography (model 120; Dionex, Sunnyvale, CA) starting on 25 September 1999, and particle number (PN) by condensation particle counter (TSI Inc., Shoreview, MN) starting on 13 October 1999. We did not consider PM10 (particulate matter with a diameter < 10 µm), which was measured on a 1-in-6-day schedule. The hourly surface observations from the National Weather Service at Logan Airport in East Boston were extracted from climatic records (EarthInfo, Inc., Boulder, CO). Daily minimum temperature and mean relative humidity were calculated from the hourly observations. Statistical analyses. Following the analytic methods used in the pilot study (Peters et al. 2000), we assessed the association of arrhythmias with air pollution using time-series methods. We merged the patient-specific record of days on study and ICD-detected ventricular arrhythmias with the daily mean air pollution and weather measurements. The association of arrhythmic episode days with air pollution was analyzed by logistic regression using generalized estimating equations (Diggle 1988; Zeger et al. 1988) with random effects for patients, a linear trend, sine and cosine terms with periods of one, one-half, one-third, and one-quarter year, quadratic functions of minimum temperature and mean humidity, indicators for day of the week, and an indicator for a previous arrhythmia within 3 days. We considered mean air pollution concentrations on the same day and lags of 1, 2, and 3 days. The lag structure of the data was estimated by evaluating each lag day (0 to 3) separately and jointly in an unconstrained distributed lag model (Pope and Schwartz 1996).
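The covariate structure described above (linear trend, seasonal harmonics, quadratic weather terms, and an unconstrained distributed lag) can be sketched as follows. This is a simplified illustration on synthetic data using ordinary logistic regression fit by iteratively reweighted least squares; the published analysis used GEE with patient random effects, and all variable names and the planted effect size here are hypothetical.

```python
import numpy as np

def seasonal_terms(day_index):
    """Sine/cosine pairs with periods of one, one-half, one-third,
    and one-quarter year, as in the seasonal adjustment described above."""
    t = 2.0 * np.pi * day_index / 365.25
    return np.column_stack([f(k * t) for k in (1, 2, 3, 4)
                            for f in (np.sin, np.cos)])

def distributed_lags(x, max_lag=3):
    """Unconstrained distributed-lag columns: same-day (lag 0) through lag 3."""
    n = len(x)
    return np.column_stack([np.r_[np.full(k, np.nan), x[:n - k]]
                            for k in range(max_lag + 1)])

def fit_logistic_irls(X, y, iters=25):
    """Plain logistic regression by iteratively reweighted least squares."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        w = p * (1.0 - p) + 1e-9
        beta = beta + np.linalg.solve((X1 * w[:, None]).T @ X1, X1.T @ (y - p))
    return beta

# Synthetic daily series with a planted same-day pollutant effect
rng = np.random.default_rng(1)
days = np.arange(2000)
pm = rng.standard_normal(len(days))      # standardized pollutant series
temp = rng.standard_normal(len(days))    # standardized minimum temperature
logit = -3.0 + 0.8 * pm                  # true lag-0 log-odds effect = 0.8
y = (rng.random(len(days)) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = np.column_stack([days / len(days),       # linear trend
                     seasonal_terms(days),   # harmonic season terms
                     temp, temp ** 2,        # quadratic temperature
                     distributed_lags(pm)])  # lag 0..3 pollutant terms
ok = ~np.isnan(X).any(axis=1)                # drop the first max_lag days
beta = fit_logistic_irls(X[ok], y[ok])
lag0 = beta[12]                              # lag-0 column: 1 + trend + 8 harmonics + 2 weather
```

With enough days, `lag0` recovers the planted coefficient; day-of-week indicators and the prior-arrhythmia flag would enter as additional columns in the same way.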
We have found consistently elevated (although not statistically significant) risk estimates associated with air pollution concentrations on the day of (lag 0) and the day before (lag 1) the arrhythmia (Dockery et al., in press). Therefore, in this article we report only the effects of 2-day running mean air pollution concentrations. We explored potential modification of the air pollution associations in multivariate logistic regression including interactions between air pollution and indicators of patient characteristics. Patients were stratified by reported ejection fraction before implantation (≤ 35% vs. > 35%), prior myocardial infarction, and the diagnosis of coronary artery disease before implantation (not sufficient numbers of patients for specific analyses of other cardiac diagnoses). We assessed modification of the air pollution associations by usual prescribed medications (reported at more than half of clinical follow-ups) grouped as beta-blockers, digoxin, and other antiarrhythmics (amiodarone, sotalol, mexilitine, and quinidine). The strongest predictor of a ventricular arrhythmia was an arrhythmia in the previous 3 days. Therefore, in addition to controlling for prior arrhythmias, we assessed the modification of the air pollution association by a prior ventricular arrhythmia. We present odds ratios (ORs) and 95% confidence intervals (CIs) based on an interquartile range (25th percentile-75th percentile) increase in each air pollution concentration. p-Values are reported for the effects of air pollution and for the interactions of air pollution with posited effect modifiers. We characterize p-values < 0.05 as statistically significant, and p-values < 0.10 as marginally significant. For air pollutants and subgroups of events with statistically significant associations, we examined the risk of arrhythmias versus quintiles of air pollution concentration. Patient population. 
A total of 307 patients had Guidant ICDs implanted at the New England Medical Center between June 1995 and the end of 1999. There were 203 patients followed up with residential ZIP codes within 40 km (25 miles) of the ambient air pollution monitoring site at the Harvard School of Public Health. These ICD patients had a total of 635 person-years (pyr) of follow-up, or an average of 3.1 years per subject. Patients were predominantly men (75%) with an average age at implantation of 64 years (range, 19-90 years). The rate of ventricular episode days per person-year was higher among men (1.22/pyr) compared with women (0.62/pyr), and increased with age at implantation. Eighty-three percent of the patients were reported to be white, 3% African American, 5% Hispanic, 3% Asian, and 7% of undetermined or unknown race/ethnicity. Among the patients reported to have had a myocardial infarction before ICD implantation, the rate of ventricular arrhythmias (1.73/pyr) was almost three times the rate among the patients without a prior myocardial infarction (0.61/pyr). The patients with low ejection fraction (≤ 35%) before implantation had a rate of ventricular episodes (1.48/pyr) approximately three times that of patients with ejection fraction > 35% (0.45/pyr). The most common preimplantation diagnosis was coronary artery disease (70%), followed by cardiomyopathy (36%). Nine patients (4%) were classified as having primary electrical disease, and four of these had ventricular arrhythmic events. Four patients (2%) had long QT syndrome (a congenital disorder characterized by prolongation of the QT interval on the electrocardiogram), but only one of these had an event during follow-up. Patients with coronary artery disease had the highest rates of detected ventricular arrhythmias (1.30/pyr) compared with those with other diagnoses (0.50/pyr).
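The person-year rates quoted above are simple ratios of episode-day counts to follow-up time; a minimal illustration (the 3.1-year average follow-up is reproduced from the cohort totals reported above, while the rate inputs are hypothetical):

```python
def rate_per_person_year(episode_days, person_years):
    """Crude event rate: ventricular episode days per person-year of follow-up."""
    return episode_days / person_years

# Cohort totals reported above: 635 person-years across 203 patients
average_followup_years = 635 / 203   # ~3.1 years per subject

# Hypothetical subgroup: 9 episode days over 6 person-years -> 1.5/pyr
example_rate = rate_per_person_year(9, 6)
```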
Eighty-nine percent of these patients were prescribed antiarrhythmic medications. The rates of ventricular arrhythmic episode days were higher among those prescribed digoxin (1.68/pyr) or other antiarrhythmics (1.45/pyr) than among those prescribed beta-blockers (0.92/pyr) or no regular antiarrhythmic medication (0.88/pyr). Approximately one-quarter (164) of the 670 ventricular arrhythmias followed a previous ventricular arrhythmia within 3 days. We found that having a prior arrhythmia (within 3 days) was a very strong predictor of a subsequent arrhythmia (OR = 7.2; 95% CI, 5.9-8.9). The gaseous pollutants were measured on essentially all follow-up days (Table 1). Daily CO and NO2, both indicators of motor vehicle emissions, were highly correlated with each other (r = 0.61), positively correlated (r > 0.4) with BC, PM2.5, and SO2, but negatively correlated with O3. Air pollution association. We found positive associations between ventricular arrhythmic episode days and mean air pollution on the same and previous days, but none of these associations approached statistical significance (Table 2). We did not find consistent increased susceptibility to the effects of air pollution on risk of ventricular arrhythmias based on patient characteristics. We found marginally significant (p < 0.10) interactions of the associations with CO with ejection fraction (stronger with low ejection fraction), preimplantation diagnosis of coronary artery disease (weaker with coronary artery disease), and prior myocardial infarction (weaker with prior myocardial infarction), and of the associations with NO2 with prior myocardial infarction (stronger with prior myocardial infarction). No other interactions approached statistical significance. We saw no evidence that any of the prescribed drugs modified the associations of ventricular arrhythmias with air pollution.
The interaction of a prior ventricular arrhythmia with air pollution was statistically significant for PM2.5, BC, NO2, SO2, and CO, and marginally significant for SO4 (Table 3). For ventricular arrhythmias within 3 days of a prior event (Table 3), we found statistically significant positive associations with PM2.5, BC, NO2, CO, and SO2, a marginally significant association with SO4, but no associations with O3 or PN. For ventricular arrhythmias more than 3 days after a previous ventricular arrhythmia, we found no associations with any air pollutants (Table 3). We assessed the risk of ventricular arrhythmias stratified by a prior ventricular tachyarrhythmia versus quintiles of air pollution (Figure 1). We found generally increasing risk with increasing quintiles of PM2.5, BC, and CO, and weaker suggestions of an exposure response with NO2, SO2, and O3. Discussion In this study of 203 New England Medical Center ICD patients living in the Boston metropolitan area with up to 7 years of follow-up, we found that the risk of any ICD-detected ventricular tachyarrhythmia was positively but not significantly associated with increased exposure to air pollution on the days before the arrhythmia (Table 2). We found statistically significant associations of air pollution with increased risk of ventricular arrhythmias among patients with an arrhythmia within the previous 3 days. These findings suggest that air pollution may provoke ventricular tachyarrhythmias only in the presence of acutely predisposing conditions that increase ventricular electrical instability. We did not find consistent indications that the air pollution associations with ventricular arrhythmias were modified by indicators of chronically impaired cardiac function, including a prior myocardial infarction, a diagnosis of coronary artery disease, or an ejection fraction ≤ 35%, or by prescribed antiarrhythmic medications.
These results are broadly consistent with those of previously published studies of air pollution associations with tachyarrhythmias leading to ICD therapeutic discharge. In this study, we found significantly increased risk of ventricular arrhythmias with PM2.5, BC, CO, NO2, and SO2 among patients with a recent prior ventricular arrhythmia. In the pilot study (Peters et al. 2000), ICD patients in Boston with frequent (> 10) discharges during follow-up had an exposure-related increase in ICD discharge associated with PM2.5, BC, CO, and NO2. A recent study assessed the association of air pollution in Vancouver, British Columbia, Canada, with ICD discharges among 50 patients with an average of 2.2 years of follow-up (Vedal et al. 2004). In crude analyses, the rate of ICD discharge increased with quartiles of NO2 and CO concentration on the same day. However, there were no statistically significant positive associations of ICD discharge with NO2 or CO after adjusting for temporal patterns and numerous weather parameters. The lack of significant associations may be caused by overcontrol of some variables, as these investigators suggest. Both of these previously reported studies (Peters et al. 2000; Vedal et al. 2004) focused on ICD therapeutic discharge without characterization or validation of the underlying arrhythmia. Of the almost 2,000 arrhythmias identified and recorded by the ICD devices in this study, 8% were classified as oversensing, 4% were sinus tachycardias, 18% were supraventricular arrhythmias, and 70% were ventricular arrhythmias. Thus, 30% of the ICD-detected arrhythmias were not the potentially life-threatening ventricular tachyarrhythmias defined as the primary outcome for this study. An important question in these analyses is the appropriate exposure averaging time and the lag between exposure and cardiac arrhythmia.
In the pilot study, we found associations with air pollutants 2 days before the arrhythmias and with the 5-day mean air pollution (Peters et al. 2000). In this study, ventricular arrhythmias were positively associated with ambient air pollution on the same and the previous calendar days. Temporality would require that air pollution exposure precede the arrhythmia. This temporal ordering clearly holds for associations with air pollution on the previous day, but mean air pollution on the same calendar day would include hours after as well as before the detected arrhythmia. Using the pollutant concentrations from the specific 24 hr preceding the arrhythmia would likely provide a better estimate of each subject's exposure and allow investigation of exposures in the hours before the arrhythmia. For these patients living in eastern Massachusetts, air pollution exposure was estimated based on a single or a small number of monitors in the Boston metropolitan area. This would lead to misclassification of air pollution exposure, but this misclassification would be independent of the risk for ventricular arrhythmias. Such nondifferential misclassification of exposure produces attenuated estimates of association (and larger CIs) in epidemiologic studies assuming linear associations. If these observations are true, then studies with improved estimation of subject-specific air pollution exposures would be expected to find stronger, more statistically significant associations. We found associations with CO, NO2, BC, and PM2.5. These four pollutants had high day-to-day correlations with each other and were strongly correlated with SO2. It would not be possible to differentiate the independent effects of these pollutants. Nevertheless, the associations with these specific pollutants are consistent with an effect of air pollution from motor vehicle sources.
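The attenuation from nondifferential exposure misclassification mentioned above can be illustrated with a small simulation of classical measurement error. The assumptions here (linear model, independent Gaussian error with variance equal to the true exposure variance) are illustrative only, not the study's design:

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_beta = 100_000, 1.0
x = rng.standard_normal(n)                  # subject's true exposure
w = x + rng.standard_normal(n)              # central-monitor proxy: x plus independent error
y = true_beta * x + rng.standard_normal(n)  # linear health response

# Regressing y on the noisy proxy shrinks the slope toward the null
naive_slope = np.cov(w, y)[0, 1] / np.var(w)
# Classical attenuation factor var(x) / (var(x) + var(error)); ~0.5 here
expected = np.var(x) / (np.var(x) + 1.0)
```

The recovered `naive_slope` sits near half the true effect, matching the point in the text that better subject-specific exposure estimates should yield stronger associations.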
Animal studies in Boston have suggested that changes in indicators of cardiac function are specifically associated with motor vehicle pollution (Clarke et al. 2000). Analysis of daily mortality in Boston and five other cities suggested that motor vehicle pollution was more strongly related to cardiovascular mortality than to respiratory mortality (Laden et al. 2000). Cardiovascular emergency department visits in Atlanta, Georgia, were significantly associated with these same markers of motor vehicle air pollution: NO2, CO, PM2.5, BC, and fine particle organic carbon (Metzger et al. 2004). For Atlanta emergency department visits for dysrhythmias, positive associations were found for these same motor vehicle pollutants, although these associations were not statistically significant because of the smaller number of events. We cannot exclude the possible role of sulfur oxides, which are generally considered to be indicators of air pollution from power plants and other stationary fossil fuel combustion sources. In this analysis, we found associations of ventricular tachyarrhythmias in subjects with a recent event with SO2 (p = 0.013) and with SO4 (p = 0.06). The positive, marginally significant associations with SO4 are notable because SO4 data were available on only a limited number of days (37%) compared with SO2 and the other gaseous pollutants. Particulate SO4 concentrations in Boston largely reflect secondary particles formed during long-range transport. Gaseous SO2 concentrations reflect local sulfur emissions and were most highly correlated with motor vehicle pollutants. A major advantage of the ICD data is the passive monitoring of cardiac tachyarrhythmias. Nevertheless, ICD-detected ventricular arrhythmias were rare events in this follow-up, and the small number of subjects with multiple ICD-detected arrhythmias is a limitation.
These patients clearly represent a highly selected cohort of special interest, because their previous history of cardiovascular disease might make them particularly sensitive to the effects of air pollution episodes. The observed associations of ventricular tachyarrhythmias with particulate air pollution in these subjects are large compared with previous studies. In a mortality time-series analysis in Boston and five other cities, each increase of 10 µg/m3 in the 2-day mean PM2.5 was associated with a 2% increase in the risk of cardiovascular mortality. For Boston ICD patients (Table 2), the observed associations imply an 11% (95% CI, -9 to 35%) increased risk of potentially fatal ventricular arrhythmias when scaled to the same 10 µg/m3 increase in 2-day mean PM2.5 concentration. Thus, the ICD patients had a risk of potentially life-threatening ventricular tachyarrhythmias associated with fine particles that was more than five times the risk of cardiovascular death in the general population. Among those at the highest risk, those with a recent prior ventricular arrhythmia, the increased risk of a new ventricular tachyarrhythmia was 97% (95% CI, 46-165%) for each 10-µg/m3 increase in PM2.5. Conclusions We found that ventricular tachyarrhythmias among patients with ICDs increased with air pollution on the same and previous days, but these associations did not reach statistical significance. However, among patients with a recent tachyarrhythmia, the increased risk of a follow-up ventricular tachyarrhythmia associated with air pollution was large and statistically significant. These observations suggest that air pollution may act in combination with cardiac electrical instability to increase the risk for ventricular tachyarrhythmias.
Among such acutely vulnerable ICD patients, there was an exposure response with PM2.5, BC, NO2, CO, and SO2, which we interpret as indicators of mobile source pollution, and also evidence of an association with SO4, which we interpret as an indicator of power plant and other stationary fossil fuel combustion sources. ICDs have proven to be highly effective in reducing the risk of death in patients at high risk of cardiac arrhythmias. The passive monitoring of arrhythmias by these devices has provided a rich resource for understanding the role of air pollution episodes as potential triggers of these events.
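The per-IQR odds ratios reported in the paper and the per-10-µg/m³ comparisons discussed above both follow from the linearity of the log odds in concentration. A minimal sketch, with hypothetical inputs (the function names and numeric values are illustrative, not taken from the paper's tables):

```python
import math

def or_per_increment(beta, se, increment):
    """Odds ratio and 95% CI for a given concentration increment,
    from a per-unit log-odds coefficient and its standard error."""
    return tuple(math.exp(v * increment)
                 for v in (beta, beta - 1.96 * se, beta + 1.96 * se))

def rescale_or(or_value, from_increment, to_increment=10.0):
    """Convert an OR per one increment (e.g., an IQR) to an OR per another
    (e.g., 10 µg/m3), since log odds scale linearly with concentration."""
    return math.exp(math.log(or_value) * to_increment / from_increment)

# Hypothetical: beta = 0.02 per µg/m3, SE = 0.01, IQR = 8 µg/m3
or_iqr, ci_lo, ci_hi = or_per_increment(0.02, 0.01, 8.0)
# An OR of 1.05 per 5 units is 1.05**2 = 1.1025 per 10 units
or_10 = rescale_or(1.05, 5.0, 10.0)
```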
Role of B Cell-Activating Factor in Fibrosis Progression in a Murine Model of Non-Alcoholic Steatohepatitis Non-alcoholic fatty liver disease (NAFLD) is the most prevalent chronic liver disease worldwide. Therapeutic strategies targeting its multidirectional pathways are required; in particular, fibrosis is closely associated with its prognosis. We previously found that B cell-activating factor (BAFF) is associated with the severity of NAFLD. Here, we determined the direct in vivo role of BAFF in the development of liver fibrosis. Histological and biochemical analyses were performed using wild-type and BAFF-deficient mice. We established a murine model of non-alcoholic steatohepatitis (NASH) using carbon tetrachloride injection accompanied by high-fat/high-cholesterol diet feeding. Additionally, in vitro analysis using the mouse macrophage-like cell line RAW264.7 and primary hepatic stellate cells was performed. Hepatic steatosis and inflammation, and most importantly, the progression of liver fibrosis, were ameliorated in BAFF-deficient mice compared to wild-type mice in our model. Additionally, BAFF deficiency reduced the number of CD11c+ M1-type macrophages in the liver. Moreover, BAFF stimulated RAW264.7 cells to secrete nitric oxide and tumor necrosis factor α, which drove the activation of hepatic stellate cells. This indicates that BAFF plays a crucial role in NASH development and may be a promising therapeutic target for NASH. Introduction Non-alcoholic fatty liver disease (NAFLD) is a systemic disease associated with lifestyle-related conditions such as obesity, diabetes, and metabolic syndrome. The number of patients with NAFLD is rapidly increasing with the rising prevalence of obesity, and it has become a global health problem [1,2]. NAFLD is not just a manifestation of a metabolic syndrome but has emerged as a driver of systemic disease [3].
Previous studies have suggested that the fibrosis stage is an independent and significant predictor of overall mortality and liver-related events [4][5][6]. Therefore, it is critical to identify the drivers that mediate the progression of non-alcoholic fatty liver (NAFL) to non-alcoholic steatohepatitis (NASH). As chronic hepatic inflammation has been shown to contribute to disease progression, one of the potential therapeutic approaches for NAFLD is the suppression of inflammation and fibrosis by the regulation of immune abnormalities. We have been conducting research on the immunological aspects of NAFLD development [7][8][9]. B cell-activating factor (BAFF; CD257) is a factor that promotes the expansion and differentiation of the B cell population [10,11]. BAFF belongs to the tumor necrosis factor (TNF) ligand family and is secreted from activated T cells, B cells, and myeloid-lineage cells such as macrophages and dendritic cells. Previously, we reported that serum BAFF levels were increased, and that BAFF was preferentially expressed, in the visceral adipose tissue of high-fat diet (HFD)-induced obese mice [12][13][14]. In addition, hepatic steatosis was attenuated in BAFF-deficient mice fed an HFD compared to control mice [15]. Furthermore, serum BAFF levels in patients with NASH were higher than those in patients with NAFL [16]. Collectively, these data indicate that BAFF may be closely related to hepatic steatosis, inflammation, and fibrosis in NASH. However, the direct role of BAFF in hepatic fibrosis has not yet been elucidated. In this study, we established a murine model of NASH that led to the reproducible progression of steatohepatitis with significant fibrosis using carbon tetrachloride (CCl4) injection accompanied by high-fat/high-cholesterol diet (HFHCD) feeding.
To determine the in vivo role played by BAFF in the development of liver fibrosis, we studied the NASH model using BAFF-deficient mice and found that BAFF-deficient mice were protected from developing not only steatosis but also NASH and fibrosis. Our findings indicate that BAFF plays an important role in the development of liver fibrosis and may be a therapeutic target for NASH. Liver Inflammation Is Attenuated in BAFF−/− Mice in Murine Models of NASH In addition to steatosis, liver inflammation was attenuated in HFHCD/CCl4-treated BAFF−/− mice compared to HFHCD/CCl4-treated WT mice (Figure 1d,e). We performed immunohistochemical and flow cytometric analyses of immune cell populations and assessed inflammation-related gene expression in the liver by real-time RT-PCR. In the immunohistochemical analysis, the numbers of macrophages and crown-like structures (CLSs) in the liver were significantly lower in BAFF−/− mice than in WT mice treated with HFHCD/CCl4 (Figure 2a). Flow cytometric analysis showed that the proportion of F4/80+ CD11c+ M1-like macrophages in the livers of HFHCD/CCl4-treated BAFF−/− mice was significantly lower than that in HFHCD/CCl4-treated WT mice (Figure 2b). In addition, the expression of TNF-α, interleukin (IL) 6, monocyte chemotactic protein (MCP)-1, and M1-macrophage-related markers, such as CD11c and inducible nitric oxide synthase (iNOS), was significantly lower in the livers of BAFF−/− mice than in WT mice treated with HFHCD/CCl4 (Figure 2c). The expression of iNOS in macrophages was confirmed by immunohistochemistry, and iNOS-positive areas in the livers of HFHCD/CCl4-treated BAFF−/− mice were significantly smaller than in HFHCD/CCl4-treated WT mice (Figure 2d). These data suggest that the attenuated liver inflammation with BAFF deficiency is associated with a reduction in proinflammatory cytokines produced by macrophages in our models.
To test whether the role of BAFF is limited to the specific case of HFHCD/CCl4-induced liver injury, we used two alternative model systems: (1) feeding with HFHCD for a longer period (Figure 4a) and (2) feeding with a choline-deficient, L-amino-acid-defined high-fat diet (CDAHFD) (Figure 4b). In both models, collagen deposits in the liver were significantly lower in BAFF−/− mice than in WT mice (Figure 4c,d). Thus, liver fibrosis progression was ameliorated in the absence of BAFF in at least three model systems. Hepatic Stellate Cell Activation Is Driven by Interaction with Macrophages in Murine Models of NASH A crucial downstream consequence of hepatic inflammation is the activation of hepatic stellate cells (HSCs), the principal fibrogenic cell type in the liver. To explore the underlying mechanisms of the differences in liver fibrosis in our models, we investigated the direct effects of BAFF on HSCs. However, receptors for BAFF, such as BAFF receptor (BAFF-R), transmembrane activator and calcium modulator (TACI), and B-cell maturation antigen, were not expressed in primary cultured mouse HSCs. Furthermore, the addition of BAFF to HSCs in culture did not alter the expression of HSC activation genes such as α-SMA. These data indicate that BAFF has little or no direct effect on HSCs. Thereafter, we focused on macrophages, which are one of the main sources of potent pro-fibrogenic signals [19]. In addition to hepatocytes [13], macrophages express BAFF-R [20]. Hence, we examined whether BAFF-R exerted its signaling function in macrophages and found that nuclear factor (NF)-κB2 and RelB, components of the non-canonical NF-κB pathway, were induced in the mouse macrophage-like cell line RAW264.7 treated with BAFF (Figure 5a). Furthermore, as shown in Figure 5b, BAFF treatment upregulated the expression of genes encoding proteins related to inflammation and tissue repair, including iNOS and IL-10. These changes were inhibited by treatment with recombinant BAFF-R Fc (Figure 5b). Finally, we investigated whether the effect of BAFF on macrophages influenced HSC activation. Isolated mouse HSCs were cultured with conditioned medium (CM) from BAFF- or phosphate-buffered saline (PBS)-treated RAW264.7 cells in addition to lipopolysaccharide (LPS). The mRNA levels of TGF-β1 and α-SMA were significantly increased in HSCs following culture with CM from BAFF-treated RAW264.7 cells (Figure 6a). Moreover, nitric oxide (NO) (LPS 344.7 ± 94.08 µM; LPS+BAFF 423.4 ± 83.28 µM; p < 0.05) and TNF-α levels in CM from BAFF-treated RAW264.7 cells were significantly higher than those from PBS-treated RAW264.7 cells (Figure 6b).
However, the IL-6 levels in CM did not differ between the two groups. These results indicate that BAFF-treated macrophages generate inflammatory mediators such as NO and TNF-α and activate HSCs, leading to the development of liver fibrosis in NASH. Discussion The prevalence of NAFLD is increasing, and liver-related morbidity and mortality will dramatically increase worldwide within the next few decades [2]. At present, lifestyle modification is the mainstay of therapeutic recommendations, and no specific pharmacological treatment is available for NAFLD [21]. Drugs targeting metabolism, inflammation, and fibrogenesis are under development; however, several compounds have improved the investigational endpoints in only a subset of the patients exposed to them, indicating that targeting a single pathway may be insufficient [22]. To overcome these problems, combination therapies and/or therapies with multidirectional effects may be promising approaches. We previously found that BAFF is associated with NAFLD severity in Japanese patients [16].
Furthermore, BAFF deficiency prevents fat accumulation in the liver by suppressing the visceral adipose tissue inflammation and de novo lipogenesis in an HFD-fed mouse model [15]. Although liver fibrosis was not observed in liver specimens from HFDfed mice, the expression of fibrosis-related genes, such as TGF-β1, α-SMA, and Col-1a1, was significantly lower in the livers of BAFF −/− mice than in those of WT mice. Furthermore, BAFF promotes collagen production by dermal fibroblasts from patients with systemic sclerosis [23], and BAFF inhibition attenuates skin and liver fibrosis in mouse models of scleroderma [24]. These findings suggest that BAFF may play a role not only in metabolism and inflammation but also in fibrosis in the pathogenesis of NASH, which indicates that it is one of the therapeutic targets in NASH. Discussion The prevalence of NAFLD is increasing and liver-related morbidity and mortality will dramatically increase worldwide within the next few decades [2]. At present, lifestyle modification is the mainstay of therapeutic recommendations and no specific pharmacological treatment is available for NAFLD [21]. Drugs targeting metabolism, inflammation, and fibrogenesis are under development; however, several compounds have only improved the investigational endpoints in a subset of patients who are exposed to these drugs, indicating that targeting a single pathway may be insufficient [22]. To overcome these problems, combination therapies and/or therapies targeting multidirectional effects may be promising approaches. We previously found that BAFF is associated with NAFLD severity in Japanese patients [16]. Furthermore, BAFF deficiency prevents fat accumulation in the liver by suppressing the visceral adipose tissue inflammation and de novo lipogenesis in an HFD-fed mouse model [15]. 
In the present study, we report a new murine NASH model using HFHCD and CCl4, which develops the histological features of NASH with extensive fibrosis and severe inflammation. HFHCD has been widely used to establish mouse NASH models; however, the major disadvantage of this NASH model is that it does not fully progress to severe steatohepatitis, even after long-term feeding. CCl4 has traditionally been used for decades to induce liver injury and fibrosis in rodents [17,18,25]. Although the mechanical stimulation of CCl4 differs from the natural history of NASH, we used CCl4 as a fibrosis accelerator in this model. HSCs are a major source of activated myofibroblasts, and their activation is known to drive fibrosis in the liver [26]. HFHCD and CCl4 treatment induced the proliferation and activation of HSCs, which were responsible for the rapid progression of fibrosis in our NASH model. Consistent with our previous study [15], hepatic steatosis was ameliorated in BAFF−/− mice compared with WT mice (Figure 1). Moreover, liver inflammation was attenuated in BAFF−/− mice compared with WT mice (Figure 2). Itoh et al. [27,28] have reported that CD11c+-activated macrophages induce the formation of hepatic CLSs in murine models and patients with NASH, which is consistent with the findings based on our model.
BAFF deficiency reduced the number of macrophages, especially CD11c+ M1-type macrophages that produce inflammatory signals such as TNF-α and iNOS, and CLS formation in the livers of HFHCD/CCl4-treated mice (Figure 2). It is widely accepted that the M1/M2 balance of macrophages plays an important role in obesity, metabolic syndrome, and fatty liver disease [29]. Macrophages respond to a cocktail of injury signals, including BAFF, and secrete inflammatory substances that may contribute to NASH development and exacerbation. One of the most relevant findings of our present study is that BAFF-deficient mice are protected from the development of NASH and the progression of fibrosis (Figure 3), which may contribute to the higher liver weight in BAFF−/− mice compared with WT mice (Figure 1b). Furthermore, we confirmed the effect of BAFF on liver fibrosis in the other two models (Figure 4). One of the key features of NASH development and fibrosis progression is the activation of HSCs [26,30–32]. In addition to mouse HSCs, we observed in a preliminary study that the human HSC cell line LX-2 did not express BAFF-R. Although direct effects of BAFF on HSCs were not observed in vitro in our study, BAFF stimulated macrophages to secrete cytokines and other soluble factors that contribute to fibrosis by activating HSCs (Figure 6). Similarly, previous reports have demonstrated that macrophages drive HSC activation through the release of soluble factors, such as cytokines, chemokines, and reactive oxygen species, and promote HSC survival [26,31]. Moreover, HSCs and macrophages interact and remodel the extracellular matrix and immune microenvironment among liver cells in the fibrotic liver [32]. Although the influence of other factors cannot be disregarded, NO and TNF-α released from macrophages are at least partly involved in this process. These data indicate that this inflammatory switch is a definitive step in fibrosis progression.
Recent reports have outlined the heterogeneity of hepatic macrophages in NASH [27,28,33–36]. Satoh et al. [33] have reported that Ceacam1+Msr1+Ly6C−F4/80−Mac1+ monocytes, which are segregated-nucleus-containing atypical monocytes, are critical for fibrosis. Furthermore, single-cell RNA sequencing has revealed that specific phenotypes of macrophages characterized by high expression of triggering receptor expressed on myeloid cells 2 expand in both mouse models and human NASH. These cells, also called NASH-associated macrophages, have been reported to be linked to CLS aggregates and are correlated with disease severity [34]. Initially, we investigated these cells from the liver in our models using flow cytometry; however, their number did not differ between WT and BAFF−/− mice. Further studies on the functional properties of these cells are necessary in this regard. Adipose tissue is also a key factor in the pathogenesis of NAFLD. We previously showed that oxidative stress in visceral adipose tissue (VAT), which is one of the underlying causes of NAFLD, regulated serum BAFF levels in HFD-fed mice [14]. Although we did not investigate the VAT in this model, similar mechanisms may play a role in regulating the BAFF level. We primarily focused on the metabolic aspects of BAFF in the liver; however, BAFF also plays an important role in regulating the immune system [37]. Recently, B lymphocytes have been reported to be central mediators of the progression of NAFLD and NASH-associated hepatocellular carcinoma [25,38–41]. Most of these reports have suggested a pathogenic role of B cells in NAFLD. Interestingly, in mice fed a methionine-choline-deficient diet, B2-cell depletion, including antibody-mediated BAFF neutralization, ameliorated parenchymal damage and lobular inflammation [40]. Furthermore, Novobrantseva et al.
[25] have reported that B cells play an important antibody-independent role in tissue repair following liver injury and the development of liver fibrosis. Therefore, it is conceivable that B-cell-directed therapies, including B-cell depletion and approaches targeting B-cell inflammatory mediators such as TNF-α, may ameliorate NASH progression [38]. However, Karl et al. [41] have recently reported that B cells have both detrimental and protective effects in a NAFLD mouse model. Therefore, future studies are warranted to investigate the immunological roles of BAFF and B-cell compartments prior to clinical trials. In summary, we demonstrated that BAFF depletion ameliorates NASH development and fibrosis progression. Although more preclinical evidence is required, our data suggest that targeting BAFF may be beneficial in treating NASH.
Animals
All study protocols complied with the guidelines of Ehime University (Ehime, Japan), and the protocol was approved by Ehime University Animal Research (No. 05TI70-16). Male C57BL/6J WT mice and BAFF−/− mice were purchased from CLEA Japan (Tokyo, Japan) and the Jackson Laboratory (Bar Harbor, ME, USA), respectively. They were maintained at the Department of Biological Resources, Integrated Center for Science, Ehime University, under controlled temperature, humidity, and light (12-h light/dark cycles). Six-week-old WT and BAFF−/− mice were fed HFHCD (20 g% fat, 18 g% cholesterol, 22 g% protein, and 45 g% carbohydrates; 1883.7 kJ [450 kcal]/100 g; D16010101; Research Diets, New Brunswick, NJ, USA). They received intraperitoneal injections of CCl4 (Wako, Osaka, Japan) at a dose of 0.4 mL/kg, diluted 1:25 in olive oil, twice a week for 4 weeks from 9 weeks of age. Mice were maintained on each diet and sacrificed at 13 weeks of age. Serum and liver samples were stored at −80 °C until use. The liver was submerged in RNAlater (Life Technologies, Carlsbad, CA, USA) overnight and stored at −20 °C.
Serum AST and ALT levels were measured using a Hitachi 7180 autoanalyzer (Hitachi, Ltd., Tokyo, Japan).
Histological and Morphometric Analysis
Liver tissues were fixed in neutral-buffered formalin and embedded in paraffin. Three-micrometer-thick sections were stained with hematoxylin and eosin, Sirius red (SR), and α-SMA (Thermo Fisher Scientific, Waltham, MA, USA). Histological examination was performed in a blinded manner by two experienced liver pathologists using a histological scoring system for NAFLD [42]. The positive area of each stain was measured digitally using ImageJ software (National Institutes of Health, Bethesda, MD, USA). To evaluate the degree of fat accumulation, the livers were fixed with osmium tetroxide (OsO4), as described previously [15]. SR-positive and α-SMA-positive areas were measured using histological light-microscopy images (10×; 7 sections per animal, n = 8 animals/group).
Measurement of Hepatic Triglyceride and Cholesterol
Hepatic triglyceride and cholesterol levels were measured at Skylight Biotech (Akita, Japan) using the Folch technique with Cholestest TG and Cholestest CHO kits (Sekisui Medical, Tokyo, Japan), respectively.
Collagen Content in Hepatic Tissue
The total collagen content in hepatic tissue was measured using a commercially available kit (QuickZyme Total Collagen Assay; QuickZyme Biosciences, Leiden, the Netherlands).
Quantitative Real-Time RT-PCR
RNA was extracted using an RNeasy Plus Mini Kit (Qiagen, Hilden, Germany). Reverse transcription was performed using a High-Capacity cDNA Reverse Transcription kit (Applied Biosystems, Foster City, CA, USA), and real-time RT-PCR analysis was performed using SYBR Green I (Roche Diagnostics, Basel, Switzerland) on a LightCycler 96 (Roche Diagnostics). Primer sequences and annealing temperatures are provided in Table 1.
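Relative quantification from such RT-qPCR runs is commonly computed with the 2^(−ΔΔCt) method. The sketch below illustrates housekeeping-gene normalization of this kind (HPRT1 as the reference gene); the Ct values are invented, and the study's exact calculation may differ:

```python
def relative_expression(ct_target, ct_hprt1, ct_target_ref, ct_hprt1_ref):
    """Fold change of a target gene, normalized to HPRT1 and expressed
    relative to a reference group (e.g., WT mice), via 2^(-ΔΔCt)."""
    delta_ct = ct_target - ct_hprt1              # normalize to housekeeping gene
    delta_ct_ref = ct_target_ref - ct_hprt1_ref  # same normalization for the reference group
    return 2.0 ** -(delta_ct - delta_ct_ref)

# Invented Ct values: identical ΔCt in both groups gives a fold change of 1
print(relative_expression(24.0, 20.0, 25.0, 21.0))  # → 1.0
```

A ΔCt one cycle lower than the reference group corresponds to a two-fold higher expression, since each PCR cycle doubles the product.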
Gene expression data were normalized to the housekeeping gene encoding hypoxanthine phosphoribosyltransferase 1 (HPRT1) and expressed as a ratio of the values obtained for WT mice and control RAW264.7 cells, respectively. The CellAmp Direct TB Green RT-qPCR Kit (TAKARA Bio, Shiga, Japan) was used for primary cultured stellate cells according to the manufacturer's protocol. Gene expression data were normalized to HPRT1 and expressed as a ratio of the values obtained for the CM from PBS-treated RAW264.7 cells in addition to LPS (Sigma-Aldrich, St. Louis, MO, USA).
Isolation of Primary HSCs
Primary HSCs were isolated from male C57BL/6 mice using the pronase-collagenase digestion method [43] and were cultured in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 10% fetal bovine serum (FBS; Merck, Darmstadt, Germany). One day after culturing, the cells were treated with 100 ng/mL murine recombinant BAFF (R&D Systems, Minneapolis, MN, USA) for 24 h.
Cell Lines
The mouse macrophage-like cell line RAW264.7 was purchased from DS Pharma Biomedical Japan (Osaka, Japan). RAW264.7 cells were cultured in DMEM with high glucose, L-glutamine, and sodium pyruvate (Thermo Fisher Scientific), supplemented with 10% FBS and 1% penicillin-streptomycin. The cells were treated with 2 µg/mL murine BAFF-R Fc (Enzo Life Sciences, Plymouth Meeting, PA, USA) or PBS. After 2 h, RAW264.7 cells were treated with 100 ng/mL murine recombinant BAFF or PBS for 24 h. To assess the soluble factors released from macrophages, RAW264.7 cells were treated with 100 ng/mL murine recombinant BAFF and 100 ng/mL LPS for 20 h, and their supernatants were added to primary cultured HSCs on the day of collection.
NF-κB Activity Assay
Nuclear protein extracts were prepared from RAW264.7 cells using the Nuclear Extract kit (Active Motif, Carlsbad, CA, USA) according to the manufacturer's protocol.
NF-κB activation was analyzed with the TransAM NF-κB Family kit (Active Motif), as previously described [13,14].
Nitrate and TNF-α Determination
The NO concentration in the culture supernatants was determined using the QuantiChrom Nitric Oxide Assay Kit (BioAssay Systems, Hayward, CA, USA) according to the manufacturer's protocol. The TNF-α concentration in the culture supernatants was measured using an enzyme-linked immunosorbent assay (R&D Systems).
Statistical Analysis
Data were analyzed using JMP version 11.2.0 (SAS Institute, Cary, NC, USA). Values are shown as the mean ± standard error of the mean or standard deviation. Normally distributed data, skewed data, and categorical data were analyzed using unpaired t-tests, Mann-Whitney U tests, and χ² tests, respectively. Differences were considered statistically significant at p < 0.05.
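As a worked illustration of the unpaired t-test applied to normally distributed data (the groups below are synthetic, not study data; skewed and categorical data would instead use the Mann-Whitney U and χ² tests), a minimal pooled-variance implementation is:

```python
import math
from statistics import mean, variance

def unpaired_t(a, b):
    """Student's unpaired two-sample t statistic with pooled variance,
    as used to compare two independent, normally distributed groups."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

# Synthetic groups: equal means give t = 0
print(unpaired_t([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))            # → 0.0
print(round(unpaired_t([2.0, 3.0, 4.0], [1.0, 2.0, 3.0]), 4))  # → 1.2247
```

The statistic would then be compared against a t distribution with na + nb − 2 degrees of freedom to obtain the p-value.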
Heterogeneous Wireless Sensor Network for Transportation System Applications
Introduction
A recent study by the UK Government's Office of Science and Innovation, which examined how future intelligent infrastructure would evolve to support transportation over the next 50 years, looked at a range of new technologies, systems, and services that may emerge over that period [1,2]. One key class of technology identified as having a significant role in delivering future intelligence to the transport sector is Wireless Sensor Networks (WSNs), in particular the fusion of fixed and mobile networks to help deliver a safe, sustainable, and robust future transport system based on better collection of data, its processing and dissemination, and the intelligent use of the data in a fully connected environment. As future intelligent infrastructure will bring together and connect individuals, vehicles, and infrastructure through wireless communications, it is critical that robust communication protocols are developed.
Mobile wireless ad-hoc networks (MANETs) are self-organising networks where nodes exchange data without the need for an underlying infrastructure [3]. MANETs have attracted extraordinary attention from the research community in recent years, including in real transport applications. In the road transport domain, schemes which are fully infrastructureless, and those which use a combination of fixed (infrastructure) devices and mobile devices fitted to vehicles and other moving objects, are of significant interest to the transport community, as they have the potential to deliver a "connected environment" where individuals, vehicles, and infrastructure can coexist and cooperate, thus delivering more knowledge about the transport environment, the state of the network, and who indeed is travelling or wishes to travel [4]. This may offer benefits in terms of real-time management, optimisation of transport systems, intelligent design, and the use of such systems for innovative road charging and possibly carbon trading schemes, as well as through CVHS (Cooperative Vehicle and Highway Systems) for safety and control applications. Within the vehicle, the devices may provide wireless connection to various Information and Communications Technologies (ICT) components in the vehicle and connect with sensors and other devices within the engine management system [5]. Advances in wireless sensor networking techniques, which offer tiny, low-power and MEMS (Micro-Electro-Mechanical Systems) integrated devices for sensing and networking, will exploit the possibility of vehicle-to-vehicle and vehicle-to-infrastructure communications [6].
In this paper, wireless sensor network applications in transport systems and the use of middleware to integrate heterogeneous Wireless Cooperative Objects (WICOs) are discussed. Section 2 describes the EMMA project, its hierarchical approach, and its communication technologies. An overview of wireless sensor network middleware and the components of the EMMA middleware is presented in Section 3. Section 4 describes the different hardware platforms which are used as wireless cooperating objects in the prototype applications. Three applications, one for each hierarchical level, and an inter-hierarchical-level application are given in Section 5. Section 5 also presents encouraging results obtained from experiments investigating the feasibility of utilising the EMMA middleware in real transport system applications. Conclusions are then presented in Section 6.
The EMMA Project
The EMMA (Embedded Middleware in Mobility Applications) project is partly funded by the European Commission under the Information Society Technologies (IST) Priority of the 6th Framework Programme. The EMMA project was committed to delivering a middleware platform and a development environment which facilitate the design and implementation of embedded software for cooperative sensing objects [7,8]. The EMMA network architecture (Figure 1) can be considered at three levels: the engine level, the vehicle level, and the supra-vehicle level. Recently, many wireless sensor network applications have been developed for a variety of purposes, including transport monitoring and control. However, there are still numerous challenges to be overcome if wireless sensor devices are to communicate with each other in an intelligent, cost-effective, and reliable way.
EMMA communication networks at the supra-vehicle level can be considered mobile wireless sensor networks, while vehicle-level and engine-level networks can be considered static wireless sensor networks. Current wireless sensor networks employ conventional technologies to interact with other devices in the network, and many companies and organizations are developing various wireless communication interfaces and protocols for sensors. Bluetooth is currently the most widely used automotive wireless technology for in-vehicle communication, and Wi-Fi is used for vehicle-to-vehicle communication by several pilot research projects such as the Car2Car consortium [9]. ZigBee technology is able to provide the interconnection of low-power wireless sensors within vehicles and from vehicle to infrastructure. The ZigBee standard has evolved since its original release in 2004; it is a low-cost, low-power wireless networking standard for sensors and control devices. ZigBee provides network speeds of up to 250 kbps and is expected to be largely used in typical wireless sensor network applications where high data rates are not required [10,11].
The EMMA project needed to discover which communication technologies are most suitable and how networks are formed by WICOs from different levels. ZigBee, Bluetooth, and Wi-Fi have been designed for short-range wireless applications with low-power solutions and could be used at the EMMA infrastructure level. ZigBee can accommodate larger numbers of devices than Bluetooth; on the other hand, Bluetooth offers high bandwidth with relatively high throughput. EMMA network applications do not require a high-data-rate communication technology, as they are based on simple data exchange. ZigBee provides a 250 kbps data rate, which is expected to be sufficient for the EMMA sensor network applications. Notably, ZigBee uses low-overhead data transmission and requires few system resources, which are vitally important factors for embedded wireless sensor networks. Also, the mesh networking features of ZigBee technology allow devices to extend coverage and optimize radio resources. These features show that ZigBee is a suitable communication technology for the EMMA project applications.
Middleware for Wireless Sensor Networks
The term middleware refers to the software layer between the operating system and the applications. A middleware layer seeks primarily to hide the underlying network environment's complexity by insulating applications from explicit protocol handling, disjoint memories, data replication, network faults, and parallelism. Middleware further masks the heterogeneity of computer architectures, operating systems, and communication technologies to facilitate application programming and management [12]. The design and development of a successful middleware should address many challenges in WSNs, such as scarcity of resources, mobility, heterogeneity, data aggregation, quality of service, and security. Several middleware systems have been proposed for WSNs, but each addresses a different part of the problem space. Notable middleware for sensor networks are Impala, Maté, TinyDB, TinyCubus, TinyLime, and MiLAN [13]. Most of these are built on top of TinyOS [14], an open-source operating system mainly designed for wireless sensor networks. Middleware can be classified as service-centric or data-centric: service-centric middleware is driven by commands, while data-centric middleware is driven by data.
A service-centric middleware exposes well-defined and self-contained functions that do not depend on the context or the state of other services. Such a service is executed by explicitly calling it; after the completion of the service, a response is returned. This is the principally used paradigm in traditional distributed systems, either with a procedural abstraction or based on object-orientation. Data-centric middleware is mostly concerned with the communication of data and provides a small general-purpose API to send and receive data. There is no client-server relationship, but there is a distinction between data providers and data consumers. The data-centric approach is mainly followed in the area of sensor networks, where the naming and type of data play a more important role than the specific device responsible for its processing. The EMMA Embedded Middleware Platform (EM2P) is designed to support a range of applications running on different WICOs. EM2P is designed in a modular fashion. The communication adapters form the interface between the communication module and the actual hardware drivers. The communication module has a generic part and two specialised parts for message-based and data-centric communication. The security add-on is configurable via the middleware API but is actually used in the communication module; the same applies to the data connector. The installation and the configuration and monitoring modules use message communication and are, therefore, built on top of the message communication. A general communication abstraction is used by the synchronisation module. The main components of EM2P are shown in Figure 2.
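The data-centric pattern described above can be illustrated with a toy publish/subscribe dispatcher: consumers register a callback for a named data item, and the middleware invokes it when a provider publishes matching data. All class and method names below are invented for illustration and do not reflect the real EM2P API:

```python
class ToyMiddleware:
    """Minimal data-centric dispatcher: consumers subscribe to a named
    data item; publishing invokes every registered callback."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        # Register a consumer callback for the named data item
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # A data provider pushes a value; all consumers are notified
        for callback in self._subscribers.get(topic, []):
            callback(payload)


mw = ToyMiddleware()
readings = []
mw.subscribe("oil_pressure", readings.append)  # consumer registers a callback
mw.publish("oil_pressure", 3.2)                # provider publishes a data value
print(readings)  # → [3.2]
```

Note how the consumer addresses the *data* ("oil_pressure"), not a specific device, which is the essence of the data-centric approach.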
The middleware abstracts from the underlying communication technology by providing a high-level addressing mechanism, and the communication functions do not imply a specific communication technology. The middleware converts the local representation of a data value to its network representation, and vice versa, when sending or receiving data. Messages can be sent directly to a specific WICO by knowing only its EMMA WICO address. An application can register for the reception of messages; a callback function is invoked when a message is received. The content of a message is completely controlled by the application, and no data conversion is done by the middleware. Therefore, the application has to ensure that the receiver understands the message contents. EM2P provides publish/subscribe communication, request/response communication, and data connector functionalities.
EMMA Wireless Cooperative Objects
The following sections explain three different platforms which are used in the EMMA project prototype applications: the commercially available Crossbow MicaZ and TelosB, and an off-the-shelf Xilinx ML403 FPGA board. The C-based multithreaded NanoQplus [15] operating system is used on the MicaZ and TelosB motes, while a Linux-based Qplus [15] operating system is used on the Xilinx ML403 FPGA. These devices, with sensors, actuators, and related software, are called WIreless Cooperating Objects (WICOs), which may be heterogeneous but are nevertheless able to cooperate together to achieve specific goals [8]. Of the mote platforms available in the market, the Crossbow MicaZ [16] family was chosen for the EMMA project as it features sensing and networking capabilities with low power consumption, using ZigBee as the communication protocol. Figure 3 shows a Crossbow MicaZ mote.
The MicaZ is a member of the Crossbow Mica mote family whose radio transceiver uses the Chipcon CC2420 IEEE 802.15.4 (ZigBee)-compliant chipset. This allows the MicaZ to communicate with other ZigBee-compliant equipment. The software stack includes a MicaZ mote-specific layer with ZigBee support and platform device drivers, as well as a network layer for topology establishment and single-/multi-hop routing features. It is mainly used for research and development of low-power wireless sensor network applications. The MicaZ mote platform is built around the Atmel ATmega128L processor, which is capable of running at 7.37 MHz. The MicaZ motes have 128 Kbytes of program memory, 512 Kbytes of flash data-logger memory, and 4 Kbytes of SRAM. Power is provided by two AA batteries, and the devices have a battery life of roughly one year, depending on the application (a very low duty cycle is assumed). Sensor boards can be attached through a surface-mount 51-pin connector, Inter-IC (I2C), Digital Input/Output (DIO), a Universal Asynchronous Receiver Transmitter (UART), and a multiplexed address/data bus.
Engine Level WICO. For the engine-level application, the Crossbow TelosB [16] mote platform was chosen for the EMMA project prototype applications. Compared to the MicaZ mote, the TelosB mote has the higher processing power required to implement the applications with a multitasking approach: running several threads of the middleware and the acquisition task while keeping latencies low in the communication among the Engine WICOs. The high ADC resolution is necessary to satisfy one of the needs of the engine application, namely the acquisition of an accurate analogue signal. Less important but useful features are the presence of a USB connection, the on-board sensors, and LEDs for demonstration and development purposes. The Crossbow TelosB (Figure 4) mote is a commercially available mote platform with a Chipcon CC2420 IEEE 802.15.4-compliant radio.
Vehicle Level WICO.
At the vehicle level, there is a need to introduce wireless communication to existing sensing technologies. In addition, it may be necessary to increase the processing power and memory space in order to ensure the EMMA middleware runs seamlessly without compromising the performance of each sensor. It is important to keep the low-power communications available on the other hardware alternatives used, but it may be necessary for the units themselves to be capable of running much more complex algorithms. The hardware chosen for the EMMA project for this level of WICO therefore reflects the extra processing power needed. The vehicle-level WICO consists of two elements. Firstly, an off-the-shelf Xilinx ML403 FPGA [17] board is used as the foundation of the system; this FPGA contains a powerful PowerPC microprocessor. Secondly, the functionality of this board is extended using a custom-built daughter board. This daughter board contains a number of different hardware devices required by the project, including a 12 V automotive power supply, a CAN [18] port for interfacing to automotive ECUs, and two further RS232 ports which are used to send and receive data over ZigBee and from the other devices as appropriate (e.g., GPS). Figure 5 shows the Xilinx ML403 FPGA with the TRW Conekt interface board. The prototype applications were developed not only to show the results of the project, but also to demonstrate the validity of the EMMA approach and the potential applications of heterogeneous wireless networks in transport systems.
Engine Level Application.
The application proposed in the EMMA project is a solution for a new engine control architecture (Figure 6), characterised by the integration of new sensors (in-cylinder pressure, oil pressure, and valve lift, not available on current engines) without redesigning the engine ECU. The engine network (Figure 7), wirelessly connected with ZigBee technology, is composed of: (i) four Cylinder WICOs, sampling the two sensors (a pressure sensor and a valve-lift sensor) on each cylinder; (ii) an oil pressure WICO, sampling a sensor measuring the oil pressure in the oil delivery head; (iii) an ECU WICO, composed of a wireless node connected to the ECU (Electronic Control Unit). The role of the four Cylinder WICOs is to sample the two analogue channels (connected to the valve-lift and in-cylinder pressure sensors) and calculate the maximum value for the first and the integral over a whole engine cycle for the second. The oil pressure WICO is responsible for sampling the oil pressure sensor. Upon a request from the ECU (simulated by LabVIEW software on a laptop), the ECU WICO queries the other engine WICOs, collects the received messages, calculates their latency and validity, and returns the data to the ECU.
Implementation. The Engine WICO consists of two main components: a TelosB mote and a hardware adaptation module, necessary to interface the connections available on the board to the conditioning electronics required by the engine sensors. The application has been evaluated on an ad-hoc test bench, where the values acquired from the real sensors are reproduced on the analogue outputs of an acquisition board, reading them from a set of measurements previously acquired from a real Multijet engine. The ECU is implemented by software running on a laptop. All the WICO applications have been programmed using the EM2P (EMMA middleware) functionalities. Several sets of experiments have been carried out for the data-centric paradigm (request/response).
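The per-cycle quantities computed on each Cylinder WICO (the maximum valve lift and the integral of in-cylinder pressure over the engine cycle) can be sketched as follows; the function is a simple trapezoidal-rule sketch, and the sample data are invented for illustration:

```python
def cycle_features(times, valve_lift, pressure):
    """Return (max valve lift, trapezoidal integral of in-cylinder
    pressure over one engine cycle) from the two sampled channels."""
    max_lift = max(valve_lift)
    integral = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        # Trapezoidal rule over each sampling interval
        integral += 0.5 * (pressure[i] + pressure[i - 1]) * dt
    return max_lift, integral

# Invented samples: time (s), valve lift (mm), in-cylinder pressure (bar)
t = [0.00, 0.01, 0.02, 0.03]
lift = [0.0, 4.2, 8.1, 0.5]
p = [1.0, 3.0, 5.0, 1.0]
print(cycle_features(t, lift, p))
```

Only these two scalars, rather than the raw waveforms, need to be returned to the ECU WICO, which keeps the answer messages short on the low-rate ZigBee channel.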
Experimental Results. This engine application has been validated in three different scenarios: the whole test bench has been tested in the laboratory environment and in the environmental chamber, while the engine nodes (without the acquisition board) have been tested directly on a real engine. A set of tests has been performed for the three different environments based on the EMMA request/response mechanism (data-centric) and for several values of rpm (from 1000 rpm to 6000 rpm). For each test, a set of log files (registering latencies, packet loss, and other WSN-related data) has been collected for offline analysis of the application performance.
Laboratory Environment. The laboratory tests were carried out on a test bench using an engine simulator for six different rpm values (1000 to 6000 rpm, in steps of 1000). Table 1 summarises the results of each log file generated from each test run. The average and standard deviation of each cylinder's latency have been calculated using only complete packets, that is, where all Cylinder WICOs returned a packet flag of 0. Figure 8 shows that the average latency of all Cylinder WICOs increases slightly in line with the rpm, whilst the standard deviations are consistent throughout all rpm. These results suggest that it is also possible to make a data-centric measurement of the engine sensors for a single engine cycle by simply performing a request with the necessary advance. Packet loss is under 2% for all rpm values, which demonstrates good communication stability for the data-centric paradigm.
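The offline log-file analysis described above (per-cylinder average and standard deviation over complete packets only, plus a packet-loss percentage) can be sketched as follows; the record layout and the values are assumptions for illustration, not actual log data:

```python
from statistics import mean, stdev

# Hypothetical log records: (cylinder_id, latency_ms, packet_flag),
# where a packet flag of 0 marks a complete packet
log = [
    (1, 41.0, 0), (2, 62.5, 0), (1, 40.2, 0),
    (2, 63.1, 0), (1, 39.8, 1),  # flag != 0 -> incomplete, excluded
]

def latency_stats(records, cylinder):
    """Mean and standard deviation of a cylinder's latency, computed
    only over complete packets (packet flag of 0)."""
    vals = [lat for cyl, lat, flag in records if cyl == cylinder and flag == 0]
    return mean(vals), stdev(vals)

incomplete = sum(1 for _, _, flag in log if flag != 0)
loss_pct = 100.0 * incomplete / len(log)
print(latency_stats(log, 1), f"packet loss {loss_pct:.0f}%")
```

Filtering to complete packets before averaging, as the analysis above does, prevents truncated exchanges from skewing the latency statistics.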
Engine Environment. The engine environment tests were conducted as before, using a petrol engine to measure the influence of electromagnetic noise and the presence of metal objects in close proximity to the WICOs, in order to model the effects of "real-world" conditions. The tests were carried out using an engine simulator for four different rpm values (1000, 2000, 3000, and 6000 rpm), all with the engine switched on. Table 2 summarises the results of each log file generated from each test run. The average and standard deviation of each cylinder's latency have been calculated using only complete packets, that is, where all Cylinder WICOs returned a packet flag of 0. Figure 9 shows that the average WICO latency increases in line with the rpm for the Cylinder 1, Cylinder 3, and Cylinder 4 WICOs. Although the average latency of the Cylinder 2 WICO actually decreases slightly at 2000 rpm, it increases for higher rpm values. The standard deviations of all Cylinder WICO latencies are consistent at all rpm. This further supports the notion that, for data-centric communication, it is possible to make a measurement of the engine sensors for a single engine cycle by simply performing a request with the necessary advance. Packet loss in the engine environment is low, around 1% for all rpm, demonstrating good communication stability for the data-centric paradigm.
Environmental Chamber. Tests were carried out using an environmental chamber to control temperature and humidity conditions. The Cylinder 1 and Cylinder 2 WICOs were placed inside the chamber and the internal temperature was varied for each test, whilst the remaining WICOs were placed outside the chamber. Test runs were conducted at constant temperatures of −10, 10, 30, 50, and 80 °C but, unlike the laboratory and engine tests, only at two different rpm values, 1000 and 3000, so the main variable investigated here was the effect of the temperature inside the chamber on WICO performance. Tables 3 and 4 summarise these tests.

The results of the data-centric environmental chamber tests indicate that higher temperatures affect the performance of the WICOs. To confirm this finding, the constant temperature tests were followed by a further variable temperature test, which applied a positive temperature ramp from −30 to 90 °C to the WICOs in the environmental chamber at 1000 rpm. Table 5 presents the results of the variable temperature test, which further illustrate the effects of temperature on the WICO latency values. The WICOs inside the chamber (Cylinder 1 and Cylinder 2) have higher standard deviations than those outside, and overall packet loss is also slightly higher than in the other tests, at 3%. Figure 10 illustrates how the latencies of each WICO changed as the temperature ramp was applied, and clearly demonstrates the effect of increasing temperature on the WICO latency.
For the laboratory and engine environments, the latency standard deviations of all Cylinder WICOs were remarkably consistent, which demonstrates robust data communication stability across both environments for the data-centric paradigm. For the environmental chamber tests, an increase in the average latency of the Cylinder WICOs within the chamber was observed, along with an increase in the standard deviations, which indicates a slight loss of communication stability.

The results of the environmental chamber tests clearly illustrate the impact of higher temperatures on WICO latency performance and on the packet loss rate. The impact on latency performance is particularly noticeable in these tests, where the differences in individual WICO latencies are primarily due to the experimental setup. During these tests, two Cylinder WICOs (Cylinder 1 and 2) were placed in the environmental chamber while the others (the ECU WICO and the Cylinder 3 and 4 WICOs) were placed outside. As the temperature increases, the time taken by the TelosB motes in the chamber to perform the channel sampling and the calculation increases. For this reason, both operations are completed first by the Cylinder 3 and 4 WICOs and only then, at higher temperatures, by the Cylinder 1 and 2 WICOs, rather than in the expected serial order.

As an example, given that the bit rate of the ZigBee channel is around 40 kbps, if the answer from each Cylinder WICO is around 800 bits long, each answer keeps the ZigBee channel busy for at least 800 bits / 40,000 bps = 20 ms. Adding the ZigBee overhead and the overhead introduced by the EMMA middleware, a difference of about 40 ms between the latency values of two contiguous Cylinder WICOs is understandable. This explains why the latency increases for the Cylinder 1 and 2 WICOs in the environmental chamber, and why they are the last in the answer message queue.
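The channel-occupancy arithmetic above can be checked with a short sketch, which also illustrates why at high rpm the four answers no longer fit inside a single engine cycle. The 800 bit and 40 kbps figures are the ones quoted in the text; the assumption that one 4-stroke cycle spans two crankshaft revolutions is ours and is not stated in the paper.

```python
# Back-of-envelope check: channel busy time per answer vs. engine cycle time.
def channel_busy_ms(answer_bits, bitrate_bps):
    # time (ms) one answer keeps the ZigBee channel busy
    return 1000.0 * answer_bits / bitrate_bps

def engine_cycle_ms(rpm):
    # duration (ms) of one 4-stroke cycle, i.e. two crankshaft revolutions
    return 2 * 60_000.0 / rpm

per_answer = channel_busy_ms(800, 40_000)   # 20 ms per Cylinder WICO answer
four_answers = 4 * per_answer               # 80 ms for all four cylinders
cycle_at_6000 = engine_cycle_ms(6000)       # 20 ms per cycle at 6000 rpm
```

At 6000 rpm the four answers together need roughly four times the duration of one engine cycle, even before protocol and middleware overheads are added.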
The increase in the loss rate can be attributed to the fact that all electronic devices have an operating point at which they behave normally. As the temperature increases, the I/V (current-voltage) characteristic of a device changes, and its behaviour can differ from what would be expected under "normal" conditions. For the WICOs, this change in I/V characteristics means that a ZigBee transmitter chip may behave erroneously, which in turn produces corrupted data at the chip level. If an error does occur, repeated retransmission takes place on the chip as well as at the MAC level.

The experimental results of all tests performed on the three scenarios have highlighted some key findings and issues. (1) The latencies are quite stable with respect to rpm, but they increase when the temperature reaches 50-60 degrees: this suggests that the TelosB mote requires further hardware development for an automotive engine-level application. (2) The message-centric version of the application showed a lower packet loss rate than the data-centric version: this suggests a difference between the two paradigms in the call-back implementation or in the thread management of the OS. (3) For high rpm values, ZigBee was unable to manage synchronously the connection between the ECU WICO and the four Cylinder WICOs: the amount of data transmitted by each Cylinder WICO keeps the RF channel busy for a number of milliseconds comparable with the engine period, which does not allow the ECU WICO to collect the data from all Cylinder WICOs in time.

Vehicle Level Application.
Figure 11 shows the example application that was developed to test the EMMA system. The overall purpose of the vehicle WICOs was to provide the absolute position of targets being tracked by the radar. This was achieved by combining the absolute position of the vehicle (based on GPS and vehicle dynamics data) and the relative position of the detected target (using an automotive ACC radar). NMEA 0183 format data was used to transmit the GPS and vehicle data to the Radar WICO within the system. This standardised format was selected to ensure maximum interoperability of the individual WICOs in future setups.

Implementation. All implementations on the Xilinx ML403 FPGA board follow the generic architecture layout shown in Figure 12. The applications were created using the Qplus [15] operating system, which is based on Linux, and the EMMA middleware.

GPS WICO. The interface to the Radar WICO was implemented using the publish/subscribe EM2P functionality to send the required GPS NMEA sentences to the main Radar WICO once they have been correctly received from the GPS unit. During development of the host vehicle tracking algorithm, it was discovered that only the GPGGA NMEA sentence was required for the application, so all other sentences were filtered out by the GPS WICO.

Vehicle Dynamics WICO. The interface to the Radar WICO was implemented using the request/response EM2P functionality to send the latest vehicle dynamics data to the main Radar WICO when requested. The WICO buffers the data received from the individual sensors and sends the latest full update when requested. The NMEA sentence received from the digital compass was modified to filter out all data except the required heading data. The integrity of all received data was checked before it was passed on to the Radar WICO.
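The sentence filtering performed by the GPS WICO can be sketched as follows. This is an illustrative snippet, not the WICO firmware: it relies only on the fact, stated above, that all NMEA 0183 sentences other than GPGGA are dropped, and on the standard NMEA framing where each sentence starts with "$" followed by the sentence identifier.

```python
# Keep only NMEA sentences of one type (GPGGA), dropping everything else,
# as the GPS WICO does before forwarding data to the Radar WICO.
def filter_nmea(lines, keep="GPGGA"):
    """Return only sentences of the given type, e.g. those starting $GPGGA,..."""
    return [line for line in lines if line.startswith("$" + keep + ",")]

sample = [
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47",
    "$GPGSA,A,3,04,05,,09,12,,,24,,,,,2.5,1.3,2.1*39",
    "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A",
]
gga_only = filter_nmea(sample)
```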
Radar WICO. The interfaces to the other two WICOs are defined above and were designed to exercise as many of the EM2P interface options as possible. The overall scheduling of the application was implemented so that the tracking algorithm runs after a full update of radar data has been received from the ACC radar. The request for the vehicle dynamics data was sent after the algorithm had run, so that the next run of the algorithm has the latest vehicle dynamics and radar data. The GPS data was published from the GPS WICO completely asynchronously to the rest of the application. The host vehicle's GPS position was calculated in the Radar WICO using a Kalman filter algorithm that fuses GPS and vehicle dynamics data, in order to update the position at a higher rate and overcome synchronism issues between the sensor WICOs. All of the targets reported by the ACC radar were in coordinates relative to the centre of the radar. These coordinates were then converted to the full GPS coordinate system and referenced to the tracked host vehicle position. This data was then available for fusing with other on-board sensor data using the GPS coordinate system, or for passing to the infrastructure for use in traffic management.

Experimental Results. To validate the WICOs in a "real-world" environment for this application, it was decided that a test bench environment (Figure 13) would be used to play back a variety of recorded scenarios using data from the Radar, GPS, and Vehicle Dynamics devices, to ensure consistency. Three runs of the same data were undertaken, allowing the relevant metrics (message latency and lost messages) to be evaluated.
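The GPS/vehicle-dynamics fusion described for the Radar WICO can be illustrated with a minimal one-dimensional Kalman filter. The paper does not publish the actual filter, so everything below (state, noise values, rates) is an illustrative assumption: fast vehicle-dynamics updates drive the prediction step, and slower GPS fixes correct the estimate, which is how the filter achieves a higher update rate than GPS alone.

```python
# Minimal 1-D Kalman filter sketch (illustrative, not the paper's filter):
# position x with variance p, predicted from vehicle speed, corrected by GPS.
def kf_predict(x, p, v, dt, q):
    # propagate position using measured speed; q is process noise variance
    return x + v * dt, p + q

def kf_update(x, p, z, r):
    # correct with a GPS position fix z; r is measurement noise variance
    k = p / (p + r)                     # Kalman gain
    return x + k * (z - x), (1 - k) * p

x, p = 0.0, 1.0
# ten 100 ms vehicle-dynamics updates at 20 m/s, then one GPS fix
for _ in range(10):
    x, p = kf_predict(x, p, v=20.0, dt=0.1, q=0.01)
x, p = kf_update(x, p, z=19.5, r=0.5)
```

Between GPS fixes the estimate still advances every 100 ms, which matches the stated goal of updating position at a higher rate than the GPS sensor provides.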
(a) Message Latency. Publish/Subscribe. It was only possible to check the timing of the publish/subscribe mechanism using a timestamp recorded in a log file. This was only set up to record with one-second precision, and the GPS messages were in any case only updated every second. There was very little evidence of latency, except in the third run, where there was evidence that some messages were delayed by one second.

Request/Response. The request/response time was measured internally in microseconds and reported in the log file. The results are shown in Table 6.

(b) Lost WICO to WICO Messages. The results shown in Table 7 indicate a small message loss (around the 1% level for all runs) between the Vehicle Dynamics WICO and the Radar WICO, whereas there was a much higher message loss between the GPS WICO and the Radar WICO, as high as 28.1% during Run 2. It is supposed that the missed Vehicle Dynamics messages could have been caused by the Radar WICO simultaneously receiving a successful GPS message. However, comparison of log files proved inconclusive, as all records of a missing Vehicle Dynamics message in the Radar logs coincided with a missing GPS message, which suggests a temporary total loss of communications between all WICOs. The only exception to this could be found in 4 records across all the Radar logs, which had a missing Vehicle Dynamics message followed one second later by a successful GPS message, but these occurrences were not cyclical in the log files.
For the missing GPS messages, the causes of the higher loss rate were again not clear. A small number of messages in the GPS log files had a fault code indicating that ZigBee communications were not allowed because the system had not acknowledged that the previous message had been sent successfully. Further investigation of these results is required to reduce message loss between the Radar and GPS WICOs.

Supra-Vehicle Level Application. An application was developed to demonstrate the benefits of the middleware in giving priority to emergency vehicles. For that purpose, an emergency vehicle (ambulance, fire engine, police car, etc.) would be equipped with a MicaZ WICO which would broadcast a beacon message when on an emergency mission (Figure 14). For example, in a busy intersection controlled by traffic lights, emergency vehicles are detected and given priority by regulating the state of the traffic lights.

Implementation. The implementation consists of two elements. The first is a MicaZ WICO. The second is a CITY traffic controller, manufactured by ETRA I+D, Spain. The CITY traffic controller is a well-proven controller that implements advanced capabilities for traffic management and control. A Crossbow commercial data acquisition board, MDA 300 (Figure 15), is used as an interface between the MicaZ WICO and the CITY traffic controller, with a small electrical signal adaptation stage.
In the demonstration, a MicaZ WICO was connected to a CITY traffic controller (Figure 16) and acted directly by providing information to the regulator about an emergency situation. Another MicaZ WICO was placed in the infrastructure to relay this message to the traffic light regulator. It was placed far enough away to give the traffic regulator time to change its status, taking into account the time lost in the communication mechanisms (publish/subscribe) and the other time periods the traffic regulator needs in order to guarantee safety first. The traffic regulator was programmed to respond to the trigger signal provided by the mote by activating an emergency control sequence. The demonstration was successfully carried out in a real road environment in Valencia, Spain.

Experimental Results. Several sets of experiments were carried out with the EMMA middleware to evaluate the possible use of the MicaZ WICO in the supra-vehicle level application scenario. Two MicaZ WICOs (running the EMMA middleware) were used for data-centric (request/response) and message-centric (send/receive) communication, both in an urban environment and in a mobile environment.
(a) Send/Receive Communication. Urban Environment Experiment. This experiment was carried out on Claremont Road, a busy road near Newcastle University. In each scenario, 100 packets were sent, one every 500 ms or 1000 ms, and received by another WICO connected to a laptop via an MIB520 programming board. Both WICOs were placed 1 m above the ground. The MicaZ WICO power level was set to the default (NanoQplus power level 31). Each scenario was repeated three times, and calculations were performed offline to determine how many messages were lost at each distance; average values are reported in Figure 17.

Mobile Environment Experiment. This experiment was carried out on Claremont Road for speeds up to 40 mph and on a motorway near Newcastle Airport for higher speeds. The first MicaZ WICO was placed on a roadside stand 1 m from the ground, and the second was placed in the middle of the dashboard of a vehicle and connected to a laptop via an MIB520 programming board. The MicaZ WICO at the roadside sent messages periodically (every 500 ms or 1000 ms), which were received by the MicaZ WICO in the vehicle. Each scenario was repeated three times, and calculations were performed offline to determine how many messages were lost in each run; average values are reported in Figure 18.
In the mobile environment experiment, the number of received packets decreased as the speed increased, because the WICO is in range for a shorter period of time; that is, the communication time window shrinks as vehicle speed increases. At 70 mph, the WICO at the roadside received 5 and 11 packets for sending intervals of 1000 ms and 500 ms, respectively. There were no packets lost between the first and the last packet received. This experiment demonstrated that the MicaZ WICO can be used with the EMMA middleware communication methods between a fixed infrastructure WICO and a fast-moving vehicle-based WICO application. This is an important finding, which shows that the MicaZ WICOs do not suffer from any Doppler effects at normal motorway (70 mph) speeds.

(b) Request/Response Communication. Urban Environment Experiment. This experiment was carried out on Claremont Road, a busy road near Newcastle University. Two WICOs were used for request/response communication, with a request message transmitted every 100 ms, 250 ms, and 500 ms for WICO-WICO separations from 10 m to 65 m. In each scenario, response packets were received and recorded. Both WICOs were placed 1 m above the ground. The MicaZ WICO power level was set to the default (NanoQplus power level 31). Each scenario was repeated three times, and calculations were performed offline to determine how many messages were lost at each distance; average values are reported in Figure 19.
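The communication time window implied by the mobile results above can be estimated roughly. This is a back-of-envelope sketch under our own assumption that the received packets were contiguous (which the "no packets lost between the first and the last" observation supports), so packets received multiplied by the sending interval approximates the time in range, and multiplying by vehicle speed gives the road distance covered while in range.

```python
# Rough estimate of the time window and in-range distance at 70 mph.
MPH_TO_MS = 0.44704  # metres per second in one mile per hour

def window_s(packets_received, interval_s):
    # approximate time in radio range
    return packets_received * interval_s

def distance_in_range_m(packets_received, interval_s, speed_mph):
    # road distance covered while in range
    return window_s(packets_received, interval_s) * speed_mph * MPH_TO_MS

w = window_s(11, 0.5)                 # ~5.5 s at the 500 ms interval
d = distance_in_range_m(11, 0.5, 70)  # ~172 m of road covered at 70 mph
```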
Mobile Environment Experiment. This experiment was carried out on Claremont Road for speeds up to 40 mph and on a motorway near Newcastle Airport for higher speeds. The first MicaZ WICO was placed on a roadside stand 1 m from the ground, and the second was placed in the middle of the dashboard of a vehicle and connected to a laptop via an MIB520 programming board. The MicaZ WICO at the roadside sent messages periodically, which were received by the MicaZ WICO in the vehicle. Due to limited access to public roads and for safety reasons, the experiment was conducted only at a packet transmission interval of 100 ms. The experiment was repeated three times, and calculations were performed offline to determine how many messages were received at each distance; average values are reported in Figure 20.

The urban environment experiment shows that packets can be received without any packet loss up to a distance of 45 m; the percentage of packets lost increases above 45 m in both cases. In the mobile environment experiment, the number of received packets again decreased as the speed increased, because the WICO is in range for a shorter period of time, confirming that the communication time window decreases with vehicle speed and that the MicaZ WICO can be used with the EMMA middleware communication methods between a fixed infrastructure WICO and a fast-moving vehicle-based WICO.

Inter-Hierarchical Level Application.
One of the main objectives of the project was to achieve a middleware able to abstract complex subsystems, formed by different kinds of WICOs, into simpler elements (composed WICOs) that behave in the upper-level system as a single unit. In this way, complex applications can be built with a hierarchical shape, each group of WICOs working together on the same functionality appearing as a single element providing certain types of data to the rest of the system. In addition, the possibility of forming ad-hoc WICOs (i.e., discovering previously unknown elements in the system) and of propagating published data through the different abstraction layers, allowing its transformation and combination as it crosses certain points of the hierarchies, greatly enhances the possibilities of applications built on the EMMA middleware.

An inter-hierarchical application, integrating different hierarchical levels developed in the project (the car level and the supra-vehicle level), demonstrated how heterogeneity issues can be solved by developing middleware such as EMMA. To demonstrate the inter-hierarchical collaboration of the WICOs developed in the project, the application consisted of transforming the information provided by the vehicles at both the automotive and vehicle subsystem levels into specific traffic control actions at the infrastructure (i.e., supra-vehicle) level.
The inter-hierarchical demonstration made use of the WICOs at the vehicle level and the supra-vehicle level. This demonstration aimed to provide advance warning to a following vehicle that there is an obstacle ahead. As can be seen in Figure 21, the inter-hierarchical demonstrator made use of all communication mechanisms in the EM2P middleware and exercised most of its functionality. In this demonstrator, the inter-hierarchical collaboration of the WICOs developed in the project consisted of transforming the information provided by the vehicles at the vehicle level (GPS, Vehicle Dynamics, and Radar sensors based on the Xilinx ML403 platform with the TRW Conekt daughter board) into specific traffic control actions at the infrastructure (MicaZ) level. At the infrastructure level, two MicaZ WICOs were used. The first MicaZ WICO was used as a beacon WICO to relay any message received from the ad-hoc vehicle subsystem WICO to the second MicaZ WICO, which was connected to a portable VMS panel to display the information sent by the vehicle-level WICO. This application was successfully demonstrated at the EMMA project final review in a real road environment in London.
Conclusions. It is clear that the next generation of vehicles will be required to have increased safety, lower emissions, and more entertainment, with higher performance than those of today. Innovations in wireless sensor devices will enable novel automotive applications, which will become very common in future transportation applications. Challenges such as integrating heterogeneous wireless devices for specific transportation applications can be met by developing middleware technologies such as EMMA. This paper has presented the EMMA project, which was undertaken to investigate the suitability of using heterogeneous wireless sensors in transportation system applications. The validation of the prototype applications shows that wireless sensor networking technologies can be used at the engine level, the vehicle level, and the supra-vehicle level. The ability to communicate between vehicle and roadside illustrates that wireless sensor networks will enable efficient and discrete communications between vehicle and roadside.

Figure 5: Xilinx processor board with TRW Conekt interface board mounted on top.
Figure 12: Application architecture on Xilinx board.
Figure 13: Three WICOs as tested in lab and in vehicle demonstration.
Figure 14: Giving priorities for emergency vehicle.
Figure 21: Deployment of WICOs in the inter-hierarchical demonstrator.
Table 1: Summary of test results for laboratory environment. *Excluding lost packets from Oil WICO.
Table 2: Summary of test results for engine environment. *Excluding lost packets from Oil WICO.
Table 3: Summary of test results for environmental chamber at 1000 rpm. *Excluding lost packets from Oil WICO.
Table 4: Summary of test results for environmental chamber at 3000 rpm. *Excluding lost packets from Oil WICO.
Table 5: Summary of test results for temperature ramp within the environmental chamber. *Excluding lost packets from Oil WICO.
Table 6: Summary of latency results for vehicle-level request/response.
Table 7: Summary of lost message results for car-level communications.
Field margin floral enhancements increase pollinator diversity at the field edge but show no consistent spillover into the crop field: a meta‐analysis Conventional intensification of agriculture has reduced the availability of resources for pollinators, reducing their diversity and affecting plant pollination, both in natural habitats and croplands. Field margin floral enhancements such as flower strips or restored field margins could counteract these negative effects. The approaches to assess the success of these management measures generally evaluate separately the pollinator response at the edge and within the crop, as proxies for pollinator conservation and pollination services, respectively. We performed a meta‐analysis to understand the influence of field margin floral enhancements on the abundance and richness of pollinators at the edge and within the field, and on crop yield. We estimated 137 effect sizes from 40 studies, all from the northern hemisphere. Overall, the field margin floral enhancements increased the abundance and richness of pollinators at the field edge but had no consistent effect in the interior of the crop fields. Few studies evaluated crop yield, and in these studies no effects were observed. These results suggest that field margin floral enhancements can constitute a positive conservation action for pollinators but not necessarily associated with pollination ecosystem service. Introduction A large part of the area originally occupied by many temperate and tropical natural terrestrial ecosystems has become agricultural land (Ramankutty et al., 2008) and the expansion and intensification of agriculture have greatly reduced local biodiversity (Foley et al., 2005). This reduction of biodiversity directly affects a plethora of different ecosystem processes, both in the remaining natural habitats and in the crops itself (Tilman et al., 2001;Oliver et al., 2015). 
For example, agricultural intensification has reduced the availability of resources for pollinators (Tscharntke et al., 2005), which has led to a decrease in the abundance and diversity of pollinators in agroecosystems . This lack of pollinators in agricultural fields can result in poor pollination, as well as in a reduction in crop yields (Garibaldi et al., 2009). To persist in agricultural habitats, wild pollinators must be able to find suitable nesting and foraging resources (Kremen et al., 2004;M'Gonigle et al., 2015). Because of this, the re-diversification of agricultural areas has been proposed as a mean to strengthen pollination services provided by wild pollinators (Tscharntke et al., 2005;Kremen et al., 2007;Winfree, 2010;Garibaldi et al., 2014). Diversification of agricultural landscapes can take place at many scales, including within crops (e.g., polyculture), field margin floral enhancements (e.g., hedges and wildflower plantations) or landscape features (e.g., increasing natural coverage percentage) (Kremen & Miles, 2012;Duru et al., 2015). The use of field margin floral enhancements is one of the most extended practices to buffer the impact of intensive agriculture and conserve wild pollinators within their natural habitats (Sardiñas & Kremen, 2015) For example, best management practices recommended to introduce floral margins as additional sources for foraging and refuge for pollinators within monoculture landscapes (Pywell et al., 2011a). In this sense, it has been found that the restoration of hedges increases the richness of pollinators in the edges of the fields (Hannon & Sisk, 2009;Carvell et al., 2011). Floral field margins are the most common remnants of semi natural vegetation in crops (Marshall & Moonen, 2002;Boutin et al., 2002;Dover, 2019). These are linear features of trees, shrubs and grasses that surround the crop. 
They can be remnants of existing vegetation from cleared lands, result from natural dispersal of plants or can be established through direct sowing (Long & Anderson, 2010). Moreover, a particular action that is gaining traction for habitat restoration in pollinator dependent crops is the use of flower strips, which are typically planted in the marginal areas adjacent to the crop (Wratten et al., 2012;Morandin & Kremen, 2013b). Among its benefits are pest control, enhancement of the soil fauna, reduced soil erosion, sediment retention, increased biodiversity and air and water quality (Kort et al., 1998;Kleijn et al., 2006;Smith et al., 2008;Whitehouse et al., 2018). In addition, the value of this vegetation has been widely recognised as a habitat for native species of plants, butterflies and birds within agricultural landscapes (Burel, 1996;Maudsley, 2000;Dover et al., 2000;Hinsley & Bellamy, 2000). The effects of field margin floral enhancements can be evaluated for different target benefits. For example, several of these assessments focus on pollinator assemblages in the crop edge (Pywell et al., 2006;Haenke et al., 2009;Lye et al., 2009;Potts et al., 2009;Morandin & Kremen, 2013a;Morandin et al., 2016;Campbell et al., 2017;Wood et al., 2018), while others study pollinator assemblages within fields (Morandin & Kremen, 2013b;Barbir et al., 2015;Feltham et al., 2015;Sardiñas & Kremen, 2015;Morandin et al., 2016;Rundlöf et al., 2018). When studying the effects of field margin floral enhancements on pollinating insects at the edge of the field, the information obtained from this approach is mainly related to the conservation of these organisms in agroecosystems, and, on the other hand, when assessing the diversity of pollinators within of crops, the focus is in enhancing the ecosystem services provided (Kremen et al., 2019). Moreover, the characteristics of the edges also vary depending on the management, for example, edges can be restored or unrestored (i.e. 
sown and pre-existing plants, respectively) (Feber et al., 1996;Blake et al., 2011;Klein et al., 2012;Sardiñas & Kremen, 2015), and also vary in the type of vegetation, which can range from arboreal vegetation to herbaceous type (Scheper et al., 2015;Williams et al., 2015;Caudill et al., 2017;Garratt et al., 2017). Such differences could affect the magnitudes of the effects of field margin floral enhancements, both on pollinators and crop yield (Kremen et al., 2019). Hence, it is necessary to evaluate the studies developed so far to determine whether there are differences in the effect of these management measures due to different spatial and ecological approaches, so in this study, we seek to answer through a meta-analysis: What is the effect of field margin floral enhancements on the diversity of pollinators? Are pollinator responses different at the edge and within the field? Do these effects vary depending on the pollinator taxonomic group? And, do field margin floral enhancements affect crop yield? Literature search and study selection. A search of the published literature on the effect of field margin floral enhancements on pollinators was conducted using the Web of Science core collection database available from 1975 to May 2019 for the Electronic Library of Scientific Information program of the National Agency for Research and Development of Chile, which has access to a collection of almost 6000 scientific and technological journals in the English language. We use the combinations pollinat*ANDhedgerowANDcrop, pollinat*ANDfield marginANDcrop, pollinat*ANDhedgerowANDagro*, pollinat*ANDfield marginANDagro*, pollinat*ANDedgeANDcrop, pollinat*ANDedgeANDagro*, pollinat*ANDflower stripANDcrop, pollinat*ANDflower stripANDagro*. From these search criteria, a total of 447 studies were obtained. Additionally, a search was made in the database of Scheper et al. 
(2013) (71 studies) and Dover (2019) (204 studies), which reviewed studies that evaluate pollinator responses to changes in crop edge vegetation, both reviews did not assess the effect of field margin floral enhancements, but included these studies to evaluate other objectives, such as the influence of agri-environmental schemes. The inclusion of studies in our meta-analysis was based on the following criteria: 1. A field study that evaluates the effect of field margin floral enhancements on the pollinator diversity on the edge or within the crop. 2. The response variables consider the abundance, richness, visitation rate of pollinators, and/or crop yield. 3. Mean, standard deviation (or standard error) and sample size are reported. For those studies that reported their results only in graphs, data were obtained using DATATHIEF II software (B. Tummers 2006 <https://datathief.org/>). 4. Includes a comparison between an experimental and a control group. When comparisons were made between more than two treatments, the extreme groups were used for comparison. For example, in studies comparing the diversity of pollinators at different distances from the crop edge, the experimental treatment was the site closest to the edge and the control was the site further away from the edge (three studies) and generally there are no significant differences between the survey distances within the field. In the case that the study was carried out over several years, only the data from the last year were used (seven studies). Treatments compare always sites with field margin floral enhancement (experimental group) and sites without field margin floral enhancement (control group) (Fig. 1). The control treatments can include fields with edges managed with monofloral strips, edges without hedges or floral strips, and in the case of edge-interior contrast, the interior of the crop with a conventional management. 
Each study was categorised based on: (i) contrast type: edge-edge (EE), when crop edges of different fields were compared; edge-interior (EI), when the edge of the crop was compared with the inside of the same field; and interior-interior (II), when sites within crops were compared between different fields (Fig. 1 and Table 1). (ii) Edge management: restored (e.g., fields with edges with a mix of sown floral strips or hedgerows restored with native or exotic vegetation) and unrestored (e.g., forest edges). (iii) Edge type: edges dominated by herbaceous, arboreal or shrubby vegetation. (iv) Pollinator taxa evaluated: butterflies, bumblebees, honeybees, hoverflies, wild bees (i.e. studies including all Apoidea) or other wild pollinators (i.e. studies including all pollinator taxa). The response variables were abundance, species richness and visitation rate for each pollinator taxon, and crop yield (Table 1). Effect sizes. Mean, standard deviation and sample size for the experimental and control groups of each study were recorded. For each study, the Hedges' d index was calculated as a measure of the effect size. Hedges' d is an estimate of the standardised mean difference between the control and experimental groups that is not biased by small sample sizes or unequal sampling variances (Hedges & Olkin, 1985). Hedges' d is a unitless index that captures both the magnitude of the effect and its direction. The largest effect sizes are those that show large differences between the control and experimental groups. For example, positive d values indicate a tendency for the response variables (e.g. abundance of visiting pollinators) to increase at edges with field margin floral enhancements (Borenstein et al., 2009).
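The Hedges' d computation described above follows a standard formula: Cohen's d (the mean difference divided by the pooled standard deviation) multiplied by the small-sample correction factor J. The sketch below is a minimal Python illustration of that formula, not the authors' actual workflow (their analyses were run in R with metafor), and the example numbers are hypothetical.

```python
import math

def hedges_d(mean_exp, sd_exp, n_exp, mean_ctl, sd_ctl, n_ctl):
    """Standardised mean difference with Hedges' small-sample correction."""
    df = n_exp + n_ctl - 2
    # Pooled standard deviation across the experimental and control groups
    s_pooled = math.sqrt(((n_exp - 1) * sd_exp ** 2 +
                          (n_ctl - 1) * sd_ctl ** 2) / df)
    # Cohen's d, then Hedges' correction factor J for small samples
    d = (mean_exp - mean_ctl) / s_pooled
    j = 1 - 3 / (4 * df - 1)
    return j * d

# Hypothetical pollinator abundance: enhanced edge vs. control edge
print(round(hedges_d(12.4, 3.1, 10, 8.9, 2.8, 10), 3))
```

A positive value indicates higher abundance at the enhanced edge; the factor J shrinks the raw standardised difference slightly when sample sizes are small.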
Statistical analyses. For each response variable, a mixed-effects model was fitted using edge management, edge type and pollinator guild as moderators of the effect, except for crop yield, for which a random-effects model was used because the low number of studies for this variable (all with restored edges and herbaceous plants) precluded moderators. Analyses were run independently for each type of contrast (i.e. edge-edge, edge-interior and interior-interior), and the effect size was estimated for the different levels within each moderator. The models were fitted using restricted maximum likelihood estimation (Koricheva et al., 2013). Omnibus (QM) tests of individual moderators were performed to determine the heterogeneity of the effect sizes between the levels of each moderator (e.g. edge management levels). Furthermore, to explore the possibility of publication bias, funnel plots (scatter plots of effect sizes against a measure of their variance) were constructed to determine whether the reported studies were unbalanced (Koricheva et al., 2013). A publication bias towards significant results would create an asymmetric funnel, which generally lacks small studies with non-significant effects. In addition, a rank correlation test for funnel plot asymmetry was applied to examine whether the observed results and corresponding sampling variances were correlated (Begg & Mazumdar, 1994). A high correlation would indicate that the funnel plot is asymmetric, which would suggest a publication bias. All analyses and graphs were performed with the metafor package (Viechtbauer, 2010) in R version 3.4.4 (R Development Core Team, 2018). Results. A total of 40 articles fulfilled our inclusion criteria (Table 1). From those, 137 effect sizes were obtained, distributed as 71 for the EE, 39 for the EI and 27 for the II contrasts.
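The random-effects pooling underlying the models described above can be sketched as follows. This is a minimal illustration of one common estimator (DerSimonian-Laird) rather than the restricted maximum likelihood fit that metafor actually performs in R, and the effect sizes and variances are hypothetical.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q measures heterogeneity between the study effect sizes
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    # Re-weight each study with tau^2 added to its sampling variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical Hedges' d values and sampling variances for one contrast
d, lo, hi = random_effects_pool([0.5, 0.7, 0.3], [0.04, 0.04, 0.04])
```

A pooled confidence interval that excludes zero would correspond to a significant overall effect of the kind reported for the EE contrast.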
All studies were conducted in the northern hemisphere, with the USA and England (12 studies each) being the main countries that have evaluated the questions associated with the conditions proposed in this meta-analysis. These studies were carried out between 1996 and 2019, most of them in the last decade, and were performed in 24 types of crops, most frequently in wheat (Triticum aestivum L.) and flowering crops such as blueberries (Vaccinium angustifolium Ait.), oilseed rape (Brassica napus L.) and tomatoes (Solanum lycopersicum L.) (Table 1). Only in the EE contrast was there a clear overall positive estimated effect size of field margin floral enhancements for both abundance and richness of pollinators (Fig. 2). In this contrast, for both abundance and richness, the heterogeneity test indicates differences between the levels of each proposed moderator (Table 2). The abundance of pollinators is the response variable with the highest number of cases in this entire review (n = 40). Restored edges and edges with herbaceous plants showed the greatest abundance and richness of pollinators at the edge, mainly for hoverflies (abundance and richness) and wild bees (richness). The estimated effect size for both the EI and II contrasts is not significant for the set of evaluated moderators (Fig. 2). However, it is worth noting that in the EI contrast we observe consistently positive effect sizes for pollinator abundance where the field margin is restored with herbaceous plants, and for pollinator richness for both restored and unrestored sites, for arboreal and herbaceous edges, and for wild bees (Fig. 2). The response in pollinator visitation rate did not have an adequate number of cases to perform an analysis for each moderator in each contrast, so only the type of contrast was used as a moderator in the estimation of the effect size.
None of the contrasts presented significant effect sizes (EE: d = −0.58; 95% CI = −2.19, 1.02; P = 0.47; n = 4; EI: d = 0.77; 95% CI = −0.37, 1.92; P = 0.18; n = 6; and II: d = 0.43; 95% CI = −0.65, 1.51; P = 0.43; n = 8). Crop yield had only eight cases, all with restored edge management and herbaceous plants, and did not show a significant effect size (d = 0.46; 95% CI = −0.06, 0.99; P = 0.08). Publication bias inferred through funnel plots shows asymmetry in effect sizes only for abundance and richness in the EE contrast (Fig. 3a,d). This coincides with the significant associations in the rank correlation test. This indicates a bias towards publications with positive effect sizes and larger standard errors (i.e. low sample sizes) in this contrast; however, the strong positive response of visiting pollinators at the crop edge could also be causing the asymmetry in the analysis. Discussion. Studies on the effect of field margin floral enhancements on pollinators have focused primarily on assessing the diversity of pollinators at the edge, and fewer efforts have been made to understand how these management tools affect pollinators within the field and, ultimately, crop yield. Studies with the edge-edge contrast sought mainly to determine the ability of edge vegetation to provide resources for visiting pollinators; in bees, for example, these translate into food and nesting sites (Kremen et al., 2004). This evaluation does not allow identification of the possible spillover of insects from the edge towards the interior of the field. It only allows one to determine whether this vegetation works as a conservation tool through the attraction and retention of potential pollinators at the edges of the field (Kremen et al., 2019).
In this review, it was observed that the edge management tools that caused an increase in the abundance and richness of pollinators correspond to management with restored edges and herbaceous plants, particularly in the case of hoverflies and wild bees. This agrees with studies that have also shown that peripheral areas around crops containing varied species of wild flowers have positive effects on the abundance and diversity of many insect pollinators, such as honeybees, bumblebees, butterflies, hoverflies and other dipteran insects (Lagerlöf et al., 1992; Carreck & Williams, 1997; Cheesman, 1998; Bäckman & Tiainen, 2002; Croxton et al., 2005), mainly associated with herbaceous plants (Prys-Jones & Corbet, 1991; Fussell & Corbet, 1992; Mader et al., 2011) and restored edges (Morandin et al., 2016; Kremen et al., 2019). These edge management techniques are associated with an increase in floral diversity, generating an increase in the supply of resources for pollinators and potentially allowing an increase in pollinators in the area (Carvell et al., 2004; Pywell et al., 2005; Greenleaf & Kremen, 2006; Winfree et al., 2008). Two hypotheses have been proposed regarding the effect of field margin floral enhancements (e.g., hedgerows) on the pollinator communities visiting the crops (Kremen et al., 2019). On the one hand, we can expect an 'exporter' effect (i.e., spillover) of pollinators from the edge into the crop (Morandin & Kremen, 2013b; M'Gonigle et al., 2015), and on the other hand, a pollinator 'concentrator' effect at the edge (i.e., pollinators are concentrated at the crop edges with greater foraging resources) (Kleijn et al., 2018; Kremen et al., 2019). These contrasting hypotheses may explain the positive and negative effects documented in the analysis of pollinators visiting the crop.
In the case of studies with the edge-interior contrast, where the comparison was made between the edge and the interior of the same field (i.e., the interior as the control), there is a positive effect for several levels of the moderators, mainly for pollinator richness (i.e., greater richness at the edge). This suggests a possible pollinator-concentrating effect in these studies, i.e., an absence of spillover, so that the field margin floral enhancements would work only as a pollinator conservation tool at the field edge (Kremen et al., 2019). Although this evaluation could indicate a possible 'concentrator' effect, it is necessary to directly establish the absence of spillover (e.g. with insect marking), since it is possible that a change in abundance and richness within the field could be either independent of or parallel to the changes at the edge after the field margin floral enhancement. The interior-interior design is the one that would specifically determine the effect of the edge enhancement on the diversity of pollinators within the field (Morandin et al., 2016). For this contrast, the results indicate heterogeneity in responses when comparing sites with and without field margin floral enhancements, although the number of cases for this contrast is low. Overall, our results suggest a possible neutral effect of field margin floral enhancements on the pollination service in the crops included in this review, consistent with the idea that the edges function as pollinator attractors and concentrators but without demonstrated spillover towards the crop and the consequent delivery of ecosystem services (Sardiñas & Kremen, 2015; Sardiñas et al., 2016; Dainese et al., 2017).
However, it is important to consider that the effect of field margin floral enhancements on pollinators within the crop may require time, as pollinator populations fluctuate from year to year and it could take time for them to colonise new habitats and build up larger population sizes (Williams et al., 2001). For example, Blaauw and Isaacs (2014) observed that the visitation rate of wild bees and hoverflies began to increase within the crop only after the third year of the field margin floral enhancement, so these populations would depend on the abundance of the vegetation in previous years. The reviewed papers generally tested edges less than 3 years old, opening the door to finding more consistent effects when longer time periods are analysed. On the other hand, several edge-interior and interior-interior studies used different sampling distances towards the interior of the field, and in three of them we applied the maximum survey distance selection (Table 1); however, in these studies there were no significant differences in pollinator visitation between sampling sites in the interior of the crop (Morandin & Kremen, 2013a,b; Sardiñas et al., 2016; Caudill et al., 2017), so the response is constant within the crop. Therefore, we consider that, despite this variability of sampling distances within the fields, the results allow us to conclude that there is no consistent pollinator spillover into crops. The results for crop yield show no effect in the eight cases included, which is consistent with some of the studies analysed in the review by Kremen et al. (2019) and with the results of Albrecht et al. (2020), but opposed to studies conducted in oilseed rape where pollinator diversity and crop yield increase, the latter due to the interaction between semi-natural landscapes and the effect of the edge vegetation (Haenke et al., 2014; Sutter et al., 2018).
It is possible that a set of other factors, such as the type of crop, the composition and heterogeneity of the surrounding landscapes, and the size of the edges and their distance towards the interior of the fields, could be affecting the relationship between edge management, pollinators and yield. Albrecht et al. (2020) reported, in a synthesis, results similar to those of this study: they found that the pollination service increases in sites with enhanced field margins, but this effect occurs only adjacent to the edge and without an effect on crop yield. This could be due to different crop management practices (Bartomeus et al., 2015; Gagic et al., 2017) in addition to the landscape context (Dainese et al., 2017), which would generate variability in the responses of crop production. Further studies need to isolate these factors to understand the effects of edge management on crop yield (Mwangi et al., 2012; Sardiñas & Kremen, 2015; Morandin et al., 2016). An important finding of our review is that most of the identified studies have been conducted almost exclusively in the northern hemisphere, with greater prominence of the USA and Europe. This result stresses the importance of expanding efforts to other regions where these biodiversity management tools could be implemented and evaluated, and where information on these forms of crop management is still lacking. The situation elsewhere is different from the European context, where the application of agri-environmental schemes has made greater progress in the use of these management tools. In summary, there is a set of gaps in the knowledge about the use of field margin floral enhancements as a biodiversity management tool in agroecosystems, with most studies evaluating the use of these tools with pollinator conservation purposes in mind, but very few studies addressing the pollination services delivered and yield.
Under the current global decline in pollinator populations, it is urgent to advance the understanding and identification of the factors that modulate the relationship between the management of crop edges and pollinators, in order to enhance their conservation and ecosystem services in productive environments.
INFLUENCE OF EXTENSION AGENTS' AND FARMERS' COMMUNICATION FACTORS ON THE EFFECTIVENESS OF POULTRY TECHNOLOGY MESSAGES
This study was conducted in Delta State to determine the influence of extension agents' and farmers' communication factors on the effectiveness of poultry production technology messages. One hundred and eighty (180) poultry farmers and forty-six (46) extension agents were selected for this study. The poultry production technology messages communicated to farmers included climate change adaptation methods, waste management, health management, predator control and improved breeds. The extension agents and poultry farmers were rated as being generally good in human relations, communication skills and role performance. The test of the hypothesis showed a positive significant influence of the communication skills of extension agents and poultry farmers on the level of adoption of the poultry technologies, and hence on the poultry production technology messages. It was recommended, among others, that extension agents should provide follow-up appointments for farmers, more extension agents should be trained and employed, farmers should be encouraged to allow their spouses to share information with other farmers, and the communication skills of the extension agents and farmers should be sustained. INTRODUCTION Communication planning is critical for media outreach, but its value reaches far beyond traditional external relations. For organizations like the Delta State Agricultural Development Programme (DTADP), taking time to formulate a consistent and effective poultry production technology message can be a valuable investment that enhances development, media advocacy and public relations, direct service and outreach efforts. Developing an effective poultry technology message can help an agricultural extension agency's team of staff, volunteers, donors and supporters articulate a coherent and consistent idea about why the organization works and why its mission matters (Ofuoku 2010).
According to Hunt (2006), a message is effective if it persuades a particular audience. An effective poultry technology message is one that prompts poultry farmers to act in a way that supports the goals of extension agencies and other stakeholders. On receipt of such a message, if the target farmers change behavior in the desired manner, the message is effective and its content will be adopted by them. Poultry production technology messages are those communicated to poultry farmers and other stakeholders through the planned use of communication strategies and processes, with the goal of achieving agricultural development. The poultry production technology messages communicated to farmers range from environmental issues, health and predator prevention to exotic breeds (Ofuoku 2010). The extension agents' and farmers' communication factors include human relations, communication behavior and skills, and role performance. These factors, according to Waisbord (2006), Agbamu (2006) and Olowu (1989), could either enhance or jeopardize the success of a development programme. It therefore means that they are salient to the effectiveness of messages. In spite of well designed and promoted programmes, the needs and aspirations of poultry farmers to improve their farming systems have stagnated (Adefuye and Adedoyin 1993). This suggests that the poultry production technology messages disseminated to them are not effective. It is suspected that the poultry production technology messages are affected by communication factors relating to the extension agents and the farmers. The results of this study will contribute as a guide for poultry production technology message design and dissemination by the DTADP and other extension agencies in the world, especially in developing nations.
Objectives of the Study: The major objective of the study is to ascertain the influence of extension agents' and farmers' communication factors on the effectiveness of poultry technology messages. The specific objectives are to: identify the poultry production technology messages communicated to farmers; evaluate the human relations, communication behavior and skills, and role performance of extension agents and farmers; determine the adoption levels of the production technologies borne by the messages; and ascertain the effectiveness of the messages and the influence of extension agents' and farmers' communication factors on that effectiveness. Hypothesis: The extension agents' and farmers' communication factors do not influence the effectiveness of poultry production technology messages. METHODOLOGY This study was carried out in Delta State, Nigeria. The state has a high concentration of poultry farmers, who are served by the Delta State Agricultural Development Programme (DTADP). The state is demarcated by the DTADP into three agricultural zones: Delta North, Delta Central and Delta South. Random sampling was used to select respondents from among the 1800 (Delta North = 725; Delta Central = 945; Delta South = 144) poultry farmers registered with the three DTADP zonal headquarters located in each of the three agricultural zones, on the basis of 10% from each zone. This resulted in 180 poultry farmers. Purposive sampling was used to select 46 extension agents covering the various extension blocks where these farmers were located. Purposive sampling was used in order to select extension agents who had been working with these farmers for a minimum of ten years, as these know the farmers better than those who have not spent up to ten years with them. Consequently, 226 respondents were arrived at and used for the study.
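The 10% per-zone quota described above reproduces the reported sample of 180 farmers when each zone's quota is truncated to a whole number of farmers. A quick sketch (zone totals taken from the text; the truncation rule is an assumption consistent with the reported total):

```python
# Registered poultry farmers per DTADP agricultural zone (from the text)
zones = {"Delta North": 725, "Delta Central": 945, "Delta South": 144}

# Take a 10% quota from each zone; integer division truncates to whole farmers
sample = {zone: n // 10 for zone, n in zones.items()}
print(sample, "total:", sum(sample.values()))
```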
The data were collected from the respondents using a questionnaire and a structured interview schedule. Both instruments were pre-tested for reliability with Cronbach's alpha coefficient. The results of the correlation between the first and second responses showed a high level of correlation for the structured interview schedule (r = 0.82) and the questionnaire (r = 0.85). The data collected were analyzed using descriptive statistics such as frequency counts, percentages, and means derived from a 5-point Likert scale, thus: very good = 5; good = 4; fair = 3; poor = 2; and very poor = 1. The hypothesis addressed objective v and was tested using the Pearson Product Moment Correlation (PPMC). Poultry production technology messages communicated to the respondents. The poultry production technology messages communicated to farmers included messages on climate change, which consisted of tree planting (97.2%), installation of fans (92.2%) and constant supply of fresh water (100.0%); under waste management, 92.2% of the farmers received messages on recycling of poultry droppings. On health management, 100% got messages on bird flu control and prevention. Under predator control, 78.3% heard messages on the use of sliced garlic. Under exotic breeds, 25% got messages on Abro, 6.7% on Arbor Acre, 28.3% on Hubbard strains, 23.9% on Harco, 18.9% on Isa Brown, 13.3% on Shaver Star Cross and 18.3% on Black Olympia. Tree planting messages were transmitted to the farmers so as to reduce the effect of heat. Heat can cause stress for the birds, and this affects their feed intake and laying capacity adversely. According to Izunobi (2002), a substantial increase in environmental temperature will reduce growth rate, egg production, egg size and shell quality.
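The test-retest correlations reported for the pre-test (r = 0.82 and r = 0.85) and the hypothesis test both rest on the Pearson product-moment coefficient. A minimal sketch of that formula, with hypothetical first- and second-round Likert scores (the study's actual data are not reproduced here):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical Likert scores from a first and a second administration
first = [4, 5, 3, 4, 2, 5, 3, 4]
second = [4, 4, 3, 5, 2, 5, 3, 4]
print(round(pearson_r(first, second), 2))
```

Values near 1 indicate that respondents answered consistently across the two administrations, which is the sense in which the instruments were judged reliable.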
Trees also reduce the effect of wind on pens, and the installation of fans reduces heat in the pens too. Excessive wind, known as wind hedge, poses an inescapable problem for many poultry farms from the southern coastline to the northern Sahara fringe (Izunobi 2002). Constant supply of water to chickens reduces heat stress. The water, especially when cool, reduces the body temperature to the normal level required by the chickens. Waste management messages were meant to proffer a solution to the problem of waste disposal. Waste, when recycled in the form of manure, is beneficial for growing crops and reduces the effects of effluents from the droppings on the climate. Bird flu prevention messages were meant to prevent and control its outbreak, especially as the disease is zoonotic. Garlic was recommended in messages to farmers to deter predators like snakes. Snakes, on entering pens, kill birds and consume eggs. This method of predator control has the advantage of being environmentally friendly. Messages on exotic (improved) breeds of chicken were sent to the farmers for their superior qualities of rapid growth and high productivity in terms of meat and eggs. The breeds mentioned are superior to the pure exotic breeds, as they are hybrids of the pure exotic breeds. Human relations, communication skills and role performance of extension agents as rated by poultry farmers. On human relations, the receivers (poultry farmers) rated the extension agents (source/sender) (Table 1) as being very good in the feeling of togetherness with farmers (mean = 3.97), manner of approach to influence acceptance of technology (mean = 3.82) and general truthfulness and sincerity (mean = 3.87). The feeling of togetherness with the farmers creates a feeling of oneness in them and makes them open to the extension agents; it creates confidence in the farmers so that they do not hide their problems and aspirations from the extension agents. This is because they then see themselves as a family.
The manner of approach used to influence farmers to accept a technology is very important. It is a function of how the extension agents market the technology to the farmers. The way the agent sends the message may either motivate farmers to adopt or discourage them from adoption. It is always better when messages are passed persuasively and with respect for the farmers. Truthfulness and sincerity on the part of the extension agents make farmers develop interest in them. This is congruent with Tladi (2004), who discovered that the above qualities were stressed by farmers as part of the criteria for evaluating the performance of extension agents. According to Agbamu (2006), the credibility of the communicator will determine the attitude of the people. To the poultry farmers, with respect to communication skills, the extension agents were very good in the display of empathy (mean = 3.76), encouragement of questions and enquiries (mean = 3.72), use of key communicators (mean = 3.77), credibility (mean = 3.66), personality (mean = 3.51), use of common language and expression (mean = 3.67), clarity and comprehensibility (mean = 3.65), display of adequate communication skill generally (mean = 3.72) and involvement of clientele (mean = 3.64). The farmers rated the extension agents as being good in the use of feedback (mean = 3.27), listening skill (mean = 3.23) and maintenance of continuous communication with the farmers (mean = 3.21). However, they were rated as being fair in the provision of follow-up appointments with farmers. In the use of entertainment, they were rated poorly (mean = 2.88).
Genuine display of empathy helps to win the confidence of farmers. Encouragement of questions and enquiries helps the extension agent to further clarify the areas not understood by farmers, thereby promoting message effectiveness. Agbamu (2006) opined that this also translates into understanding their educational level, norms and beliefs. The credibility of the extension agent as a communicator creates an avenue for him or her to gain the confidence and trust of the clientele (Williams et al. 1984). The extension agent needs to have a good personality in order to win the respect of the farmers. He should be seen as somebody who respects himself by behaving responsibly. Agbamu (2006) suggests that he should have great respect for himself. The use of feedback enables the sender (extension agent) to know if the message is understood the way he expects it to be understood. He is able to know, through feedback, if the intended meaning is given to the message. The language used by the extension agent as a communicator should be understood by the farmers (receivers). Williams et al. (1984), as cited by Agbamu (2006), stated that a good communicator should speak clearly and use terms and language the receivers will understand. He should view what he is doing or saying from the standpoint of his audience (Agbamu, 2006). Involvement of the clientele in the development of messages gives room for the message to be relevant to the clientele. The farmers know their problems and aspirations better than the extension agents. This is in accordance with the saying that he who wears the shoe knows where it pinches. Waisbord (2006) reported that interventions that came from outside the villages or communities were felt as not belonging to the citizens or members but to the government, and the members rejected the technologies involved. Involvement of the receivers during the communication process by the extension agents elicits and sustains the interest of the receivers (farmers).
Maintenance of continuous communication also sustains the interest of the receivers. The provision of follow-up appointments has the same effect as maintenance of continuous communication. A good listener wins the respect of his audience. They see him as being genuinely interested in their problems and aspirations. Agbamu (2006) argues that senders should have listening ability. The use of entertainment enhances and sustains the interest of the audience. This also promotes their appreciation of the message. Extension agents were rated on role performance as being very good (mean = 3.72) in the knowledge of technologies. This also contributes to their credibility. The credibility of the communicator depends on the extent to which he is perceived as a source of valid assertions, in terms of being knowledgeable about the subject matter he is presenting (Williams et al. 1984). Olowu (1989) also noted that an agricultural extension agent should be theoretically and technically competent and that inadequacies in these areas could jeopardize the success of a development programme. They were rated as good (mean = 3.21) in the encouragement of farmers to share information with others. This promotes the diffusion of technology messages and hence of technologies. It will enhance information sharing among poultry farmers. Poultry farmers rated the extension agents as being fair (mean = 3.18) in awareness of time limits. Awareness of time limits guards against untimely communication of innovations and, during meetings with farmers, guards against overburdening farmers with too much information, which causes waning attention.
The extension agents were rated poor in frequency of farm visits (mean = 2.91) and availability to farmers (mean = 2.93). Frequency of extension contact with farmers and availability to farmers are very important, as they promote better understanding of agricultural technology messages by farmers. This further enhances the adoption of agricultural technologies. Ofuoku et al. (2008) argue that the more extension agents visit farmers and educate them, the better the farmers understand and adopt technologies. Asiabaka (1996) reported that frequency of extension contact influences the adoption behavior of farmers. Meanwhile, adoption is an index of the effectiveness of technology messages. The extension agent's attention can be needed at any time of the day. This is why their availability to farmers is very important, especially when the farmers have pressing problems that they want the extension agent to help resolve. Senders' (extension agents') rating of poultry farmers' (receivers') human relations, communication skills and role performance factors. Table 2 indicates that the poultry farmers were very good at exhibiting truthfulness and sincerity (mean = 3.51) when dealing with other farmers and extension agents. They were seen as being fair in having the feeling of togetherness with extension agents and other poultry farmers (mean = 3.13).
They were considered very good in the use of clear and comprehensible language and expression (mean = 4.07), participation in questions and enquiries (mean = 4.06), sending of feedback (mean = 4.02), sharing of information with spouses and other farmers (mean = 4.01), comprehension of technology messages (mean = 3.85) and listening ability (mean = 3.62). They were, however, considered fair in the use of key communicators (mean = 3.85) and maintenance of continuous contact with extension agents (mean = 3.03). On role performance, the poultry farmers were considered to have put up a very good performance as members of poultry farmers' groups (mean = 3.68), but were rated to have performed poorly in participation in agricultural shows and field trips (mean = 2.84) and in encouragement of spouses to share information with other farmers (mean = 2.81). The attributes of truthfulness and sincerity promoted the extension agents' trust in the poultry farmers. This translated into integrity. This quality promotes a strong relationship between extension agents and farmers. Since they were considered fair in the feeling of togetherness with other farmers and extension agents, it means that they sometimes share their challenges with other farmers and extension agents, and likewise share solutions to challenges with them. On communication skills and behavior, the use of clear and comprehensible language and expression by the poultry farmers promoted the right interpretation of messages and feedback by the extension agents. Right interpretation of the meanings of messages is promoted by the use of simple language and expression. According to Isife and Ofuoku (2008), the use of simple language is one of the factors that establish comprehension. Here, the use of common words that have concrete meanings is very important.
Participation in questions and enquiries enhances the comprehension of messages by the farmers and reveals to the extension agents the parts of technology messages that the learners find difficult to understand. In all, unclear parts of the poultry technology messages are made clear to the farmers through their participation in questions and enquiries. This is in consonance with Hunt (2006), who stated that farmers gained understanding of messages communicated through their participation in questions and enquiries. Technology message communication was informed by a theory that became a science of producing effective messages (Quamagne, 1991). The science of producing effective messages is rooted in the active participation of the beneficiaries, in this case poultry farmers, in the process of communication. Sending of feedback by the poultry farmers enables the extension agents, or source, to know whether the farmers (receivers) interpret technology messages rightly or wrongly. Feedback from farmers enables extension agents to be aware of the interpretation given to their messages (Agbamu, 2006). Feedback promotes further understanding of the messages by the poultry farmers. Sharing of information with spouses and other poultry farmers encourages thrashing out of information. This informs farmer-to-farmer extension. This communication skill promotes exchange of ideas and knowledge among farmers. Exchange of information among farmers is instrumental in influencing the reasoning, feeling and action on the information under discussion (Isife and Ofuoku, 2008).
Comprehension of the poultry technology messages by farmers is a result of the use of simple and clear language and expression by the extension agents, and vice versa. The terms that are technical are explained in short and simple sentences, and common words that have concrete meaning were used by the extension agents. The poultry technology messages in print were sequentially written. This finding is congruent with Niehans (2006), who argues that simple and clear language influenced the comprehension of agricultural technology messages among small-holder semi-literate farmers in the Pacific countries. The listening ability of poultry farmers, rated as being very good, has contributed to their comprehension of the poultry technology messages. They are rated as being very good because the farmers' attention did not wane even as many of them contributed to the discussion of such messages by asking questions. This finding is congruent with Adedoyin (2004), who opined that waning attention is a barrier in communication with farmers. The poultry farmers were rated as being fair in the use of key communicators. This is attributed to the fact that they prefer contact with the extension agents directly, either in group or on a one-on-one basis. The use of key communicators is related to the use of contact farmers. The implication is that they like to avoid distortion of messages. Message distortion is one of the challenges with the use of key communicators. Another fact is that as progressive farmers are often used as key communicators, the technology messages may not get to other categories of farmers. Röling (1988) argued that packaged information sent by extension agents to one dynamic group, such as the contact farmers, may not find its way to other categories of farmers because of the heterophily gap arising from differential socio-economic status between contact and non-contact farmers. The poultry farmers' maintenance of continuous contact with extension agents is rated fair for the fact that there is an unfavorable extension agent-farmer ratio. Agbamu (2005) argued that the disproportionate extension agent to farm family ratio prevalent in developing countries has led to a situation in which many farmers do not benefit from the services of agricultural extension agents. Their performance as members of poultry farmers' groups is very good. This implies that, as members of a group, they collectively assist each other to meet the common goals for which they subscribed to such groups. According to Ogionwo and Eke (1999), people subscribe to groups because of personal goals which they cannot achieve solely as individuals, but which they can attain by joining a group. Such goals may include access to credit, extension service and information. The farmers were, however, poor at meeting attendance. The implication is that most of them do not attend extension agent-farmer meetings. Since the extension agent to farmer ratio is poor, the group extension method is mostly used, but the poultry farmers prefer face-to-face contact with extension agents. Another reason is that meetings may have been fixed at odd times. Agbamu (2006) and Isife and Ofuoku (2008) adjudged the individual or interpersonal (face-to-face) method as being the best. The farmers prefer this method because, according to Agbamu (2006) and Isife and Ofuoku (2008), it gives the extension agents and farmers the opportunity to obtain first-hand information.
They are also poor at attending agricultural shows and field trips. This is attributed to the fact that most of them consider the cost of transportation to such trips as being on the high side, and they do not get information about them early enough to enable them to prepare for such trips. They are also poor at encouraging their spouses to share information with other farmers. This is mostly so with male poultry farmers. Most of the poultry farmers are men and, for cultural reasons, the association of their wives with other male farmers is highly restricted. Ofuoku (2010) discovered that, culturally, men frown at frequent association between their spouses and male extension agents. Effectiveness of poultry technology messages as perceived by farmers All the poultry technology messages (Table 3) except installation of ceiling fans (mean = 2.36) were perceived as being very effective. There was none perceived as being not effective. This is as a result of the fact that the messages were simple to understand, trialable, observable, and compatible with the people's culture, and have relative advantage over the ones previously used. Another reason adduced to it is that they were sent in language that resonated with the target audience, who not only supported the extension goals but were motivated to act. Tree planting, installation of fans and constant supply of water were climate change adaptation and mitigation measures communicated to farmers for adaptation to and mitigation of excessively high temperatures resulting from climate change. Nwanjiuba et al. (2008) observed that temperature had a negative significant relationship with broiler production. This negative relationship is due to the fact that the mean annual temperature exceeded the optimum for broiler production. This is congruent with Teklewold et al. (2006) and Saiki (2009), who discovered that farmers planted trees around pens to mitigate and adapt to climate change.
Installation of fans was not perceived to be very effective as it was an added cost of production. Water is in abundance, as the water table in most parts of the study area is high and rainfall occurs almost throughout the year. Recycling of poultry droppings was found to be very effective as it massively solved the problem of accumulation of the droppings and air pollution. The gaseous emissions from poultry droppings also contribute to global warming. They were recycled by using them as manure for soil amendment and fertilization. Teklewold (2006) opined that farmers found poultry droppings useful to them as fertilizer for their supplementary crop farming activities and sell them to arable crop farmers. Bird flu (avian influenza) prevention measures were found effective as the outbreak has died down in the study area. According to the Centers for Disease Control (CDC) (2009), the incidence of avian influenza epizootic outbreaks has reduced. This is as a result of the preventive measures to which poultry farmers adhered. Garlic was also found effective by farmers because, when they applied it as contained in the message, snakes were no longer visiting their farms. This confirms the findings of Teklewold (2006), who observed that poultry farmers placed garlic around poultry pens to repel snakes and that it is efficacious. Technology messages on improved breeds of chicken were found to be effective as these hybrids were seen to be fast growing, good layers and resistant to diseases. Nwanjiuba et al. (2008) discovered that the hybrids adopted by farmers were found to be highly resistant to diseases and adverse climatic conditions.
Level of adoption of poultry production technologies Most (73.3%) of the farmers adopted above 8 technologies, 19.9% adopted between 5 and 8 technologies, while 7.8% adopted 1-5 technologies (Table 4). This shows that the adoption level was generally high, as 73.3% of the respondents fell under the category of high adoption level. They all adopted bird flu prevention and control measures. From observation, the extension agents applied Rogers' (1975, 1983) protection motivation theory (PMT) in delivering the messages on bird flu prevention. PMT contends that individuals must perceive something to be risky or harmful to be motivated to protect themselves (Houser et al., 2009). As PMT suggests (Rogers, 1983), perceived risk is a motivating factor, especially when delivered via a fearful message combined with message self-efficacy. The results of the adoption level conform to the perception of the farmers on the effectiveness of the poultry technology messages. Level of adoption is an index of the effectiveness of technology messages. Influence of extension agents' and farmers' communication skills on adoption of poultry production technology messages This was determined by testing the hypothesis H01: There is no significant relationship between poultry production technology adoption and the communication skills of extension agents and farmers.
The results of the hypothesis (Table 5) indicate that adoption of poultry technology messages significantly and positively correlated with the communication factors of extension agents (r = 0.877) and farmers (r = 0.797) at α = 0.05. The communication skills of extension agents and farmers have contributed to the effectiveness of the poultry technology messages, since the adoption level of the poultry technologies is an index of the effectiveness of poultry technology messages. This agrees with Isife and Ofuoku (2008), who state that the communication skills of the sender (extension agent) and the receiver (poultry farmers) affect the effectiveness of the message passed across, exchanged or traded. This significant correlation shows that enhanced communication skills of extension agents and farmers are accompanied by increased adoption. The implication is that a high level of communication skills could help to stimulate adoption of poultry technologies. This translates into increased yield and income for the farmers. This also indicates that, as extension agents' and poultry farmers' communication skills are enhanced, adoption of improved poultry production practices is promoted. CONCLUSION AND RECOMMENDATIONS It is obvious that the reception of poultry production technology messages depends, to a large extent, on the communication skills of the senders (extension agents) and receivers (poultry farmers). The extension agents were rated generally as being good at human relations, communication skills and role performance by the poultry farmers, while the extension agents also rated the poultry farmers generally as being good at human relations, communication skills and role performance.
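The reported relationship is a Pearson correlation between adoption and communication-skill scores. A minimal pure-Python sketch with synthetic paired data (the study's raw data, which yielded r = 0.877 and r = 0.797, are not reproduced here):

```python
# Sketch of the correlation test behind Table 5: Pearson's r between
# communication-skill scores and adoption counts. Data are synthetic.

import math

def pearson_r(x, y):
    """Pearson product-moment correlation for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

skills = [2.1, 2.8, 3.0, 3.6, 4.1, 4.5]   # synthetic communication-skill scores
adoption = [3, 5, 6, 8, 9, 11]            # synthetic counts of adopted technologies
r = pearson_r(skills, adoption)
print(round(r, 3))  # a strong positive correlation, as the study reports
```

In practice the significance at α = 0.05 would be checked against a t-distribution or with a statistics package; the sketch only shows how r itself is computed.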
The poultry production technology messages disseminated to the farmers were effective, and the level of adoption of these technologies confirmed that fact. The communication skills of both the extension agents and the farmers influenced the effectiveness of the poultry technology messages. The level of adoption of poultry technologies proved to be an index of the effectiveness of the poultry production technology messages. In view of the findings, the following recommendations are made: 1. Extension agents should endeavor to provide follow-up appointments with farmers always. 2. More qualified people should be trained, encouraged and employed as extension agents so as to solve the problems of unavailability to farmers and poor performance in farm visits by extension agents. 3. Extension agents should try as much as possible to be timely and to talk within a limited time range so that the technologies can solve the problems they are meant for and waning attention by farmers is avoided. 4. Farmers should be encouraged to participate in field trips. 5. Farmers should be encouraged to allow their spouses to share information with other farmers, as this will lead to information exchange that will be beneficial to them. 6. Farmers should be educated on the benefits of meeting attendance. 7. Farmers and extension agents should sustain their communication skills. Table 3: Effectiveness of poultry technology messages as perceived by farmers (legend: HE = highly effective, E = effective, NE = not effective).
Partial Internal Biliary Diversion: A Solution for Intractable Pruritus in Progressive Familial Intrahepatic Cholestasis Type 1 Biliary diversion offers a potential option for intractable pruritus in children with chronic cholestatic disorders. Progressive familial intrahepatic cholestasis (PFIC) is an inherited disorder of impaired bile acid transport and excretion, which presents with jaundice and pruritus in the first few months of life and progresses to cirrhosis by infancy or adolescence. We report a child with PFIC type 1 who underwent internal biliary diversion for intractable pruritus and was relieved of his symptoms. Progressive familial intrahepatic cholestasis (PFIC) type 1 is a subtype of a group of inherited progressive cholestatic disorders and is characterized by intrahepatic cholestasis, intense pruritus, normal or low gamma glutamyl transpeptidase (GGT), and characteristic "Byler bile" on electron microscopy (EM). Liver transplantation is recommended for PFIC; however, this may not be the solution in PFIC 1, where, apart from the liver, there is also involvement of the intestine and the pancreas. Biliary diversion offers significant relief in intractable pruritus not responding to conventional medications. [1,2] We report a child with PFIC type 1 with disturbing pruritus who underwent internal biliary diversion. CASE REPORT A 7-year-old male child presented with persistent jaundice, high colored urine, intractable pruritus, and growth retardation since the age of 6 months. He was the first born of a third-degree consanguineous marriage, with a birth weight of 3.5 kg. Jaundice was persistent from infancy; however, itching was the most distressing symptom and over the years it had become intractable, requiring a cocktail of medications. There was no history of ascites, gastrointestinal bleeds, irritability, fractures, night blindness, or lethargy. He was hospitalized once for epistaxis and treated for coagulopathy.
On examination, he was apathetic, stunted (height 90 cm, <5th centile), and undernourished (weight 13 kg, <5th centile) with a body mass index of 16. He had no dysmorphic features but was icteric with scratch marks on his face and ears. His hands and feet were enlarged without rachitic changes or xanthomata. The fingers and toes were broad and stubby with hyperpigmented, thick, rough, and lichenified skin [Figure 1a]. Liver and spleen were firm and enlarged. Cardiovascular and respiratory system examinations were normal. His complete blood count was normal. Total bilirubin was 16.8 mg/dl; direct, 12.8 mg/dl; serum alkaline phosphatase, 774 IU/l (100-644); GGT, 20 IU/l (0-26); alanine transaminase, 169 IU/l (0-45); aspartate transaminase, 61 IU/l (0-45); serum cholesterol was 100 mg/dl (70-122); alpha fetoprotein, 1.56 ng/ml; and serum bile acids were 120 µmol/L (0-10). Total protein and serum albumin were normal. Ultrasound showed hepatomegaly with normal echo texture. Hepatobiliary scan revealed decreased uptake and delayed excretion. Liver biopsy showed bland cholestasis and, on EM, granular bile (Byler's bile) was seen, suggesting PFIC type 1 [Figure 2]. The child was on regular fat-soluble vitamin supplements, medium chain triglycerides, and ursodeoxycholic acid (UDCA) at a dose of 20 mg/kg/day. Ondansetron and rifampicin were also prescribed for the pruritus. However, the response was not satisfactory, and hence partial internal biliary diversion through a cholecystojejunocolic anastomosis was done to relieve the pruritus by interrupting the enterohepatic circulation and decreasing the preload of the bile salts to the liver. A 15-cm loop of bowel was isolated from the mid jejunum, and this conduit was sutured in an isoperistaltic manner, superiorly to the gall bladder and inferiorly to the anterior aspect of the mid ascending colon. Full thickness of the gallbladder was anastomosed to the serosa of the conduit with a single layer of 4.0 vicryl sutures. A single layer of serosa-to-serosa anastomosis was performed with 4.0 vicryl sutures between the conduit and the colon. The distal end of the jejunum was tapered prior to anastomosing to the colon to prevent colonic contents from entering the conduit [Figure 3]. Postoperatively, there was an unbelievable cessation of the pruritus. On follow-up after 2 years, he neither had pruritus nor jaundice, and the skin changes including lichenification had disappeared [Figure 1b]. There was also an improvement in his height (97 cm) and weight (18 kg). He has intermittent diarrhea, which could probably be due to a high concentration of bile salts in the intestine.
The Saudi Journal of Gastroenterology
DISCUSSION PFIC 1 is a systemic inherited disorder with hepatic, intestinal, and pancreatic manifestations. The common clinical features of the PFIC group are jaundice in early infancy, hepatosplenomegaly, severe intractable pruritus, growth failure, and progression to cirrhosis. Diarrhea, epistaxis, pancreatitis, gallstones, and hearing loss are some additional features which may be seen in PFIC 1. The constant scratching and rubbing of the extremities, resulting in marked thickening and lichenification of the skin on both hands and feet with enlargement of the fingers and toes resembling those of manual laborers, has been reported in PFIC 1. [3] This classical presentation was seen in our case [Figure 1]. Some characteristic biochemical features which help in identifying and differentiating PFIC 1 from other familial cholestatic disorders include mild to moderate elevation of aminotransferases, low or normal GGT, normal serum alpha fetoprotein, low serum cholesterol, and elevated serum and urine bile acids. The pathological feature of bland cholestasis and cirrhosis with granular bile on EM is also a manifestation of PFIC 1.
[4] The management includes nutritional support using medium chain triglycerides, water- and fat-soluble vitamins, and calcium supplementation. The distressing problem in PFIC is the intractable pruritus that may not respond to therapy, as in our patient. UDCA is recommended at a dose of 20 mg/kg/day. Ondansetron, rifampicin, phenobarbitone, naloxone, and propofol have all been tried with variable results. Surgical options such as biliary diversion have shown some beneficial effects in PFIC. They decrease the amount of bile acids in the enterohepatic circulation by 50% and thereby decrease the preload to the biliary canaliculus. [5] The best results, such as relief of pruritus, increase in growth velocity, and slowing or arrest of disease progression, are observed when surgery is done early in the course of the disease, before severe fibrosis. In 1988, Whitington and Whitington performed cholecystojejunocutaneostomy as a form of partial external biliary diversion for relieving pruritus by increasing the elimination of accumulated bile acids. [6] The ileal bypass procedure was proposed to combat the problem of the stoma. [7] Bustorff-Silva et al. [2] reported and performed cholecystojejunocolonic anastomosis in two children as a partial internal biliary diversion to avoid the stigma and complications of a stoma and also to prevent malabsorption. Biliary diversion may also delay the need for liver transplant. To the best of our knowledge, our patient, who underwent cholecystojejunocolic anastomosis and partial biliary diversion in view of intractable pruritus and showed a good response, is the third case to be reported for the procedure. The clinical and laboratory parameters in this child following biliary diversion were so gratifying as to make one consider biliary diversion as the treatment for children with PFIC 1. However, more studies and long-term follow-up are necessary before universal recommendation.
Assessing performance of horticultural farmers producer companies: Comparative case study Every year the horticultural sector of India faces a huge quantity of food wastage due to lack of processing, value addition and post-harvest handling. A Farmers Producer Company (FPC) can mitigate this loss through ensuring better value chain management. There are several horticulture-based FPCs established in different parts of India. They have grown very fast and are competing with agro-industries. The present study aimed to assess the performance of FPCs working in the horticulture sector. The study was conducted in Maharashtra State of India by selecting three FPCs working in the horticultural sector. Performance of these FPCs was assessed through an Effectiveness Index developed for this study. Seven components, viz. functional effectiveness, increase in income, increase in farmers' share in consumers' rupee, inclusiveness, sustainability of company, farmers' satisfaction and empowerment, were included in the index by following standard index-forming protocol. Sahyadri Farms was found to be the best performing among the selected FPCs regarding effectiveness, with a mean index score of 63.69, followed by Vasundhara Agro Producer Company Limited (50.20) and Junnar Taluka FPC Ltd. (41.29). INTRODUCTION India is the second largest producer of fruits and vegetables in the world and produces 260 million tons of food grains. Despite this, India also faces huge postharvest losses owing to lack of proper handling practices and storage infrastructure. These postharvest losses incurred due to inadequacies in storage and logistics account for Rs. 92,651 crores ($13 billion) per year. According to the Committee on Doubling Farmers' Income, the proportions of produce that farmers are unable to sell in the market at the national level are 34% and 44.6% for vegetables and fruits respectively, and 40% for fruits and vegetables together.
This means that, every year, farmers lose around Rs 63,000 crore for not being able to sell produce in which they have already made investments. It is also reported that only 10-11% of fruits and vegetables cultivated in India can be saved using cold storage facilities, due to the expenses involved and lack of suitable facilities. Finance is another setback. To avert storage woes and lack of finance and liquidity, horticultural farmers are compelled to sell their produce immediately, within days of harvest, at any prevailing rate due to the high perishability of horticultural crops. This results in distress sale, and farmers do not realize the best price because of supply glut in the market. A Farmers Producer Company may reduce this loss through improved value chain management. In India, the producer company concept has arisen as a new generation farmers' organization. Fruits and vegetables are a suitable sector which can provide 2-4 times higher income to farmers than cereals. Nearly 23 per cent of total registered FPCs are working exclusively in horticulture, and many more are working in a mixed approach, i.e. a combination of agricultural and horticultural crops as production options. FPCs can act as a potential driving force for agricultural and rural development. They are working as 'engines' of development that can uphold the banner of rural development even beyond the local level, offering benefits to the rest of society (Blokland and Goue, 2007). In reality, FPCs have the favorable position of scale economies applying to input purchases and to the aggregation, processing and marketing of the farmers' produce in bulk. In both these cases, FPCs can bargain better prices. Through vertical and horizontal coordination as well as forward and backward linkages, FPCs engage in value-addition processes, which has not only enhanced their bargaining power but also increased the share in consumers' rupee. FPCs have minimized the risk of farmers through promoting crop and livestock insurance.
It has diminished the cost of information seeking, connecting smallholders to more complex market situations and making farmers acquainted with the competitive business environment through capacity building and empowerment. There are several horticulture-based FPCs in Maharashtra which have grown very fast and are competing with agro-industries. The strategic and technological innovations in the value chain, clear vision, strong planning and technical insight are the core factors which made the FPC a leader among all grape exporting agencies. There is immense potential for FPCs to work similarly in the area of value-chain management, so that the huge amount of post-harvest losses can be saved and utilized for home consumption and exports. As the model is new, fewer studies have been carried out so far on assessing the performance of Farmers Producer Companies in horticulture. Therefore, the present study is aimed at assessing the performance of FPCs working in the horticulture sector. Study area: The study was conducted in Maharashtra State of India. The state is one of the pioneer states in India where the growth of Farmers Producer Companies is remarkably high. Three successful companies working in vegetables, fruits and overall horticulture and processing were selected from the state through purposive sampling based on five specific criteria, viz. i. the FPC has been working successfully for more than 5 years; ii. it has a sizeable membership (more than 2000 members); iii. turnover has been more than Rs. 50 lakhs; iv. the FPC has several reported success stories; and v. it has a unique business model. The criteria-based purposive sampling was useful to select effective and functional companies working at the ground level. Accordingly, three companies at different stages of growth were selected. Junnar Taluka FPC Ltd. is an FPC in the initial development stage, working mainly in the vegetable sector.
Vasundhara Agro Producer Company Limited was selected as a company working mainly in fruits and some vegetable crops, at a moderate stage of growth. Sahyadri Farms, working in both fruits and vegetables, was selected for the study as it has achieved a tremendous growth level. The data were collected from the members of these FPCs in Pune and Nasik Districts of Maharashtra. Operationalization of performance In this research, we have operationalized performance as how effectively the producer company carries out its functions. It is closely related to organizational performance, which indicates how successfully an organized group of people with a particular purpose performs a function. In an organization like a Farmers Producer Company, it is important to take care of farmers' satisfaction, empowerment, increasing income of farmers, ensuring value chain management, functional easiness, inclusiveness, etc. By combining all these, an Effectiveness Index was prepared, which is used in this study. Research design and survey instrument In this study, an ex-post facto research design was used. A semi-structured interview schedule was prepared. The interview schedule consisted of eighteen different socio-personal and socio-economic variables of respondents, and an index was formulated to measure the effectiveness of a horticulture-based producer company. The effectiveness index included seven components: (1) functioning efficiency, (2) increase in income, (3) increase in farmers' share in consumers' rupee, (4) inclusiveness, (5) sustainability of the Farmers Producer Company, (6) farmers' satisfaction and (7) empowerment. The index was prepared based on the above-mentioned parameters and was calculated by the following equation.
E_FPC = W1(FE) + W2(I) + W3(FSC) + W4(Inc) + W5(S) + W6(FS) + W7(E), where E_FPC indicates the effectiveness of the particular company; (1) FE = functioning effectiveness; (2) I = increase in income; (3) FSC = increase in farmers' share in consumers' rupee; (4) Inc = inclusiveness; (5) S = sustainability of the Farmers Producer Company; (6) FS = farmers' satisfaction; (7) E = empowerment; and W_i is the respective weight calculated using the Analytical Hierarchy Process (AHP) from experts' ratings of the seven components, based on Saaty (2008) and Mukherjee et al. (2018c). After consultation with the experts and reviewing a vast volume of literature, a rating scale was prepared for constructing the effectiveness index comprising the seven components. The effectiveness index was prepared following standard procedure. Twenty experts working in top management for promoting farmers' organizations were consulted, and reviews of related studies were considered for constructing the index. The effectiveness index comprised the following seven components. (1) Functional effectiveness: A functional efficiency index with a 1-5 point scale was developed to evaluate the functioning of FPCs. The ten most relevant dimensions were studied in this index measuring functional effectiveness. Summation of the scores of the 10 functioning variables used in the study yielded the functioning score of a single respondent. The scores of members of a particular group were added together to get the functioning score of that FPC. The index was calculated by dividing the actual score by the maximum possible score of functioning. A similar method was followed by Abadi (2010). (2) Increase in income: The increase in income was calculated by comparing the earlier income per year (i.e. before the intervention of the FPC) and the present income per year from the agricultural produce (i.e. after the intervention of the FPC). (3) Increase in farmers' share in consumers' rupee: This was calculated by comparing the earlier farmers' share (i.e.
before the intervention of the FPC) and the present farmers' share of the agricultural produce (i.e. after the intervention of the FPC). (4) Inclusiveness: The component inclusiveness was added as a dimension of effectiveness to study how inclusive the companies were in including the backward classes and the poorest of the poor. Inclusiveness was studied by an index developed for the study, covering the category of farmers, caste, gender and financial class. (5) Sustainability of the company: Sustainability of the company is very important; if a source of income is not sustained, it cannot provide livelihood security. The sustainability of the FPC was measured by a schedule developed for the purpose. This included the growth trends of fixed and capital assets of the company and, most importantly, its human resources. (6) Farmers' satisfaction: Farmers' satisfaction with the FPC services, based on the selected dimensions, was measured by an index developed for that purpose following the procedure given by Edwards (1957). This index consisted of 15 statements with a 1-5 point scale to which the respondents were asked to give their responses. The responses were averaged to get each respondent's satisfaction. (7) Empowerment: Empowerment of farmers due to joining the FPC was measured by an index developed for the purpose following the procedure given by Edwards (1957). This index consisted of 14 statements covering all aspects of empowerment, with a 1-5 point scale on which the respondents were asked to give their responses. The responses for all seven components of the Effectiveness Index were normalized by z transformation and then averaged. Similar methods were also followed by Mukherjee et al. (2011) and Nikam (2013).
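The index construction described above (z-normalize each component across members, then combine with the AHP weights W_i) can be sketched as follows. The member scores are synthetic, and the weights for functioning effectiveness and inclusiveness, which the text does not state, are assumed so that the seven weights sum to one:

```python
# Hedged sketch of the Effectiveness Index: each of the seven component
# scores is z-normalized across the member sample and then combined as a
# weighted sum. The eigenvalue weights for I, FSC, S, FS and E follow the
# AHP results reported in the text; FE and Inc weights are ASSUMPTIONS.

import statistics

WEIGHTS = {
    "FE": 0.07, "I": 0.14, "FSC": 0.11, "Inc": 0.05,   # FE, Inc assumed
    "S": 0.20, "FS": 0.17, "E": 0.26,
}

def z_scores(values):
    """z-normalize a list of member scores (population std dev)."""
    mu, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

def effectiveness(component_samples):
    """component_samples: dict component -> list of member scores.
    Returns one weighted, z-normalized index value per member."""
    normalized = {c: z_scores(v) for c, v in component_samples.items()}
    n = len(next(iter(component_samples.values())))
    return [sum(WEIGHTS[c] * normalized[c][i] for c in WEIGHTS) for i in range(n)]

samples = {c: [0.4, 0.6, 0.8] for c in WEIGHTS}   # three synthetic members
print([round(e, 3) for e in effectiveness(samples)])
```

The published scores (e.g. 63.69 for Sahyadri Farms) are evidently rescaled to a 0-100 range; that final scaling step is omitted here.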
The weights for each component were assigned from experts' judgments using the Analytical Hierarchy Process (AHP), as depicted in Table 1: empowerment was weighted highest (eigenvalue = 0.26), followed by sustainability of the producer company (eigenvalue = 0.20) and member farmers' satisfaction (eigenvalue = 0.17). Increase in income and share in the consumer's rupee were weighted next, with eigenvalues of 0.14 and 0.11 respectively. The consistency ratio of the AHP was 0.147 and the consistency index 0.0991; the consistency index score, falling below the 0.1 threshold, indicated consistency in the judges' ratings.

Sampling and data collection
Focused group discussions (FGDs) and a series of key informant interviews were carried out to identify the aspects of effectiveness. Additionally, previous effectiveness studies were reviewed to prepare the survey instrument. The survey instrument was sent to experts for their comments, and possible modifications and improvements were made based on their recommendations. For easy understanding by the farmers, the instrument was translated into Hindi (the common language) and a pilot test with 20 farmers was done to further clarify the questions. In-depth interviews were conducted with key informants to ensure triangulation of the data. Proper care was taken to make the respondents comfortable and unbiased recording of the data was ensured. The data were collected from 50 randomly selected members of each company, but due to incomplete responses some interview schedules were rejected. Finally, samples of 34 respondents of Vasundhara Agro Producer Company, 37 respondents of Junnar Taluka FPC Ltd. and 38 respondents of Sahyadri Farms were considered for analysis.

Statistical analysis: Comparisons of the socio-economic characteristics of farmers across the companies were done through non-parametric tests. For the statistical analysis, the data were analyzed using MS Excel and SPSS 20 software.
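The AHP consistency check reported above follows Saaty's standard formulas, CI = (λ_max − n)/(n − 1) and CR = CI/RI(n). A minimal sketch; the λ_max value in the test below is hypothetical, chosen only to reproduce a CI of 0.0991 for a seven-component matrix, and RI values vary slightly between published tables:

```python
# Saaty's random consistency index (RI) for matrix orders 1-10
# (values differ slightly between published tables; these are common ones).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_consistency(lambda_max, n):
    """Return (CI, CR) for an n x n pairwise-comparison matrix,
    given its principal eigenvalue lambda_max."""
    ci = (lambda_max - n) / (n - 1)   # consistency index
    return ci, ci / RI[n]             # consistency ratio
```

A perfectly consistent matrix has λ_max = n, giving CI = CR = 0; judgments are conventionally accepted when CR is below 0.1.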
The objectives of the company are to collectivize the small vegetable growers; improve standards of living through better use of improved technology for vegetable production, processing and marketing; minimize environmental degradation while maintaining sustainable profits; and provide consultancy in the field of horticulture, especially for the promotion of organic farming.

Sahyadri Farms
'Sahyadri Farms' has been working as a Farmers Producer Company since 2011 in Nasik, Maharashtra. It is a 100 percent farmer-owned and professionally managed producer company. It is operationally sound, making the best use of production and processing technology. Today, the company is a leading exporter of grapes in India, accounting for ~14 percent of the country's total grape exports to Europe. More than 3000 farmers work day and night for the company. It is India's leading FPC producing, marketing and exporting frozen vegetables, value-added fruit products, etc. to Germany, the USA, Norway and many other countries.

Socio-economic profile of FPC members
The socio-economic profile of the selected members from all three FPCs was studied for comparison. The results are presented in Table 3, which indicates that the majority of the farmers were in the young categories for Sahyadri Farms (54.05%) and Junnar Taluka FPC (44.70%). In the case of VAPCOL, the majority of the members were found to be much older and more experienced than the others. No significant difference between age groups was recorded. Also, the majority of the respondent members were male in the cases of both Sahyadri Farms and Junnar Taluka FPC, but in the case of VAPCOL the majority (52.90%) were female. A similar pattern was recorded for level of education and family size. The majority of the VAPCOL farmers were small and marginal, holding less than 1 hectare of land. In the case of Sahyadri Farms, 64.90% of the farmers were marginal, whereas 35.10% had landholdings of more than 1 hectare.
In the vegetable-based farmer producer company at Junnar Block, 65.80% of the farmers were marginal. Social participation is an important parameter of socio-economic status. The highest social participation was recorded for VAPCOL farmers (94.10%), followed by Sahyadri Farms (89.20%) and Junnar Taluka Farmer Producer Company (86.60%). A similar pattern can be seen for extension agency contact, where the majority of VAPCOL farmers (97.10%) had a high level of extension agency contact, followed by Sahyadri Farms (89.20%). Regarding training experience, it was found that all of the producer company members had attended two or more trainings in their lifetime. The majority of them were members of other groups like self-help groups, co-operatives, etc. The number is highest in the case of VAPCOL because it follows an institutional model in which several cooperatives combine to form the farmers producer company, so apart from their membership in the FPO, the VAPCOL farmers were also associated with cooperatives. Sahyadri Farms developed from self-help groups, which is why 86.50% of its farmers had membership in other groups; the case is different for Junnar Taluka, where individual farmers associated with each other to form the company, so only 55.30% of the farmers were associated with other groups. Regarding progressiveness and attitude, it was found that the majority of the farmers in all the groups were progressive in nature and had a positive attitude towards FPCs. The increase in annual income was found to be highest in the case of Sahyadri Farms, in which 64.90% of the members were earning more than 3 lakh after joining, whereas 35.10% earned between 2 and 3 lakh per year. The majority of the farmers of Junnar Taluka (57.90%) were earning 2-3 lakh, and 42.10% of them had enhanced their income to Rs. 1 to 2 lakh after joining the FPC.
VAPCOL is an association of very small farmers, and it was found that the majority (88.20%) had been able to enhance their income by up to 1 lakh per annum after joining the company, while 11.8% by 1 to 2 lakh per annum.

Comparative effectiveness of selected Farmers Producer Companies
It is essential to assess the effectiveness of FPCs working in the horticulture sector. Producer-company-wise mean scores of the components of effectiveness are depicted in Table 4. In terms of functional efficiency, all the companies scored more than 4.5 out of 5, which is quite high and indicates that the companies were functioning well. The highest score was obtained by Sahyadri Farms (4.55), as it has its own management team and qualified salaried staff. In functional efficiency the companies are nearly at par with each other. The effectiveness scores of the different Farmers Producer Companies are depicted in Table 5. The overall index scores indicate that Sahyadri Farms is the most effective, with a mean score of 63.69, followed by Vasundhara Agro Producer Company Ltd (50.20) and Junnar Taluka FPC Ltd. (41.29). The reasons are that these companies are good at empowering their members, have a sustainable business venture, their members were highly satisfied with the performance of the company, and they are effective in enhancing farmers' income. To study whether the companies significantly differ in effectiveness, a one-way ANOVA was conducted. The F value was 68.142, which was significant at the 1 per cent level of significance, showing that the companies significantly differed from each other in effectiveness (Table 6). As per the data, the highest percentage increase in the annual income of member farmers relative to before joining the company was observed in Sahyadri Farms (67.41%). The results showed that farmers' income had been enhanced by 32 to 67 per cent after joining the Farmers Producer Companies.
Farmers' share in the consumer's rupee was another component, which indicates the level of value addition. It was found that the farmers' share in the consumer's rupee had increased by 32-35 per cent over the earlier level, mainly due to value addition at the producer company level. The highest increase was found for VAPCOL (34.71%), owing to the well-established marketing channel of the company. Besides this, door-to-door pickup, delivery to the retail market and marketing efficiency contributed to the change. Inclusiveness is another indicator used in this index to examine whether the companies work with the poor and backward sections of society. It was found that all the FPCs were inclusive in nature. VAPCOL scored 0.76 out of 1 and Junnar Taluka FPC Ltd. scored 0.75, whereas in the case of Sahyadri Farms the members were already working in grapes and a large number of them were well-off before joining the FPC, which is reflected in the lower inclusiveness score (0.67). Sustainability of an organization is a key factor in effectiveness, and the big FPCs scored better on this parameter; Sahyadri Farms, VAPCOL and Junnar Taluka FPC Ltd. got the index score 0.92 (Mukherjee et al., 2020). As per the experts' rating, empowerment was weighted highest (0.26); the FPCs that ensured better empowerment of farmers through training and capacity-building exercises in horticultural products gained the major weightages. Sustainability of income was another parameter realized to be important in the effectiveness of FPCs. It depends upon sales growth, membership growth, successful ventures made, profit growth, market linkages and several other factors. Farmer producer companies can play an important role in sustainable agricultural intensification for smallholders, particularly by addressing constraints like the size of landholding and access to credit, irrigation and marketplaces (Reddy et al., 2020).
Satisfaction of the producers is the next important index parameter, which includes timeliness of input delivery, quality of service, dividend distribution, income enhancement, etc.

CONCLUSION
In this study, an attempt was made to measure the performance of horticulture-based farmers producer companies with an effectiveness index having seven components, namely functioning efficiency, increase in income, increase in farmers' share in the consumer's rupee, inclusiveness, sustainability of the Farmers Producer Company, farmers' satisfaction and empowerment. The component empowerment was weighted highest, followed by sustainability of the producer company, members' satisfaction and increase in income. Sahyadri Farms was the most effective, with a mean score of 63.69, followed by Vasundhara Agro Producer Company Limited (50.20) and Junnar Taluka FPC Ltd. (41.29). The reason may be that these companies are good at empowering their members, have a sustainable business venture, their members were highly satisfied with the performance of the company, and they are effective in enhancing farmers' income. The three parameters farmers' empowerment, FPC sustainability and farmers' satisfaction cumulatively contribute 63% of the index weights. To be effective, horticultural FPCs need to focus on these three parameters most.
Properties of bleached pulp sheets of avocado wood (Persea americana Mill.) pulped by Kraft and Soda processes
Chips of avocado wood (Persea americana Mill.) were pulped by means of conventional Soda and Kraft pulping processes. The pulps were bleached with an elemental-chlorine-free sequence O-D1-Eop-D2, pre-setting reaction conditions for the first chlorine dioxide stage (D1). The results show that during the chemical pulping process, avocado wood is easier to cook than other hardwoods such as eucalyptus. The avocado pulp also showed very good bleachability, reaching brightness levels of up to 92% ISO compared to 84% for eucalyptus after the ECF bleaching sequence. The avocado Kraft pulps required more chemical input in the bleaching sequence than the Soda pulps. On the other hand, the physico-mechanical properties of the pulp were not notably reduced by the bleaching process, the Kraft pulp being stronger than the Soda pulp. Strength properties of avocado are similar to those of eucalyptus; therefore this raw material constitutes a worthwhile choice for cellulosic fiber supply.

INTRODUCTION
The avocado tree (Persea americana Mill.) is probably native to México and Central America (Record & Hess, 1944; Kopp, 1966). It is cultivated for fruit production in many countries around the world. Orchard trees grow to an average height of 10-15 m and a diameter of around 60 cm, and tend to form numerous low branches, with shape and dimensions which do not lend themselves to quick and easy conversion into sawn timber. According to FAO reports (FAOSTAT, 2004), around 416.000 ha are cultivated with avocado trees worldwide. The total world production of avocado fruit is estimated at 3.188.000 tons. Large avocado plantations were established in Mexico in the 1960s and 1970s, mainly in the states of Michoacán and Puebla. In fact, México is the world's largest producer, with about 102 500 ha under cultivation, planted with ca.
100 trees per ha, and a total annual production of 1.040.000 tons of fruit. This represents nearly one third of world production (FAOSTAT, 2004).

It is evident that the main concern of the avocado growers is fruit production, which is strongly affected by the absence of light and air due to the dense canopies of the plantations. To increase or maintain current avocado production, the plantations require frequent pruning and thinning operations generating large amounts of biomass, as about 10% of the planted land must be cleared annually in order to avoid infection of healthy trees by those infested with plagues. In México about 2 million trees are thus removed yearly from the plantations, generating approximately 500.000 cubic meters of round wood. Most of this raw material is simply burned without deriving any economic benefit. Only a small proportion is converted into sawn timber for packaging crates, parts of musical instruments, etc. (López, 1999).

As a consequence, a specific avocado wood research program was proposed with the general objectives of determining the properties of the residual wood, searching for potential uses and, in particular, examining the suitability of this raw material for pulp and paper production. Papermaking with avocado wood is of particular interest because
• the pulp and paper industry in México has a very limited supply of raw materials, and
• avocado wood fibers possess adequate morphological characteristics for papermaking (Silva et al., 1999).

OBJECTIVE
The objective of this study was to produce a bleached pulp of 88-90% ISO brightness (Elrepho) with chips of avocado wood (Persea americana Mill.) pulped by means of the Kraft and Soda processes, meeting the environmental considerations for the mills of the future.
MATERIALS AND METHODS
Debarked logs of avocado wood from regular plantation maintenance were chipped with a pilot-scale Bruks Mekaniska AB type 980AA chipper with 2 radial blades and then classified by length and thickness. The chips that passed through the 8 mm mesh sieve but were retained on the 7 mm mesh sieve were selected for pulping, according to method D35X (Hatton, 1979).

The screened chips were cooked in one-liter stainless steel digesters using the Kraft and Soda pulping processes with the objective of producing pulps with a Kappa number of approximately 18 units in both pulping processes. The conditions of the pulping stage were as follows: 13-14% of active alkali (AA) as Na2O, maintaining a constant liquor-to-wood ratio of 5:1 and a cooking time of 90 minutes at 170°C. The produced pulps were separated from the residual liquor, washed, and passed through a 0.15 mm slotted flat screen.

The following parameters were evaluated for the pulps passed through the screen: residual lignin by Kappa number (TAPPI T-236), viscosity (TAPPI T-230), percent of rejects and yield. Residual active alkali in the liquor was evaluated by means of a potentiometric titration.

The following elemental chlorine free (ECF) bleaching sequence was applied to the screened pulps: oxygen reinforced with soda (O), first chlorine dioxide (D1), oxygen-peroxide extraction (Eop) (Senior, 1998), and finally a second chlorine dioxide (D2), with the reaction conditions shown in Table 1. The strength properties of bleached and unbleached pulps were evaluated using TAPPI standard methods.
Pulping
The principal parameters of the pulps obtained from both processes are listed in Table 2; those with a nearly equal Kappa number (approximately 18), indicated by an asterisk, were chosen for this essay. Accordingly, 13,5% of AA produced a Kraft pulp with a Kappa number of 17,5, a brightness value of 37,2% ISO, and a viscosity of 27,3 mPa.s. On the other hand, the Soda process yielded a pulp with a Kappa number of 17,7, a brightness value of 41,2% ISO, and a viscosity of 14,2 mPa.s. The lower brightness resulting from the Kraft process must be attributed to the formation of chromophoric groups such as catechols and, to a lesser degree, hydroquinones. Moreover, the specific lignin selectivity of the reagents used in the two processes is different, resulting in a higher viscosity of the Kraft pulp (Gellerstedt et al., 1984).

When comparing the above results with those previously obtained with eucalypt (Eucalyptus globulus, E. dunnii) Kraft pulp, i.e. a Kappa number of 15,3 with 15% AA (Fernández, 1988), the avocado wood proved to be easier to delignify than the eucalypts used for pulping in México.

Bleaching
1. Exploratory bleaching, D1 stage
Various preliminary tests were performed to establish the charge factor (CF) for chlorine dioxide (% ClO2 = Kappa number x CF) and also the soda charge as pH buffer during the D1 stage (Figs. 1 and 2), in order to obtain dependable information on optimum chemical consumption and brightness. These tests resulted in a charge factor for the Kraft pulp of 0,24, with a reagent load of 2,59% ClO2 (17,5 x 0,24), and 0,25% of alkali to be added (on OD pulp) to control the pH (Fig. 2). In comparison, the Soda pulp needed less reagent, with a charge factor of 0,18 (1,62% ClO2), while further pH adjustment was not required (Fig.
1). This different behavior between Kraft and Soda pulps can be explained by the fact that the residual lignin of Kraft pulps is very difficult to remove during the various steps of the bleaching sequence. After 90% of the wood lignin has been eliminated, when the Kappa number is around 40, the selectivity of the kraft liquor decreases and a degradation of the carbohydrates is initiated (Gellerstedt et al., 1984). In addition, the presence of covalent bonds between the residual lignin and carbohydrates may also impede the removal of lignin from the pulp (Yamasaki et al., 1981).

O-D1-Eop-D2 sequence
During the bleaching sequence the behavior of both Soda and Kraft pulps is similar (Table 3), except during stage D1, which has been analyzed in the previous paragraph. The Soda pulp shows a 3% higher initial brightness than the Kraft pulp and maintains this slightly superior level throughout the entire bleaching sequence. On the other hand, the viscosity of the Kraft pulp decreases by 7 mPa.s during the bleaching sequence, whereas that of the Soda pulp decreases by only 3 mPa.s. Nevertheless, the Soda pulp possesses a lower viscosity at the end of the bleaching process due to its lower initial viscosity.

Handsheet strength properties
Tensile strength (Fig. 3) as well as burst (Fig. 4) and tear (Fig. 5) indexes of the Soda and Kraft pulps were determined. For comparison, the respective data for bleached eucalypt pulp handsheets (Fernández, 1998) are included in the three figures. The bleaching process did not significantly reduce any of the strength properties assessed. Moreover, the avocado Kraft pulp showed a rather high tensile strength compared to all the other pulps. Equally, the tear index of the Kraft and Soda pulps (Fig. 4) remains largely unaffected by the bleaching process except in the lower range of the refining degree. The burst index (Fig.
5) increases with bleaching for the Kraft pulp, whereas the Soda pulp does not present changes induced by bleaching. On average, the strength properties of the Kraft pulps were higher than those of the Soda pulps. The comparison of the avocado pulps with the eucalypt pulp reveals similar values of the burst index, a higher breaking length, and lower values of the tear index, except for the avocado bleached Soda pulp, which has a tear index similar to that of the eucalypt pulp.

CONCLUSIONS
Kraft and Soda pulps of avocado wood subjected to a bleaching sequence O-D1-Eop-D2 could attain competitive commercial brightness levels (88-92% ISO) with a low chlorine dioxide concentration. This is more evident for the Soda pulps, which are also easier to bleach than the Kraft pulps. The viscosity of the Soda pulps is lower than that of the Kraft pulps; accordingly, the strength properties of bleached and unbleached Kraft pulps are higher than those of the equivalent Soda pulps. Avocado wood pulps compare favorably with commercial eucalypt pulps (Eucalyptus spp.) and thus constitute a viable alternative as a cellulose fiber supply.

Figure 1. Effect of ClO2 charge factor (CF) on the characteristics of the Soda pulp during the exploratory bleaching stage D1
Figure 2. Effect of alkali on the Kraft pulp characteristics during the bleaching stage D1, CF = 0,24
Figure 3. Breaking length of Kraft and Soda unbleached and bleached pulps as a function of the refining degree
Table 1. Bleach sequence conditions. 0,5% of MgSO4 was added during the O stage as carbohydrate protector; the consistency in all stages was 10%
Table 2. Results of Kraft and Soda chemical pulping processes
Table 3. Results of the bleaching sequence of avocado wood Kraft and Soda pulps. (p) = peroxide; (s) = soda
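The exploratory D1-stage dosing rule quoted in the Bleaching section (% ClO2 = Kappa number x CF) is simple enough to sketch. Note one assumption in the worked check below: the Soda pulp's reported 1,62% charge at CF 0,18 corresponds to a Kappa number of 9 entering the formula, presumably the value measured after the oxygen stage; this inference is not stated explicitly in the text.

```python
def clo2_charge(kappa_number, charge_factor):
    """D1-stage chlorine dioxide charge, % on pulp: % ClO2 = Kappa number x CF."""
    return kappa_number * charge_factor
```

Raising the charge factor trades chemical consumption against brightness gain, which is what the preliminary tests in Figs. 1 and 2 were mapping out.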
Studies in the Ericoideae. III. The genus Grisebachia
A revision of the genus Grisebachia Klotzsch in which eight species are recognized is presented. The genus belongs to the Ericaceae-Ericoideae and is endemic in the south-western part of the Cape Province. The work revealed a high degree of variability among the species, necessitating the reduction of seven species to infraspecific rank, seven species to synonymy and the rejection of one species as imperfectly known. One new species, G. secundiflora E. G. H. Oliver, is described.

HISTORICAL OUTLINE
When Klotzsch reclassified the subfamily Ericoideae in 1838, he described the genus Grisebachia consisting of eight species: G. ciliaris sensu Klotzsch, G. hispida Klotzsch, G. involuta Klotzsch, G. zeyheriana Klotzsch, G. incana (Bartl.) Klotzsch, G. hirta Klotzsch and G. plumosa Klotzsch. Of these, only G. incana and G. plumosa are retained in the present revision. Later in the same work he described Eremia parviflora, which is now recognized as a species of Grisebachia. The following year Bentham (1839), when revising the whole family Ericaceae, retained Grisebachia but in an enlarged form. He upheld Klotzsch's eight species and added G. dregeana and G. serrulata, both of which have now been reduced to synonymy. He also included Finkea Klotzsch, with two species, as a section. N. E. Brown (1906) correctly placed the latter in the genus Acrostemon of Klotzsch. In 1876 Bentham again revised the family and included a further two of Klotzsch's genera, Acrostemon and Comocephalus, under Grisebachia. The genus was then divided up into three sections based on the shape of the corolla, the hairiness of the filaments and the ovary complement. In Die Natürlichen Pflanzenfamilien, Drude (1897) took a very conservative view of the Ericoideae and placed Grisebachia as defined by Bentham as one of four sections in the genus Eremia D. Don. N. E.
Brown (1906) in Flora Capensis changed the system of the earlier workers and adopted Grisebachia as originally construed by Klotzsch. He retained Acrostemon as a distinct genus and reduced Comocephalus and Finkea to synonymy under it. He retained all eight of Klotzsch's species and placed Eremia parviflora correctly in Grisebachia, but as G. eremioides, which had been named by MacOwan in 1890. He also added ten of his own species, of which only three are upheld as distinct species in the present revision, namely G. rigida, G. nivenii and G. minutiflora. Phillips (1926) accepted Brown's work in its entirety in the first edition of his Genera. In 1944 he put forward his proposals for a reclassification of the family in South Africa for the second edition of his Genera (1951). He presumably based his ideas on the very conservative views of Drude. He recircumscribed all of the genera and placed Grisebachia under Eremia together with seven other genera, some of which are quite unrelated. The genus has been retained in Dyer's Genera (Oliver 1975) in the same form as adopted by N. E. Brown in Flora Capensis. When the present revision was undertaken the genus Grisebachia consisted of 21 species. As a result of finding numerous variations and overlapping of characters the number of species has been reduced to seven. One new species, G. secundiflora, has been added.

MORPHOLOGY
In habit most of the species of Grisebachia are erect, often forming compact shrublets. G. parviflora is usually sparse and spreading among rocks and vegetation, and G. secundiflora is compact but rather sprawling. The branches of all species are never entirely glabrous. Most have pubescent to pilose or tomentose branches when young, sometimes with simple to plumose stout hairs intermingled. These may be gland-tipped. The leaves are all typically ericoid with no open-backed forms and are mostly 3-nate. In G.
plumosa subsp. hispida they are always 4-nate, while in subsp. pentheri they can be occasionally 4-nate in between 3-nate. Most leaves are adpressed, with one exception, G. ciliaris subsp. multiglandulosa, where they are recurved spreading. The indumentum of the leaves is very variable and disjunctions have been used for taxonomic division. Stout simple or plumose hairs may or may not be present on the leaves and may be confined to the margins or occur on the adaxial surface as well. In many cases these stout hairs may fall off and remain only as short stubs, which can easily be overlooked. The flowers of all species are terminal, either at the ends of the main branches or more often at the ends of short lateral branchlets. In G. parviflora these short branchlets may be aggregated together to form a loose pseudospike and in G. secundiflora the pseudospike is compacted and secund. The 3-bracteolate flowers are usually 4-12 in a head or as much as 36 in G. minutiflora. The bracteoles vary considerably in shape and size, particularly in G. plumosa and G. ciliaris where they are of taxonomic importance. The bracteoles may be equal to very unequal, with the median bracteole being considerably enlarged with an expanded base. The variation also occurs within a single inflorescence, where the bracteoles may be very unequal in the outer flowers to equal in the inner flowers (Fig. 1). The calyx in all species is 4-lobed or 4-partite. In G. ciliaris, G. incana, G. rigida and G. nivenii the sepals are free or very slightly joined at the base, whereas in G. plumosa, G. parviflora, G. minutiflora and G.
secundiflora they are joined for a quarter to three quarters of their length. In all species the sepals are more or less equal. Sometimes the lateral pair may be slightly narrower than the ab- and adaxial pair. The indumentum of the calyx is very variable and is used as a taxonomic character. In most species there are some stout simple to plumose hairs on the margins of the sepals and sometimes also on the abaxial surface. These may be gland-tipped in different taxa or in the same taxon in the young stages. The corolla is of two basic shapes in the genus. Five of the species, G. ciliaris, G. plumosa, G. rigida, G. incana and G. nivenii, have a corolla similar to that in the section Cyatholoma of the genus Erica. The corolla tube is more or less distinctly constricted in the middle, with an ovoid or obovoid base and cyathiform upper portion including the large lobes. This is evident in the fresh state and, in most of the species, in the pressed material. But care must be taken with material of G. ciliaris subsp. ciliaris, in which the small flowers easily lose this shape when pressed. Occasionally, when the constriction is not very marked, a campanulate shape occurs. In the three remaining species, G. parviflora, G. minutiflora and G. secundiflora, the corolla has no distinct constriction. In the first two species it is usually funnel-shaped or obconic, and in the third it is tubular or tubular with an inflated middle portion. In the majority of species, especially those with the constricted corolla, the corolla is pubescent to pilose in the middle region outside and also inside around the point of constriction. The number of stamens in all specimens examined was constantly 4 and is important in the generic classification. The stamens are free and have pilose filaments. The anthers are mostly manifest, being arranged just above the constriction in the corolla. In G.
secundiflora they are manifest to included. The majority of anthers are characteristically bipartite. In G. secundiflora there is a tendency for them to be bilobed. Awns are present in several species, but may be absent in anthers of the same flower. This character, used in the past for specific recognition, is of no use taxonomically. The anthers are all dorsally attached with an expanded apex to the filament. The pollen in all the material examined occurs as single grains, which are found in several of the other minor genera of the Ericoideae. The grains are tricolporate, the furrows being deeply channelled and almost as long as the cell. In shape the grains are mostly ellipsoid with flattened apices. In a few cases they are oblate, as in G. nivenii and some forms of G. plumosa and G. parviflora. The sculpturing of the surface in the first six species is either scabrate or microscabrate with a tendency for the element rods to become fused at their distal ends to form tecta. In the last two species the fusion is complete, giving an almost smooth appearance to the pollen surface (Fig. 2). The ovary is mostly 2-celled with a single pendulous subapical ovule in each cell. Very occasionally 3-celled ovaries occur, notably in G. parviflora subsp. pubescens. In G. secundiflora the ovary is constantly slightly obliquely 1-celled. Mature fruits that were found in a few species were hard-walled nuts with the walls often verrucose. They are apparently indehiscent and contain one or two very soft juicy seeds. Some fruits on Levyns 1367 (G. ciliaris subsp. multiglandulosa) collected in 1925 still contained soft juicy seeds. As in most genera of the Ericoideae, the ovary in Grisebachia is seated on a nectariferous disc which, in some cases, is very conspicuous. This suggests that all the species are insect pollinated. On a few occasions it was noted in the field that plants were visited by bees.
The stigma varies from simple to capitellate, which is in accordance with the insect pollination.

DELIMITATION OF THE GENUS

As defined in the present revision the genus Grisebachia is characterized by having 3 bracteoles, 4 sepals, which are free or partly fused, a 4-lobed corolla, 4 free stamens with bipartite anthers and an ovary with 2, rarely 3 or 1, cells and a single ovule in each cell. The important characters are the stamen number and bipartite anthers.

The uniformity of the genus has until now been recorded as very constant. It has been relatively easy to assign material to the genus when identifying Ericaceae. The 2-celled ovary, 4 stamens and bipartite anthers served to be a distinctive combination of characters. … only known from the type collections. An examination of a few flowers did not confirm this and, without investigating numerous flowers, it was decided to accept Brown's observations. A number of flowers with 3-celled ovaries was, however, found in G. parviflora subsp. pubescens.

There are several genera in the Ericoideae which have 2-celled ovaries and 4 stamens, i.e. Simocheilus, Acrostemon, Thoracosperma, Sympieza, Aniserica and Coilostigma, but none of them has the distinctive bipartite anthers found in Grisebachia. They all have a different appearance from the rather uniform Grisebachia.

With the discovery of G. secundiflora, this uniformity was slightly changed. This species was difficult to place satisfactorily in any of the genera due to its 1-celled ovary. The species was clearly allied to some species of Eremia and Grisebachia, but could not be included in the former on the grounds of having only 4 stamens and in the latter for having a 1-celled ovary. As the genus Eremia has recently been emended (Oliver, 1976) to include ovary variations from 4-celled to 1-celled but with constantly 8 stamens, it was decided to place this new species under Grisebachia next to G.
parviflora and emend that genus to include the 1-celled ovary rather than alter Eremia even further to include 4 stamens. The 1-celled ovary brought the species close to Anomalanthus and Syndesmanthus, neither of which it resembles.

The close similarity between Grisebachia and Eremia has been mentioned under that genus (Oliver, 1976). G. parviflora is superficially similar to Eremia curvistyla in flower form and habit. G. secundiflora looks very much like Eremia totta but, in both cases, the 4 stamens serve to distinguish them as species belonging to the two separate genera. The anthers of all species of Eremia except E. curvistyla are only bilobed and not distinctly bipartite as occurs in Grisebachia. In Eremiella outeniquae (Oliver, 1976) the anthers are also bipartite, but the rest of the floral characters are very different from Grisebachia.

Grisebachia and Eremia are sympatric to a great extent in the area from the Cedarberg to the Cold Bokkeveld. Undoubtedly they are very closely allied and have possibly evolved from some ancestral stock, which in turn arose from the genus Erica by reduction.

PHYTOGEOGRAPHY

The genus Grisebachia is endemic in the south-western and western parts (Fig.
3).

Compact erect shrubs up to 0,5 m, rarely 1 m, high. Branches numerous, erect, pubescent to tomentose with longer stout hairs in between, sometimes gland-tipped, sometimes becoming glabrous and grey. Leaves 3- or 4-nate, imbricate and adpressed to spreading-recurved, up to 4 mm long, linear-oblong to ovate, acute to obtuse, glabrous or puberulous to tomentose and canopubescent when young and with short to long stout plumose or simple eglandular or glandular hairs on the margins only or also on the abaxial surface, becoming glabrous on the abaxial surface and scabrid with the stout hairs falling off leaving short truncate stubs, rarely only crisped pubescent without any stout hairs. Flowers (1) 6-12 (16)-nate in terminal erect or nodding heads; pedicels short, 0,5-1,5 mm long, puberulous, sometimes with longer stouter hairs in between, sometimes gland-tipped; bracteoles median to adpressed, markedly unequal to subequal, the median from lanceolate to broadly ovate from an expanded base, 1,5-3,5 × 2,7 mm, the laterals usually smaller and narrower, mostly oblong-elliptic, all ciliate with short to long stout plumose to simple hairs which may be gland-tipped, with or without an even to sparse covering of similar shorter or equally sized hairs on the abaxial surface, sometimes just crisped pubescent. Calyx joined for one third to two thirds of its length, campanulate, sometimes 4-angled at the base, pink, glabrous to pubescent with the lobes ciliate with short to long simple to markedly plumose stout to soft hairs which may be gland-tipped, occasionally with an even to sparse covering of similar shorter or equally sized hairs on the abaxial surface; lobes narrowly to broadly deltoid, slightly sulcate at the apex. Corolla 4-lobed, up to 4 mm long, very constricted half to two thirds the way up above an ovoid to obovoid base, cyathiform above, often 4-angled, the angles alternating with the calyx segments,
pubescent to sparsely so in the middle region and pilose on the inside at the mouth or constriction; lobes broadly ovate to deltoid, obtuse, erect to slightly spreading, glabrous or slightly pubescent down the centre outside. Stamens 4, free; filaments narrowly linear, broadened at the point of attachment, pilose, white; anthers exserted or manifest, 0,5-1,3 mm long, obovate, dorsally attached, scabrous, muticous or rarely minutely awned, the awns occurring only in a few flowers or anthers; pore about half the length of the cell; pollen grains single. Ovary 2-celled, compressed, ovoid to oblate, obtuse to subacute, smooth to verrucose, rarely pilose at the apex, otherwise glabrous, seated on a disc; style exserted; stigma small, subcapitate; fruit hard, verrucose. Figs 5-8.

A species forming erect shrublets up to 0,5 m occurring on sandy coastal flats in the western Cape Province from Cape Town to Graafwater and on mountain slopes in the Clanwilliam area, flowering from June to September.

G. plumosa is characterized by the calyx being joined for one quarter to three-quarters of its length, the corolla-tube being distinctly constricted in the middle and the habit being erect.

In his treatment of the genus Brown recognized five species in the group with joined calyces, basing the separation on the form of the stout hairs on the calyx. The five species were G. plumosa, G. hirta, G. pilifolia, G. pentheri and G.
solivaga. On the small amount of material available to him this classification was feasible. But since Flora Capensis numerous collections of these species have been made. An examination of all this material showed a degree of variation in the diagnostic characters sufficient to warrant the five species being regarded as one single complex of taxa with discontinuities occurring only in one character and with partial separation in other characters between the constituent taxa. The oldest name applicable to this complex is G. plumosa Klotzsch.

It was also found that variation in the number of leaves per whorl in some specimens of G. pentheri overlapped with the number in the very similar G. hispida, which had been separated off from the rest of the species in the genus on this character. G. hispida therefore had to be included in the G. plumosa Complex.

The above six taxa were then examined as one complex group. It was found that the group could be divided into two form series on the position of the stout hairs on the leaves, one with the hairs only on the margins, the other with the hairs also scattered on the abaxial surface. The first series contained G. plumosa and G. solivaga, the second contained G. hispida, G. pentheri, G. pilifolia and G. hirta.

In the first form series the variation in calyx hairs from the very plumose material of G. plumosa in the Mamre area to the almost simple-haired specimens in the Aurora area showed an overlap with the type and only collection of G. solivaga from just west of Clanwilliam. Material which had been named as G. hirta was found to constitute a distinct new taxon more closely allied to G. plumosa on the leaf character. Furthermore, a collection made in the Gifberg (Oliver 4951) was found to be nearest to G. plumosa and, although somewhat anomalous, was referred to this series which then consisted of G. plumosa (including G.
solivaga) and the two new taxa.

In the second form series G. pilifolia showed considerable variation with an overlap in the distinguishing characters with G. pentheri, thus necessitating its reduction to synonymy. A close relationship with G. hispida was shown to exist with only a partial separation on the number of leaves per whorl and a distinct separation in the plumosity of the hairs. G. pentheri and G. hirta appeared to be very similar with only one character showing any disjunction, i.e. the position of the stout hairs on the abaxial surface of the calyx. This series therefore consisted of G. hispida, G. pentheri (including G. pilifolia) and G. hirta.

Within these two form series, recognizable on the single character difference of leaf hairs, several more or less distinct taxa could be distinguished, again on various single character differences. The complex occurs in two main distribution centres, the Mamre area in the south and the mountains west of the Olifants River in the north. Regional separation of the two series in the complex is only partial. The "plumosa" series is concentrated in the south with some outliers in the far north, and the "hispida" series in the north with outliers in the south. It was decided for reasons of expediency to regard the complex as one species with six subspecies based on a single character disjunction occurring with some degree of regional separation over a wide part of the distribution range of the species (Fig. 4). This classification, though not final, attempts to show the type of variation that occurs in this complex. Only a thorough biosystematic study of the populations will solve the problems and should either confirm or correct the above classification I have given.
In floral and foliage characters this is a very variable taxon in which six subspecies are recognized. Branches pubescent to tomentose with short stout plumose hairs in between, rarely just tomentose. Leaves 3-nate, mostly erect and adpressed, pubescent to canopubescent and ciliate with short, occasionally long, stout plumose hairs, becoming glabrous and often scabrid-edged with the cilia falling off leaving short truncate setae. Bracteoles pubescent, rarely glabrous, ciliate with short stout plumose hairs, rarely also with similar hairs on the abaxial surface. Sepals pubescent, rarely glabrous, ciliate with stout plumose, rarely subplumose, hairs, with a few shorter ones scattered over the abaxial surface.

Subsp. plumosa is characterized by having mostly plumose, occasionally simple, cilia on the leaf margins and no glands on the calyx. The cilia are present at least in the young stages as they often fall off leaving minute truncate setae which can easily be overlooked. This latter character was used by Bentham in creating his G. serrulata.

Branches pubescent, becoming glabrous, with short stout plumose hairs in between when older. Leaves 3-nate, adpressed but sometimes slightly recurved, pubescent, becoming glabrous on the abaxial surface, ciliate with short stout plumose hairs, occasionally gland-tipped when young, sometimes falling off when older and remaining as short truncate setae. Bracteoles and sepals pubescent mainly at the apex, ciliate with simple to sparsely plumose gland-tipped hairs on the margins and slightly shorter ones on the abaxial surface.

Subsp. irrorata is recognizable by the gland-tipped hairs on the calyx and lack of stout hairs on the abaxial surface of the leaves.

Key to the subspecies

Material of this taxon had been, until now, placed under G.
hirta Klotzsch due to the key character of a glandular calyx without taking into account the significant differences in the leaf indumentum. In subsp. irrorata the leaves are typical of the "plumosa" kind in which the stout hairs are confined to the margins of the leaves and do not occur on the abaxial surface as they occur in subsp. hirta (G. hirta Klotzsch).

The collection, Compton 9530, is intermediate between subsp. irrorata and subsp. plumosa in having glands only on the abaxial surface of the calyx. There appears to be no intermediate between this taxon and subsp. hirta. However, after a biosystematic study this taxon may be shown to be a hybrid between subsp. plumosa and subsp. hirta.

The three taxa, subsp. plumosa, subsp. irrorata and subsp. hirta, are sympatric on the sandy flats of the Mamre area. These flats have unfortunately been decimated by alien vegetation and human activity. The pressure on the area from industrial and urban development is now very great. A thorough biosystematic study of the populations from this area was not possible and probably never will be possible for a more objective assessment of their relationships.

Branches pubescent with simple hairs only. Leaves 3-nate, adpressed, lanate with simple hairs, occasionally with a few tufts of stouter hairs on the margins. Bracteoles pubescent and lanate at the apex. Sepals very sparsely pilose, ciliate with irregularly and sparsely plumose hairs, rarely with a few similar hairs on the abaxial surface.

Subsp. eciliata is distinct in the species for having leaves which do not possess stout hairs on either the margins or abaxial surface. The pubescence is very short, lanate and crisped. There is, however, an occasional tuft of hairs on the margins of the leaves but not similar to those found in the rest of the species.

This taxon is somewhat anomalous in that it has a similarity to the G.
plumosa complex, in which the closest affinity is with the material formerly known as G. solivaga N.E.Br., now forming part of G. plumosa subsp. plumosa. The crisped pubescence, lack of distinct stout hairs and the tufts on the leaves are similar to the condition found in G. ciliaris subsp. ciliaris which occurs in the same area. The broad calyx lobes are similar to the broad sepals in the material formerly known as G. dregeana Benth. The sepals of the latter are, however, free or very slightly joined at the base.

It was decided to place this taxon under G. plumosa on the basis of the fused calyx segments and to leave G. ciliaris to be characterized by its free sepals. The taxon is, however, a close link between these two species and points to the need for a thorough biosystematic study of all the Olifants River taxa to understand their relationships.

Subsp. hispida may be distinguished by its 4-nate leaves and calyx thickly covered with densely plumose hairs. In all the material examined the leaves were 4-nate. In subsp. pentheri some specimens have been found to possess 4-nate leaves on branches with mostly 3-nate leaves. The subspecies has the largest leaves, flower-heads and flowers in the species.

Branches pubescent, rarely glabrous, with long stiff plumose eglandular or gland-tipped hairs in between. Leaves 3-nate, rarely also 4-nate, pubescent when young, ciliate with stout hairs and clothed with a few similar hairs on the abaxial surface, the hairs being simple and gland-tipped to plumose and eglandular, often breaking off and remaining as short truncate setae. Bracteoles and sepals mostly glabrous, rarely sparsely pubescent, ciliate with simple to plumose cilia which may be gland-tipped, occasionally clothed with a few similar shorter hairs on the abaxial surface down the middle.

Subsp. pentheri is characterized by its 3-nate leaves, which are very rarely 4-nate on the same branch, its calyx which is more
sparsely hairy and less plumose than in subsp. hispida, and in the glandular form by having most of the hairs on the margins of the sepals. This subspecies is very variable in the form of the hairs on the leaves and calyx. In Flora Capensis Brown recognized two separate species based on these hairs, his own G. pilifolia with its simple to plumose eglandular hairs and, later in the addenda, Zahlbruckner's G. pentheri with its gland-tipped simple to subplumose hairs. Since Flora Capensis, more material of this group has been collected and has exhibited a complete range between the two extremes, thus necessitating a reduction of G. pilifolia to synonymy under G. pentheri, which itself had to be reduced to subspecific rank in the G. plumosa complex.

The relationship between the glandular forms of subsp. pentheri and subsp. hirta is very close and it is only with some careful examination that they can be distinguished. The only character which shows any discontinuity is the distribution of the hairs on the calyx. In subsp. pentheri the stout hairs are mostly confined to the margins of the calyx lobes with the hairs on the abaxial surface being few and shorter. In subsp. hirta the hairs are more or less evenly distributed over the calyx and are of the same length. This relationship is interesting because the two taxa are widely separated.

Branches pubescent to tomentose with long stiff plumose gland-tipped hairs in between. Leaves 3-nate, pubescent when young, becoming glabrous, ciliate with long stout simple to plumose gland-tipped hairs and clothed with similar hairs on the abaxial surface, erect to spreading-recurved. Bracteoles and sepals puberulous, sometimes sparsely so, ciliate and evenly clothed on the abaxial surface with numerous short stout simple to sparsely plumose gland-tipped hairs.

Subsp. hirta is characterized by its glandular stout mostly simple hairs evenly distributed on the
calyx. It differs only in this respect from subsp. pentheri, and from subsp. irrorata in having gland-tipped hairs on the margins and abaxial surfaces of the leaves.

The relationship between subsp. hirta and subsp. irrorata is very close, with the flowers being almost identical. The difference in the leaves, however, is distinct. In subsp. irrorata the leaves are typical of subsp. plumosa with short stout plumose cilia. As all three taxa are sympatric on the sandy flats near Mamre, there is a possibility that hybridisation and introgression may occur. A thorough biosystematic study of populations from the area will have to be carried out to ascertain the relationships of these taxa.

Blaeria ciliaris L.f., Suppl. 122 (1782).

Shrublets mostly low growing and compact or erect up to 75 cm high. Branches subglabrous to pubescent, occasionally arachnoid, the hairs thick and matted, erect or retrorse, or very sparse and erect, sometimes with stouter longer hairs in between which are either plumose or simple and gland-tipped. Leaves 4-nate, erect and adpressed, sometimes imbricate to spreading-recurved, 1-4,5 mm long with the petiole very short to 0,5 mm long, from linear to ovate to obovate, very variable in the indumentum, pubescent with dense crisped retrorse hairs or puberulous with erect hairs, eciliate or rarely with small compound tufts on the margins or sometimes with stout plumose cilia with few to many spreading plume branches, the cilia often falling off and remaining as short setae, all becoming more or less glabrous with age, sometimes subglabrous to glabrous and shiny on the abaxial surface and ciliate and clothed on the abaxial surface with stout simple gland-tipped hairs. Flowers 3-12 in capitate, sometimes nodding, heads at the ends of lateral branchlets; bracteoles 3, subequal to markedly unequal in the outer flowers of the inflorescences to equal in the inner flowers, mostly median to remote, adpressed or recurved, the median 1,3-5 × 0,45-2,3 mm, small and oblong to narrowly ovate with a relatively large keel-tip and no markedly expanded base, to broadly elliptic or ovate with a broad flat base and relatively small but distinct keel-tip, from almost glabrous to puberulous all over, sometimes with a distinct apical tuft of lanate hairs, sometimes ciliate with short to long stout simple to plumose eglandular or gland-tipped hairs, rarely with a few similar hairs on the abaxial surface at the keel-tip; the pedicel 1,0-2,5 mm long, puberulous to sparsely glandular pilose. Calyx 4-partite, sometimes slightly joined at the base, 1,5-4,3 × 0,4-2,3 mm, very variable in size and indumentum, small narrowly oblong to oblong-ovate to broadly elliptic, or large oblong-elliptic to broadly elliptic and ovate, slightly keel-tipped, occasionally with a knob-like apex, glabrous to pubescent, sometimes with a distinct apical tuft of lanate or straight hairs, ciliate with long stout simple to plumose crooked or straight hairs which are eglandular or gland-tipped, plume branches long and spreading or short and erect, sometimes clothed with similar hairs on the abaxial surface, the apex devoid of cilia or ciliate, the cilia as long as, mostly longer than, the width of the sepals. Corolla 4-lobed, 2,5-7 mm long, constricted in the middle to two-thirds of the way up, sometimes inconspicuously so, inflated below in the lower part and often 4-angled, the angles alternating with the sepals, cyathiform above the constriction, pubescent outside mainly in the middle region, pubescent to pilose, rarely subglabrous, inside around the constriction; lobes erect to slightly spreading, broadly to narrowly deltoid, smooth to slightly crenulate, obtuse, occasionally emarginate. Stamens 4, free; filaments mostly linear, expanded at the apex at the point of attachment to the anther, sparsely pilose to villous; anthers manifest, bipartite, 0,8-1,5 mm long, mostly oblong, scabrid to long scabrid, muticous or aristate; awns up to half the length of the cell; pore up to half the length of the cell; pollen grains single. Ovary 2-celled with a single ovule in each cell, mostly compressed, ovoid to oblate, obtuse, glabrous to pilose at the apex; style filiform, glabrous, far exserted; stigma subsimple to capitellate. Figs 9-16.

A species forming low compact semi-spreading to erect shrublets up to 0,5 m occurring in sandy areas in mountains between Porterville and Nieuwoudtville in the western Cape, flowering from August to November.

G. ciliaris is characterized by having the calyx segments free or only very slightly joined at the base, the corolla-tube more or less distinctly constricted in the middle and the cilia on the calyx longer than the width of the sepals.

G. ciliaris is one of the oldest described species among the minor genera of the Ericoideae. Strangely the species is very isolated and far removed from Cape Town where other, more accessible species existed, but were overlooked for so long. Despite its long standing, the species has been very much confused until now. Linnaeus, the younger, stated in the protologue that the species had 3-nate leaves, based undoubtedly on a Thunberg specimen. Thunberg himself later published a fuller description from his own specimen stating that the leaves were 4-nate. This error was subsequently repeated by numerous authors until Rach (1853) corrected this.

A similar situation exists with G. ciliaris as occurs in G.
plumosa. In Flora Capensis Brown recognized six species which he grouped on the character of a …

It would appear that we are dealing with an aggregate species of spatially separated non-interbreeding populations which are in the first stages of evolving into a number of distinct entities which may eventually become sufficiently distinct to be regarded as separate species. At present, similarities are too close to justify this latter classification.

An important feature and character of use in delimiting the subspecies is the nature of the cilia on the calyx, something which is easily observable and yet somewhat difficult to define (Fig. 11), particularly in regard to the plume side-branches.

[Key to the subspecies: subsp. involuta, subsp. bolusii, subsp. ciliaris, subsp. ciliciiflora, subsp. multiglandulosa]

Branches pubescent with retrorse simple hairs, sometimes arachnoid. Leaves adpressed, 1-2,5 mm long, narrowly ovate to oblong to obovate, hairy with dense crisped retrorse hairs, often becoming glabrous on the abaxial surface. Bracteoles equal to slightly unequal, median to remote, often recurved, the median 1,3-1,8 × 0,45-0,8 mm, narrowly ovate to elliptic to oblong with a relatively large keel-tip and no markedly expanded base, mostly pilose with crisped hairs, very rarely ciliate with a few short stout hairs; pedicel up to 1,5 mm long. Sepals 1,5-2,2 × 0,4-1,1 mm, narrowly oblong to oblong to oblong-ovate, often with a swollen knob-like apex, very slightly keel-tipped, pilose at the base and sometimes in the upper half, the apex clothed with a tuft of lanate hairs, ciliate with simple to plumose stout hairs with irregular short and long spreading plume branches, eglandular or gland-tipped, cilia often crooked, as long as or longer than the width of the sepal, occasionally with some similar hairs on the abaxial surface, the apex usually devoid of cilia, rarely with cilia. Corolla up to 3,5 mm long, distinctly or indistinctly
constricted in the middle, the constriction sometimes not visible in dried material, pubescent outside in the middle region, pilose to almost glabrous inside around the constriction. Anthers 0,8-0,9 mm long, scabrous, aristate, rarely muticous; awns up to half the length of the cell. Ovary glabrous to pilose at the apex. Fig. 13.

This subspecies is distinguished by the canopubescent leaves, the hairs being crisped and not glandular, the crisped pubescence on the bracteoles, with rarely short stout hairs, and the relatively short calyx hairs, the hairs being simple and straight to crooked and irregularly plumose with spreading branches (Fig. 12).

Variation within the subspecies occurs in the width of the sepals where the broad form (Lavis 19811) merges into what was G. dregeana, recorded as a single collection from the Gifberg. Similarly, variation in the pubescence on the ovary apex and the presence or absence of anther awns provided a gradation with G.
dregeana.

Branches puberulous to pilose with simple hairs, sometimes with short stout plumose hairs admixed. Leaves extremely variable, 1,2-3,5 mm, linear-lanceolate to ovate or obovate, pubescent, sometimes with dense crisped hairs or almost lanate, becoming glabrous, ciliate with stout plumose hairs or just compound tufts, rarely only pubescent, cilia variable, mostly straight with few to many long spreading plume branches, eglandular, often falling off and remaining as stubs. Bracteoles subequal to very unequal, adpressed to the calyx, the median elliptic to broadly elliptic or ovate, 2,0-3,2 × 1,0-1,7 mm, with broad flat base and relatively small but distinct keel-tip, the laterals oblong-elliptic to obovate, sometimes oblique, glabrous to puberulous, ciliate in the upper half with stout plumose hairs. Sepals 2,1-3,0 × 0,9-2,0 mm, oblong-elliptic to very broadly elliptic, pubescent, rarely subglabrous, ciliate with short to long stout plumose hairs, rarely subplumose, eglandular, plume branches spreading, relatively long, rarely short and erect, with similar hairs on the abaxial surface, ciliate at the apex and with an apical tuft of straight, not lanate, hairs. Corolla 3-4 mm long, pubescent to pilose inside and outside in the middle region. Anthers muticous, rarely minutely awned. Ovary glabrous.

This subspecies is distinguished by having a large median bracteole more than 2 × 1 mm but less than 3,2 mm long with a broad base and relatively small keel-tip and with marginal cilia, leaves mostly ciliate with stout plumose hairs at least when young, and sepals less than 3 mm long with very plumose stout hairs with spreading branches. This is a very variable taxon, particularly as to the leaves, some being close to subsp. ciliaris with the short crisped indumentum and only a few tufted cilia. The majority of specimens has awnless anthers whereas those of subsp. ciliaris are mostly awned
except in the southern populations. The bracteoles and leaves with distinct cilia serve to distinguish subsp. bolusii. The larger linear-leaved form tends towards subsp. involuta. A superficial similarity exists between subsp. bolusii and G. plumosa subsp. pentheri which occurs on the west side of the Olifants River valley. The free, as opposed to joined, sepals with numerous or few abaxial hairs serve to distinguish the two taxa.

Branches pubescent with simple hairs. Leaves adpressed, up to 4,5 mm long, mostly lanceolate, straight, sparsely puberulous, becoming glabrous, ciliate with short stout plumose hairs which become setae. Bracteoles unequal, median, adpressed to the calyx, the median 4-5 × 1,7-2,3 mm long, elliptic to ovate with an expanded flat base and distinct keel-tip, the laterals about 3 mm long, oblong with a slight keel-tip, all glabrous except for a few hairs on the keel-tip, ciliate with long slightly plumose straight hairs, the plume branches very small and pointing towards the apex of the cilium. Sepals 4,3 × 1,9-2,3 mm, broadly elliptic, slightly keel-tipped, glabrous, with apical tuft of short straight hairs, ciliate with long straight slightly plumose hairs, plume branches very small and pointing towards the apex of the cilium. Corolla 6-7 mm long, constricted two-thirds of the way up, pubescent in the middle region outside, villous inside at the constriction. Anthers about 1,5 mm long, muticous, scabrous. Ovary glabrous.

This subspecies is characterized by its overall larger flowers and inflorescence, the bracteoles being longer than 4,2 mm and the sepals longer than 4 mm, both with a broad base and relatively small keel-tip.
The larger size is the only differentiating character between this subspecies and some forms of subsp. bolusii. It also has close similarities with some forms of subsp. ciliciiflora but, again, the size difference is pronounced and the bracteole shape slightly different. The sepal cilia (Fig. 12) are more closely related to those of subsp. ciliciiflora than those of subsp. bolusii.

Subsp. involuta is very restricted in its distribution, possibly occurring in only one or two populations on the western side of the Krakadouw range. No recent collections have been made. These populations are allopatric to those of subsp. bolusii and subsp. ciliciiflora, making interchange of genetic material highly improbable.

Branches pubescent with erect to retrorse short simple hairs. Leaves adpressed, 1,5-3,0 mm long, linear to ovate, mostly pubescent with adpressed crisped hairs, becoming somewhat glabrous on the abaxial surface, rarely glabrous when young, occasionally ciliate with short stout gland-tipped hairs or just apiculate. Bracteoles remote or median, unequal to subequal, slightly recurved, 1,4-2,3 × 0,5-0,8 mm, oblong to elliptic, the laterals linear to linear-elliptic, with a distinct keel-tip and flat base, pubescent to subglabrous but with an apical tuft of crisped hairs, ciliate with long stout subplumose hairs; pedicel long pilose. Sepals 1,6-2,5 × 0,6-1,3 mm, oblong to broadly elliptic with a slight keel-tip, pubescent to glabrous with or without an apical tuft of straight hairs, ciliate with long straight plumose hairs, plume branches small, forward pointing, rarely subplumose, rarely gland-tipped. Corolla 2,5-3,0 mm long, distinctly constricted and 4-angled.

This subspecies is characterized by the small usually subequal narrow-based bracteoles with conspicuous long plumose cilia, by the leaves, when glandular, with the glands apical or marginal only and usually short-stalked to sessile, and by the sepals
with long plumose hairs having very small erect branches.

Relationships with the other subspecies are in three directions and are somewhat difficult to explain in the case of subsp. ciliaris due to the geographical isolation of the latter. The relationships with subsp. involuta and subsp. multiglandulosa are understandable due to the reasonably close proximity of the populations.

There is considerable variation within this subspecies, which necessitated the inclusion of G. apiculata and G. zeyheriana in the synonymy. The typical form of subsp. ciliciiflora possesses leaves with a crisped, sometimes retrorse, indumentum and a calyx with numerous long sparsely plumose hairs with plume branches forward-pointing. In some forms the leaves possess sessile or subsessile marginal glands and a large apical gland.

Branches puberulous to subglabrous with stouter gland-tipped hairs admixed. Leaves recurved-spreading, sometimes straight and adpressed, 1,5-4,5 mm long with petiole 0,5 mm long, lanceolate to ovate, mostly glabrous and shiny on the abaxial surface and sparsely puberulous on the adaxial surface, rarely entirely glabrous, occasionally puberulous all over when young, ciliate and clothed on the abaxial surface with short to long stout simple gland-tipped hairs, sometimes those on the abaxial surface falling off in erect leaves. Bracteoles median to remote, subequal to unequal, 0,8-2,5 mm long, the laterals mostly 1,0 mm long, linear to oblong-ovate to oblong-elliptic with an enlarged keel-tip, glabrous to pilose in the lower half, sometimes crisped at the apex, ciliate with stout gland-tipped simple hairs with a few on the keel-tip; pedicel up to 2,5 mm long, sparsely puberulous with simple and gland-tipped hairs. Sepals 1,9-2,5 x 0,5-0,8 mm, oblong to oblong-ovate with a slight keel-tip, glabrous, rarely with a few scattered hairs, ciliate with long straight subplumose to simple hairs with similar hairs on the abaxial surface, hairs
often gland-tipped. Corolla about 4,5 mm long, distinctly 4-angled, pubescent below the constriction, sometimes sparsely so and confined to the angles, villous inside. Anthers about 0,8 mm long, scabrous, muticous. Ovary glabrous. Fig. 16.

This subspecies may easily be recognized by its leaves, which are erect to recurved-spreading, mostly glabrous but with distinct long gland-tipped hairs on the margins and abaxial surface, by its small bracteoles up to 1,8 x 1,0 mm which have stout subplumose to simple hairs on the margins and abaxial surface of the keel-tip, and by the simple to subplumose, eglandular or gland-tipped, hairs on the sepals.

The material available varies somewhat in floral and foliage characters. The leaves are always gland-ciliate with long stout hairs on the margins and abaxial surface and are mostly distinctly recurved-spreading. But a few specimens have erect adpressed leaves like those of subsp. ciliciiflora. The calyx cilia may be simple or occasionally plumose with plume branches like those in subsp. ciliciiflora. It was found that the only distinguishing character is the presence of abaxial hairs on the leaves and bracteoles in subsp. multiglandulosa.

Small compact shrublets to 30 cm high. Branches pubescent to tomentose with reflexed hairs, occasionally with stout plumose to gland-tipped subplumose hairs in between. Leaves 3-nate, adpressed, 1,5-2,0 mm long, elliptic to oblong-obovate, pubescent becoming glabrous on the abaxial surface, ciliate with a few very short stout plumose hairs or with short stout gland-tipped hairs; petiole very short, pubescent, sometimes with gland-tipped hairs. Flowers in small terminal heads of 3-6 (9) on the ends of lateral branchlets, pink, occasionally white; pedicel up to 1,0 mm long, pubescent; bracteoles subequal to unequal, median but adpressed, 1,0-1,4 mm long, narrowly oblong to elliptic-oblong, often with an enlarged keel-tip, acute or
obtuse, the laterals linear, pubescent, ciliate with short stout plumose hairs or subplumose gland-tipped hairs. Calyx 4-partite; lobes 1,2-1,8 x 0,3-0,65 mm, mostly narrowly oblong, occasionally elliptic-oblong or linear, acute, pubescent with longer straight hairs at the apex, ciliate with stout plumose eglandular hairs to ciliate with stout subsimple gland-tipped hairs, all shorter than the width of the lobe, sometimes with similar but shorter hairs on the abaxial surface. Corolla 2,4-2,7 mm long, constricted at the middle, ellipsoid below, urceolate above, pilose to villous in the middle region and slightly up the back of the lobes, pilose inside around the constriction; lobes very broad, obtuse, erect-spreading. Stamens 8, free; filaments linear, much dilated at the point of attachment, sparsely pilose; anthers manifest, about 0,7 mm long, with oblong, parallel to spreading cells, scabrid-edged, aristate; awns small to obsolete, arising from the apex of the filaments, scabrid; pore relatively small, about one quarter the length of the cell. Ovary 2-celled with a single pendulous ovule in each cell, compressed, broadly ovoid, pubescent on top and seated on a distinct nectariferous disc; style filiform, glabrous, far exserted; stigma simple to capitellate. Fruit a hard verrucose nut.

G. incana may be distinguished from related taxa by its small flowers, small narrow sepals less than 2,0 x 0,65 mm which have slender straight cilia as long as but mostly shorter than the width of the sepal, and by the straight hairs forming the apical tuft on the bracteoles and sepals.

The species affords a good example of geographical vicarism with its closely related species, which are well separated spatially, e.g. G. ciliaris subsp. ciliaris, G. rigida and G. nivenii.

Difficulty was experienced in distinguishing G. incana from G.
ciliaris subsp. ciliaris. The former occurs only on the sandy coastal flats adjacent to the Cape Peninsula, whereas the latter is confined to the summits of mountains around Vanrhynsdorp and Nieuwoudtville. N. E. Brown unfortunately misinterpreted the corolla shape in G. ciliaris subsp. ciliaris and so isolated it from G. incana in his revision. Several characters were examined in detail and found to have a certain degree of disjunction and, when used in combination, served to distinguish the two taxa. In G. incana the leaves possess hairs which are mostly erect, as opposed to the crisped retrorse hairs in G. ciliaris subsp. ciliaris. The leaves are usually edged with short stout plumose hairs or gland-tipped hairs, whereas in G. ciliaris subsp. ciliaris this rarely occurs. The calyx in G. incana is mostly pilose with a tuft of longer straight hairs at the apex and with cilia shorter than the width of the sepal. In G. ciliaris subsp. ciliaris the calyx is very sparsely puberulous with a distinct apical tuft of long interwoven crisped hairs and with cilia longer than the width of the sepal.

To the east there occur two closely related species, G. rigida and G. nivenii, both in restricted separate areas. G. incana differs from both these species in the size of the sepals, which are less than 2,0 x 0,65 mm, and in the texture and indumentum of the leaves. In the glandular form of G. incana the leaves are very similar to those in G. rigida but are not so inflated, are more pubescent and have the glands confined to the margins.

The sepals in this species are the smallest and narrowest in the genus, in one specimen being only 0,3 mm wide. This feature makes the corollas more easily visible than in other species. The anthers are unique in the genus in having the smallest pores relative to the size of the cell.
Two fairly distinct forms occur in the material so far collected. The specimens from the north around Mamre have bracteoles and sepals with more plumose cilia which are eglandular. Those from the south in the Kraaifontein area have subplumose gland-tipped hairs on the sepals, bracteoles and on the leaves. This variation is, however, clinal with no distinct disjunction between the two extremes.

N. E. Brown described G. alba from a single collection made by Admiral Grey and based it on the single character of white flowers, including the anthers. White-flowered forms of G. incana have been collected, but these have had pale brown anthers. In all other characters G. alba is identical to the glandular form of G. incana and is presumed to be only an aberrant albino of this species.

G. incana is fairly restricted in its distribution, occurring only on the recent sand deposits on the coastal flats at Sir Lowry's Pass, Eerste River, Kraaifontein, below Tygerberg and near Melkboschstrand. The records from Simonstown, Kleinmond, du Toit's Kloof and Vogelvlei are very doubtful.

4. Grisebachia rigida N.E.Br. in Fl. Cap.
4,1: 343 (1906).

Shrublets compact and low to erect, up to 50 cm high. Branches pubescent with simple recurved hairs, very occasionally gland-tipped, rarely with stout plumose hairs admixed. Leaves 3-nate, up to 3,0 x 1,0 mm, mostly elliptic to narrowly elliptic, occasionally ovate or oblong-obovate, thick and fattened, pubescent or minutely scabrous on the abaxial surface, rarely glabrous and shiny, pubescent on the adaxial surface, ciliate with 7-9 short stout gland-tipped hairs and with some scattered over the abaxial surface; petiole very short, shortly glandular-pubescent. Flowers 1-8-nate on the ends of lateral branchlets, pink, rarely white; pedicels about 1 mm long, pubescent with some plumose hairs at the apex; bracteoles equal to slightly unequal with the median slightly broader, adpressed to the calyx, ovate to oblong-ovate to elliptic to narrowly oblong-elliptic, obtuse, glabrous or sparsely pubescent, ciliate with stout simple to very slightly plumose hairs which are mostly gland-tipped, rarely sparsely pilose inside, keel-tipped. Calyx 4-lobed, slightly joined at the base; lobes ovate-elliptic to oblong-elliptic to broadly elliptic, 2,1-2,8 x 0,9-2,0 mm, often with incurved margins, subacute, keel-tipped, glabrous or sparsely pubescent mostly in the lower half, ciliate with broadly based stout hairs, almost fimbriate in places, and with similar hairs up the centre of the abaxial surface, hairs mostly simple or very slightly plumose, rarely gland-tipped, often crooked. Corolla up to 4,4 mm long, often oblique, distinctly constricted in the middle; tube up to 3 mm long, globose-ellipsoid, spreading above the constriction, 4-angled, puberulous outside with glabrous patches opposite the sepals, pilose on the inside mainly at the point of constriction; lobes erect-spreading, slightly crenulate and emarginate, about 1 mm long, very broadly obtuse, pubescent at the base in the middle. Stamens 4; filaments
linear, sparsely to densely pilose, up to 2,5 mm long; anthers manifest, attached dorsally one third of the way up, variable in size, 0,7-1,1 mm long, with oblong to obovate, parallel or spreading cells, almost glabrous to scabrous, occasionally with some long transparent hairs on the edges, muticous or aristate; awns up to 0,4 mm long or one third the length of the cell, arising from the filament apex, spreading laterally to descending, minutely scabrous; pollen grains single. Ovary 2-celled with a single pendulous ovule in each cell, 0,6 x 0,8-0,9 x 0,9 mm, broadly ovoid to ellipsoid, compressed, obtuse, variously pilose at the apex, very unevenly wrinkled; style up to 4 mm long, glabrous, exserted; stigma slightly capitellate. Fruit a hard verrucose nut.

G. rigida is characterized by its free or slightly joined sepals, which are more than 2,0 x 0,65 mm with very broad flattened subplumose or simple cilia, and by its fattened leaves which are ciliate with short gland-tipped hairs and with a few similar hairs on the abaxial surface.

The species differs from G. nivenii in the leaf cilia, the less plumose calyx and the glabrous inner surface of the sepals. It is closely related to G. incana, from which it is easily distinguished by its broader sepals with their broad cilia and by the leaves.

G. rigida varies in the size of the sepals, where in the type, Bolus 5193, they may be as much as 2,7 x 2,0 mm. The anthers also vary in size, shape and in the occurrence of awns. A few specimens have anthers with long colourless hairs, a feature very rarely seen in the Ericoideae.

The species occurs on the recent sandy alluvial flats at the eastern base of the Stettyns Mountains between Worcester and Villiersdorp where isolated pockets of fynbos grow (Fig.
20). The surrounding area possesses mountain renosterveld on the shales and Witteberg quartzites. In this area the species is very susceptible to extinction due to encroaching agriculture and burning. Bolus's record between French Hoek and Villiersdorp has not been reconfirmed. Although somewhat removed from the main populations and coming from a completely different valley system, this record could be correct due to the numerous sandy alluvial patches in the area.

5. Grisebachia nivenii N.E.Br. in Fl. Cap. 4,1: 343 (1906).

Shrublets compact, erect, up to 50 cm high. Branches minutely pubescent with retrorse hairs. Leaves 3-nate, adpressed, up to 1,7 x 1,2 mm, broadly ovate to elliptic, very rounded and thick, obtuse or acute, glabrous on the abaxial surface, pubescent on the adaxial surface, ciliate with 5-6 short stout plumose cilia; petiole very short and broad, ciliate. Flowers in terminal globose heads of 3-8 on the ends of lateral branchlets, pink, rarely white; pedicels very short, sparsely pubescent, sometimes with stout plumose hairs at the apex; bracteoles adpressed, about 1,3 mm long, subequal, rarely markedly unequal, mostly oblong-elliptic, sometimes the laterals obliquely so, the median rarely angular-ovate, all keel-tipped, obtuse or acute, puberulous outside and inside, ciliate with stout plumose hairs. Calyx 4-lobed, sometimes slightly joined at the base; lobes broadly elliptic to narrowly oblong-elliptic, 2,1-2,9 x 0,65-1,3 mm, acute, keel-tipped, puberulous on the abaxial surface mainly towards the base, sparsely puberulous on the inside, ciliate with stout broad plumose hairs with similar hairs over the adaxial surface towards the centre, rarely gland-tipped. Corolla up to 4,0 mm long, sometimes oblique, distinctly constricted in the middle; tube up to 3 mm long, globose-ovoid to ellipsoid, urceolate above the constriction, pubescent to pilose in the middle and lower part and up the back of the lobes and inside the tube around the constriction; lobes broadly obtuse, erect or slightly spreading. Stamens 4; filaments linear, sparsely to densely pilose; anthers manifest, about 0,8 mm long, oblong, dorsifixed one third of the way up, papillate, aristate; pore about a third of the length of the cell; awns small, spreading to deflexed, arising from the filament at the point of attachment to the anther; pollen grains single. Ovary 2-celled with a single pendulous ovule in each cell, 0,7 x 0,6 mm, ovoid, compressed, obtuse, sparsely pilose at the apex; style exserted, up to 4 mm long; stigma capitellate or subsimple. Fruit verrucose, hard.

…abaxial surface and have 5-6 very short stout plumose cilia, and by its more or less free sepals which have short stout flattened plumose cilia.

The species is closely allied to G. rigida and G. incana. It differs from G. rigida in having glabrous shiny leaves, even when young, with plumose cilia and no gland-tipped cilia. The calyx cilia are more plumose than in G. rigida and there is a sparse pubescence on the inside of the sepals. From G. incana it differs in leaf details, the latter having pubescent non-fattened leaves. The sepals in G. nivenii are broader, more than 2,0 x 0,65 mm.

G. nivenii is geographically isolated as it occurs only on the sandy flats in the hilly country south-east of Swellendam, where it is far removed from its closest allies, G. rigida and G.
incana. Like nearly all the other species in the genus it is confined to a few small patches of alluvial sand.

The main morphological difference between these two taxa lies in the form of the inflorescence. In A the flowers are generally 1-4-nate at the ends of short axillary branchlets which are often clustered together in a pseudospike along the main and lateral branches. In B the flowers are terminal on the ends of short branchlets with up to 36 flowers forming a cluster. These clusters usually hang downwards, making the plants less conspicuous than those of taxon A. Except for one vicariad, plants of taxon B are glandular, particularly on the margins of the calyx. The glands terminate plumose cilia. In taxon A the few glands are confined to the short simple cilia on the calyx.

Taking into account the habitat and morphological differences, I have decided to regard the two taxa A & B as closely related species referred to G. parviflora and G. minutiflora respectively, and to recognize several subspecific taxa.

Low compact to spreading shrublets up to 20 cm high, rarely up to 50 cm. Branches erect or spreading, often entwining among the surrounding vegetation, with numerous short branchlets, pubescent, sometimes with glands admixed. Leaves 3-nate, up to 3 mm long with the petiole, erect to spreading, straight to markedly recurved, linear to lanceolate, rarely ovate, acute, flat to trigonous, glabrous or at first puberulous, ciliate, sometimes with gland cilia or sessile glands, sometimes gland-apiculate; petiole adpressed-ciliate.
Flowers 1-4-nate at the ends of extremely short branchlets arranged in a spike-like manner along the branches; pedicels very short, less than 0,5 mm long; bracteoles 3, equal to subequal, adpressed to the calyx or slightly spreading, very variable in size, up to 1,5 mm long, in shape from linear to elliptic-oblong to ovate, acute to obtuse, glabrous or pubescent, ciliate, with or without sessile or stalked glands, white. Calyx 4-lobed for its length, campanulate, 0,6-1,8 mm long and 0,3-0,8 mm wide, glabrous or pubescent, white; lobes erect, variable in shape from elliptic-oblong to ovate to subquadrate, the apex acute, cuspidate or subtruncate with an apiculus, with or without a distinct keel-tip and with or without a distinct median ridge, ciliate, with or without sessile or stalked glands. Corolla 4-lobed, up to 3 x 1,7 mm, mostly 2 x 1,2 mm, funnel-shaped, sometimes broadening more above the middle, nowhere constricted, occasionally tubular-ellipsoid; tube glabrous inside, sometimes glabrous outside or puberulous in the lower half to fully pubescent; lobes erect to incurved, rarely slightly spreading, obtuse, broader than long, glabrous, entire. Stamens 4, included or manifest; filaments filiform, glabrous; anthers up to 1,1 mm long with oblong, parallel, separate cells, basal, minutely scabrous, aristate; awns the length of the cell to rudimentary, ciliate; pore the length of the cell; pollen grains single. Ovary 2-celled with a single ovule per cell in one subspecies, rarely 2 ovules per cell and sometimes 3 cells, ovoid to ellipsoid, up to 1 x 0,7 mm, obtuse, glabrous to puberulous on the top, sometimes thickly pubescent; style far exserted, straight or curved, glabrous, up to 4 mm long; stigma minutely capitate. Figs 22 & 23.
A species generally low, sparse and sprawling in habit, occasionally compact, occurring frequently on dry stony slopes and flats on mountains from the Cedarberg to Du Toit's Kloof eastwards to near Swellendam, flowering from as early as July to as late as January. A very variable taxon in which three subspecies are recognized.

G. parviflora is characterized by the flowers being 1-4-nate in small heads clustered along the branches in a congested spike-like manner, the corolla tube not contracted in the middle and the simple cilia on the calyx lobes. The species is closely allied to G. minutiflora and has a remarkable superficial resemblance to Eremia curvistyla (N.E.Br.) E. G. H. Oliver, differing basically in the number of stamens. All three species are sympatric.

The true identity of this species has been overlooked until now as it has nearly always been referred to the later G. eremioides MacOwan. Klotzsch, in describing it, placed the species in Don's genus Eremia, using as his type an Ecklon & Zeyher collection from "Hills between Puspas Valley and Kogmanskloof Mountains". Without seeing the type, N. E. Brown followed Klotzsch in keeping the species in Eremia. But later, in his work on Grisebachia, he stated that he had seen the type and found that the species was conspecific with G. eremioides, which he proceeded to retain under the Kew Rule. This was picked up by Druce in his search through Flora Capensis for new combinations but never applied by subsequent botanists.

Unfortunately the holotype in Berlin is no longer extant and all the Ecklon & Zeyher material distributed as Eremia parviflora Klotzsch turns out to be Anomalanthus scoparius Klotzsch. N. E.
Brown, on examining the type sent to him on loan, stated: "The description of Klotzsch is very erroneous, as the calyx is not subequal to the corolla, but considerably shorter than it, and the stamens are 4, not 8 as Klotzsch states. It is identical with Zeyher 1117, except that the leaves are straighter, like those of Schlechter 10091". This statement clears up the discrepancies in the type description and also paves the way for the typification of the species. Without any authentic duplicate material available, I consider that it is justifiable to rely on N. E. Brown's comparison and therefore select Zeyher 1117 in the herbarium at Kew as the neotype. The above published note by N. E. Brown also appears on the sheet at Kew.

Five duplicates of Zeyher 1117 are located in various herbaria and all come from "Flats between the Witsenberg and Skurfdeberg". The locality of Houw Hoek on one of the sheets in SAM, as cited by MacOwan in his protologue, is erroneous.

To date no additional material of G. parviflora has been collected in the same area as that visited by Ecklon & Zeyher. But from the variation pattern I have found in the calyx, the broadly elliptic-acuminate lobes of Zeyher 1117 could well have occurred in the holotype. The leaf difference noted by N. E. Brown is not vitally significant, as variation in this character has been noted even on the same plant. N.E.
Brown described (b) and (c), with their narrow elliptic-oblong acute lobes in combination with the possession of erect straight leaves, as a separate variety, var. grata. I have found that neither the calyx lobe shape nor the leaf arrangement shows any significant differences. There is a definite intergrading between the broad-based apiculate and the narrow elliptic-oblong acute sepals, as seen in Compton 6624 from Roodesandberg, Stokoe 6067 from the Wellington mountains and Esterhuysen 11057 from Stettynsberg. In leaf arrangement, variation from erect straight leaves through to curved slightly spreading leaves can occur on the same plant. There were therefore no grounds for keeping the two taxa separate.

Small shrublet, compact to spreading. Leaves stiffly trigonous, markedly spreading-recurved when mature. Calyx lobes subquadrate, subtruncate with a thickened apiculus, rarely very broadly angular-ovate, closely ciliate with short simple hairs, occasionally with a few subsessile glands, otherwise glabrous, rarely sparsely puberulous. Corolla glabrous, rarely sparsely puberulous. Ovary thickly pubescent on the upper half.

There is, then, in the Cedarberg a reasonable discontinuity in several characters coupled with a spatial separation. This should warrant recognition at specific level, but as there are definite similarities with certain elements to the south, I feel that recognition is only justifiable at subspecific level.

On two collections N. E.
Brown described this vicariad as var. eglandula. An examination of all the collections showed that small subsessile glands are present on the margins of the calyx.

Low compact to semi-spreading shrublet up to 20 cm high. Branches long when spreading and sometimes rooting at the nodes, pubescent, with glandular hairs intermingled when young, 3-angled when young. Leaves 3-nate, up to 3,5 mm long, the petiole 0,5 mm long, ovate to narrowly oblong, straight, erect or slightly spreading, imbricate or shorter than the internodes, subobtuse, thick, pubescent or glabrous when young with a few to numerous sessile glands on the margins, becoming glabrous except on the adaxial surface, sometimes terminating in a sessile gland. Flowers in terminal globose heads of up to 36 flowers on short branchlets, not forming congested spikes, white; pedicels almost none, up to 0,8 mm long; bracteoles 3, adpressed or recurved-spreading, equal, or very unequal in some outer flowers in inflorescences, mostly 0,5-1,7 mm long, if equal then linear, acute or subacute and minutely keel-tipped, if unequal then the median one large and leaflike and well keeled, glabrous or pubescent, ciliate towards the base with simple hairs and a mixture of simple and larger plumose and gland-tipped hairs towards the apex. Calyx 4-lobed for about half of its length, up to 1,9 mm long, pubescent, the pubescence short over the whole surface or in zones, sometimes with longish hairs; tube obconic or tubular with spreading lobes; lobes ovate-oblong to very broadly ovate, up to 0,9 mm long and 1 mm broad, erect or spreading, sometimes with pubescence on the inside at the top, ciliate with short or long cilia which are simple or variously plumose from base to apex, mostly gland-tipped, keel-tipped and thickened, acute to subobtuse. Leaves pubescent when young, occasionally with a few sessile glands on the margins. Pedicels up to 0,8 mm long; bracteoles
usually adpressed and recurved towards the apex, mostly up to 1,7 mm long. Calyx pubescent in zones or evenly, with pubescence also on the inner surface in the upper quarter, cilia plumose to the apex and not gland-tipped.

G. minutiflora is characterized by the globose inflorescences of 6-36 flowers scattered along the branches, the corolla tube not contracted in the middle, the plumose cilia on the calyx lobes and the general glandular condition of the flowering branches. It is closely allied to G. parviflora.

When Brown revised the genus he had two collections of Schlechter to examine and justifiably described them as two new species, G. minutiflora and G. nodiflora, basing them on the presence or absence of gland-tipped plumose or simple cilia on the calyx lobes. Since then no further material referable to G. nodiflora has been collected, but there have been 12 collections of G. minutiflora. The type of G. nodiflora possesses no glands on the calyx lobes, whereas in the collections of G. minutiflora there are always some glands present on the cilia, but these may be extremely small in some flowers.

The material was then examined for other characters. Brown used the differences in the size of the inflorescence heads, but this is invalid as the heads in Oliver 4105 are much larger than in Schlechter 10188. He also used the degree of feathering on the cilia, but this is very variable in G. minutiflora, from simple to almost fully plumose. Slight differences were found in the length of the pedicel, which is absent to 0,4 mm long in G. minutiflora and up to 0,8 mm long in G. nodiflora. The bracteoles of the former are usually adpressed, whereas in the latter they are approximate but curved-spreading. I decided to reduce G. nodiflora to subspecific level under G.
minutiflora because of the difference in the glandular state of the cilia and the slight discontinuities in the bracteole and pedicel characters, coupled with the allopatric distribution.

At first examination I found that there was a distinct difference in the type of pubescence on the calyces, with G. minutiflora having zones that were pubescent and glabrous and with G. nodiflora having an evenly pubescent calyx. Close examination of Schlechter 10188 duplicates showed there to be two distinct forms. In one the pubescence is zoned as in G. minutiflora; in the other it is evenly distributed over the calyx, a condition not found in G. minutiflora. This variability suggests a closer relationship between the two taxa than was previously accorded to them.

The material on sheets of Schlechter 10188 can easily be separated into two forms, A & B, on the type of pubescence and the shape of the calyx lobes. In form A the pubescence is zoned and the calyx lobes are oblong to elliptic-oblong, acute. In form B the pubescence is denser and evenly distributed over the calyx, the lobes of which are transversely broadly elliptic, obtuse. These variations can only be recognised as forms at present as they presumably came from one population. This has not been rediscovered and, until such time as it is, no further status can be given to this variation. The specimens examined in detail have been assigned to the two forms as follows:

N. E. Brown did not designate a holotype but labelled one sheet in BOL (form A only) and one in K (forms A & B) as types. From the protologue two characters can be pinpointed to determine which form he used to describe his species, i.e. "calyx lobes oblong, acute". These undoubtedly refer to form A. I have therefore chosen the Schlechter sheet labelled as the type in BOL as the lectotype.
The species was first seen by me at the Ceres Wildflower Show in October 1974 when I was doing the naming of specimens. The collector and locality unfortunately could not be traced. A few weeks later the material collected the previous year in the Swartruggens by Dr J. MacGregor was sent to me for identification.

An examination of the above collections showed that they constituted a new and very distinct species which I was not able to place satisfactorily in any known genus. Following Dr MacGregor's directions, I visited the Swartruggens and located the species in three disjunct sparse populations. A range of material was collected and examined for variations in the critical character, the 1-celled uniovulate ovary. An examination of numerous flowers produced only a few with unequally 2-celled ovaries.

The species is remarkably similar to Eremia totta (Thunb.) G. Don in the outward appearance of the flowers and leaves but cannot be placed near that species, which has 8 stamens and a 4-celled ovary. Eremia has been considerably amended to include the 1-celled Eremia curvistyla (N.E.Br.) E. G. H. Oliver but still retains the constant character of eight stamens (Oliver, 1976).

To a lesser extent the species is similar in outward appearance to Grisebachia parviflora (Fig. 22) and G. minutiflora (Fig. 24), both of which it grew with in the Swartruggens. They all possess four stamens. Until this revision, all species of Grisebachia possessed 2-celled ovaries, with very few exceptions having 3-celled ovaries.
There were thus four ways of dealing with the new species: (1) placing it under Eremia and having to amend the generic circumscription even further to include the 4-stamened condition, thus causing a breakdown in the distinction between Eremia and Grisebachia; (2) placing it under Grisebachia and amending the generic circumscription to include this 1-celled species; (3) placing it in either Anomalanthus or Syndesmanthus, genera with 4 stamens and a 1-celled ovary, but which have no resemblance to it; (4) describing the species as a separate monotypic genus.

Taking into account the implications of the above, it was decided to broaden the circumscription of Grisebachia to include the 3-celled variations, particularly in G. parviflora subsp. pubescens, and the 1-celled, rarely 2-celled, condition occurring in the new species.

In the Swartruggens G. secundiflora was found in three separate populations consisting of only a few scattered plants each. At the lower altitude the plants were growing on sandy flats with a population of G. minutiflora nearby. Higher up the mountain they were growing on sandy, rocky slopes together with some plants of G. parviflora subsp. pubescens. In all cases G. secundiflora formed decumbent yet compact shrubs up to 0,5 m high and up to 1 m across with numerous ascending to decumbent branches. The branches were strikingly bare and devoid of leaves except towards their ends. The white conspicuous secund inflorescences were subterminal. This contrasted strongly with the very compact low shrublet of G. minutiflora or the sparse spreading procumbent to semi-erect plants of G.
parviflora subsp. pubescens, which were all in fruit.

Some variation in floral characters occurs. The pubescence on the calyx may be present or absent on different twigs in the collections Oliver 6105, 6107 and 6115 and is present in MacGregor s.n. It is absent in Oliver 5044 from the flower show. The corolla tubes in the collections of Oliver and MacGregor from the Swartruggens are short, tubular and inflated in the lower half, whereas in the material from the flower show the tube is distinctly longer and narrower. As this latter material is unlocalized, no further comments on its status can be given.

Klotzsch wrongly ascribed the Willdenow specimen to the species which had, up until then, been cited as Blaeria ciliaris L.f. In his description he stated that the leaves were 4-nate, possibly repeating the slip made by Thunberg and copied by most subsequent authors. The Willdenow specimen has in fact 3-nate leaves. Rach (1855), in examining Klotzsch's specimen and that of Thunberg, stated that they were not of the same species. Brown noted that Thunberg's specimen was identical to the type in the Linnaean Herbarium. I have been able to examine the Willdenow specimen in the Berlin Herbarium. The material certainly does not belong to G. ciliaris (L.f.) Klotzsch subsp. ciliaris and is only in young bud stage, from which it is not possible to identify it with any certainty.

Fig. 1.-Variation in the size of the median bracteole from outer to inner flowers in a single inflorescence of Grisebachia plumosa subsp. plumosa. Drawn x 16 from Thompson 791 (STE).

Fig. 4.-Distribution of Grisebachia plumosa: subsp. plumosa; subsp. hirta; subsp. irrorata; subsp. eciliata; subsp. hispida; subsp. pentheri.

[Caption fragment: sepal, x 16, drawn from Schlechter 8480 (PRE).]

This subspecies is very similar to some forms of subsp. pentheri, some of which used to constitute part of what was formerly G.
pilifolia N.E.Br. In these forms the calyx and leaves are eglandular and the hairs very plumose. Subsp. hispida appears to be very restricted in its distribution, occurring only on the sandy hills and flats near Paleisheuwel associated with dry fynbos scrub which, according to Acocks's map of Veld Types (1953), is classified as True Fynbos and not Coastal Macchia.

(e) subsp. pentheri (Zahlbr.) E. G. H. Oliver, comb. et stat. nov.

G. pentheri Zahlbr. in Ann. Naturh. Mus. Wien. 20: 42 (1905); N.E.Br. in Fl. Cap. 4,1: 1128 (1909). Type: Elandsfontein, Clanwilliam, Aug. 1894, Penther 2925 (BM!).

Cape.-3218 (Clanwilliam): Uitkomst, Graafwater, 427 m (-BA), Compton 4945 (BOL; NBG); 4949 (BOL); Compton 6789 (NBG; STE); Compton 24218 (NBG; STE); Kanovlei, east of Graafwater, 396 m (-BA), Oliver 3869 (STE).

Subsp. pentheri occurs frequently in scattered populations in sandy areas on the mountains on the west side of the Olifants River near Clanwilliam. Unfortunately much of the habitat of this taxon has been lost to farming practices and all that remains is in the rocky unusable areas. The fynbos in which the plants grow may, without human and animal intervention, become quite tall, and erect plants of this taxon have been seen up to 1 m high.

(f) subsp. hirta (Klotzsch) E. G. H. Oliver, comb. et stat. nov.

Fig. 9.-Distribution of Grisebachia ciliaris: subsp. ciliaris; subsp. bolusii; subsp. involuta; subsp. ciliciiflora; subsp. multiglandulosa.

...calyx divided to the base and muticous anthers. He then separated the species on the nature of the indumentum on the leaves, the sepal size and the length and form of the sepal hairs. These species were G. bolusii, G. apiculata, G. involuta, G. velleriflora, G. dregeana and G. zeyheriana. A seventh species, G.
thunbergii (G. ciliaris), he characterized incorrectly by placing it with those species not having a distinctly constricted corolla. On the small amount of material available to Brown the recognition of these taxa as distinct species was feasible, but numerous subsequent collections have provided a considerable degree of variation which broke down many of the existing discontinuities in the median bracteoles (Figs 10 & 11), sepal hairs (Fig. 12) and leaf glands.

Fig. 11.-Grisebachia ciliaris, scatter diagram showing the variation in the median bracteole of the outer 3 flowers in a single inflorescence. Each dot represents the mean measurement of each inflorescence.

Compton 20884 (BOL; NBG; STE); Hutchinson 763 (BM; BOL; K; PRE); 790 m, Oliver s.n. (STE); 762 m,

Fig. 13.-Grisebachia ciliaris subsp. ciliaris. 1, flower, x 8; 2, corolla, x 8; 3, bracteoles, x 16: a, laterals; b, median; c, median inside view; all drawn from the fragment of the holotype (K); 4, flower, x 8; 5, corolla, two views, x 8; 6, sepal, x 16; 7, anther, side, front and back views, x 16; 8, ovary, x 16; 9, leaf, x 16; all drawn from Oliver 3860 (STE); 10, sepal, x 16; 11, ovary, x 16; both drawn from Marloth 7646 (STE); 12, median bracteoles, x 16, a from Marloth 7646 (STE) and b from Middlemost 1594 (NBG); 13, anther, back, front and side views, x 16, drawn from Drège 7803.

There are three groupings of populations within the subspecies. The northern group occurs on the Niewoudtville Plateau and has no plumes on the leaves nor cilia on the calyx apices. The two southern populations on the Gifberg and at Lockenberg are intermediates between subsp. ciliaris and subsp. bolusii in occasionally having plumes on the cilia and a few cilia on the sepal apices. Due to the large spatial separation, a hybrid origin is ruled out. Another line of relationship occurs with subsp. ciliciiflora of the Citrusdal area. From this latter it differs in the type of
plumose sepal cilia and in having short sepal cilia. Sometimes the leaves of these two subspecies are remarkably similar in having a crisped retrorse indumentum and no cilia.

G. ciliaris (G. thunbergii Rach) was incorrectly assessed by Brown, who judged the corollas to be without any distinct constriction in the middle. This condition is apparent in the dried material which, when thoroughly boiled, sometimes shows a slight constriction. However, all fresh material examined possessed distinctly constricted corollas. This subspecies is most closely related to G. incana from the flats near Cape Town, an unusual distributional relationship. It is distinguished by the calyx hairs being as long as or longer than the width of the sepals and, if equal, with a distinct tuft of apical crisped hairs.

(b) subsp. bolusii (N.E.Br.) E. G. H. Oliver, comb. et stat. nov.

G. bolusii N.E.Br. in Fl. Cap. 4,1: 340 (1906). Type: Mountains near Pakhuis Pass, Bolus 8681 (BOL, holo!; K!; NH!; PRE!; STE!; Z!).

Fig. 15.-Grisebachia ciliaris subsp. ciliciiflora. 1, corolla, x 8; 2, leaf, x 16; both drawn from Oliver 4017 (STE); 3, lateral bracteole, x 8; 4, median bracteole, x 8; 5, sepal, x 8; 6, anther, front, side and back views, x 16; 7, ovary, x 16; all drawn from the holotype, Masson s.n. (BM); 8, leaf, x 16, drawn from Stokoe in SAM 54847 (STE); 9, leaf, x 16, drawn from Schlechter 4969 (STE).

...at the base, pubescent to pilose outside and inside in the middle region. Anthers about 1 mm long, muticous, long scabrous; pore half the length of the cell. Ovary glabrous. Fig.
15.

Compton 16128 (NBG; STE); Compton 20965 (BOL; NBG; PRE; STE); Leipoldt in BOL 21655 (BOL); Stokoe in SAM 54847 (SAM; STE); Elandskloof Pass, (-CA), Hafstrom & Acocks 1043 (PRE; S; STE); Waterfall between Citrusdal & Elandskloof, (-CA), Stokoe 7712 (BOL; NBG; NH; PRE); Williams sub Baker 1821 (BM); Kleinfontein east of Citrusdal, 762 m (-CA), Oliver (E; K; MO; P; PRE; STE); Allandale, south-east of Citrusdal, 548 m (-CA), Oliver 5007 (BM; G; S; STE); near Citrusdal (-CA), Rust s.n.

Subsp. ciliciiflora occurs in the Citrusdal area mostly at lower altitudes on sandy open patches and on the mountains north-west of the town. The locality near Wuppertal, Compton 24264, is unusual and inexplicable.

(e) subsp. multiglandulosa E. G. H. Oliver, subsp. nov., similis subspecie ciliciiflorae, sed distinguitur pilis longis glandulis marginibus et paginis abaxialibus foliorum bracteolarumque.

Type.-Cape, Olifants River Valley above Toorgat on the farm Grootfontein, Oliver 3972 (STE, holo.!; K!; MO!; NBG!; PRE!).

Subsp. multiglandulosa is confined to the mountains at the southern end of the main Olifants River valley, where it occurs mainly on sandy open flat areas. The majority of populations is allopatric to those of subsp. ciliciiflora, occurring at high altitude only. In the region of Piekenierskloof there is, however, an overlap. The specimen, Levyns 1367, is an intermediate very similar to the type and only collection of G. apiculata (subsp. ciliciiflora). Unfortunately all the populations appear to have been removed in this area by agriculture, thus making a study of the populations impossible.

Figs 17 & 18. A species forming small compact shrublets occurring in sandy places on the flats between Sir Lowry's Pass, Kraaifontein and Mamre, flowering early from April to July. Fig.
21. A species forming compact erect shrublets up to 50 cm high occurring in a very restricted area of sandy flats south-east of Swellendam, flowering from July to September. The surviving populations in the area now lie within the boundaries of the Bontebok National Park.

There is some confusion about the collector of the lectotype of the species. It was labelled as "C.B.S. Masson" but N. E. Brown changed this to Niven. The handwriting is definitely not Niven's, but matches that on the type of Eremia brevifolia Benth., which Brown cited as collected by Masson. Neither of these labels exactly matches the handwriting of Masson in the Kew Archives. Brown labelled this sheet as the type. Variation within the species is very slight. The sepals in Zeyher 3330 are somewhat narrower than in the other specimens and have some gland-tipped hairs.

The G. parviflora/G. minutiflora group

Along the main chain of mountains and high-level plateaux running in a north-south direction from the Cedarberg through the Cold Bokkeveld to the Worcester District, there is a series of vicarious taxa. Superficially they are very similar in their low compact to spreading habit, small white subcalycine to calycine flowers, manifest anthers and ciliate calyx lobes. One very variable taxon (A) is widespread from the Cedarberg to the mountains south of Worcester and eastwards to near Swellendam, occurring on dry stony slopes with short dry restiad/ericoid vegetation. It usually forms a low compact to sprawling shrublet with branches spreading amongst the restiads. It consists of three distinct allopatric vicariads and has had the names G. parviflora (Klotzsch) Druce (G. eremioides MacOwan) and G. similis N.E. Br.
applied to it. The second taxon (B) is much more restricted, occurring in the central and southern Cold Bokkeveld on dry open sandy flats, and forms a low compact shrublet, sometimes slightly sprawling and rooting at the nodes. This has had the names G. minutiflora N.E. Br. and G. nodiflora N.E. Br. applied to it. As with taxon A, there are two distinct allopatric vicariads in this taxon. To my knowledge the two taxa only grow in reasonably close proximity in the Hartebeeskloof and Winkelhaak areas, where I have observed them. In the former locality the plants of taxon A (G. parviflora) were few and were outliers of the larger populations higher up the rocky slopes. The plants of taxon B (G. minutiflora) were locally common on open sandy patches. In the latter locality taxon B was flowering three months later than the early-flowering vicariad of taxon A.

The basic variation occurring in G. parviflora is found in the shape of the calyx lobes, the arrangement of the leaves and the indumentum of the flowers, the widest range being in the shape of the calyx lobes. In subsp. parviflora three groups occur: (a) large lobes with a broad elliptic base and acuminate apex, sparsely gland-ciliate or ciliate, common, central to southern in distribution, represented by the neotype, Zeyher 1117; (b) large lobes, narrow elliptic-oblong with an acute apex, distinctly gland-ciliate, less common and confined to the central region of the distributional range (Winterhoek to central Cold Bokkeveld); (c) small lobes, mostly elliptic-oblong with an acute apex, northern in distribution, centred on the southern and central Cedarberg.

N. E. Brown also described G.
similis var. pubicalyx. The collection Maguire 1778 from Gydo, presumably from one population, possesses glabrous calyces. This also occurs in the Compton and Esterhuysen collections from Slab Peak. The indumentum of both the calyx and corolla occurs randomly, with most southern collections having a glabrous calyx, but the lack of hairs and pubescence of the collections from the northern Cedarberg and eastern Bokkeveld are significant in the delimitation of subsp. eglandula and subsp. pubescens.

(b) subsp. eglandula (N.E. Br.) E. G. H. Oliver, stat. nov.

Grisebachia eremioides var. eglandula N.E.Br. in Fl.

Fig. 22.7, 22.8.

Cape.-3219 (Wuppertal): Pakhuis (-AA), Barker 4505 (NBG); Esterhuysen 5924 (BOL; PRE); Esterhuysen 21764 (BOL); Krakadouw, 910 m (-AA), Bodkin s.n. (BOL); Stokoe in SAM 55129 (NBG; PRE; SAM); Stokoe in SAM 56776 (SAM); Rocklands, 790 m (-AA), Kruger 1031 (STE); Eselbank, 1220 m (-AC), Schlechter 8818 (BM; BOL; G; K; P; PRE; STE; Z); Crevasse Peak, 1220 m (-AC), Taylor 7459 (PRE; STE). Without precise locality: near Clanwilliam, Leipoldt 135 (BOL); Bokkeveld, 1580 m, Schlechter 8919 (BM; BOL; E; G; K; MO; P; PRE; STE; UPS; W; Z).

In the Cedarberg, the northern part of the distribution range of this species, two reasonably distinct groupings, A & B, of specimens can be made on the shape of the calyx lobes, the pubescence, the arrangement of the leaves and the distribution. Group A has small narrow acute puberulous calyx lobes with no marked apiculus and erect straight leaves and is ascribed to subsp. parviflora. Group B has generally small flattened quadrate to subquadrate glabrous calyx lobes with a distinct apiculus and markedly recurved leaves. The pubescence on the ovary is much longer and denser. There is very little overlap in these characters between the two groups in the Cedarberg and both seem to be fairly distinct. The affinities of group B appear to lie with the collections much further
south. The leaf arrangements in A and B are very distinct in the Cedarberg but not between B and some random southern collections of subsp. parviflora, in which the leaves can be spreading and curved, e.g. in Bolus 5403 from Tulbagh. The calyx shapes of B, although distinct in the Cedarberg, have similarities in some southern collections. The distributions of the two groups A & B are relatively easily separable, with A occurring west and south of the Krakadouw-Welbedacht mountain range and B north and east of the range. A detailed investigation of the range is necessary to ascertain whether this separation is in fact true or just due to lack of records.

N. E. Brown annotated the collection Leipoldt 135 in BOL as the type. It is unfortunate that it is unlocalized. The collection Schlechter 8919, given as just Bokkeveld, is undoubtedly this subspecies and I regard the locality as an error.

(c) subsp. pubescens E. G. H. Oliver, subsp. nov., a subspecie typica et subspecie eglandula floribus majoribus, tubo corollae omnino pubescenti, calyce omnino pubescenti, distributione et florescentia dignoscenda.

Type.-Cape, Ceres Dist.: Katbakkies in the Swartruggens (-DC), Oliver 4310 (STE, holo.; BM; BOL; E; G; K; MO; NBG; PRE; S).

An erect to spreading shrub up to 50 cm high. Calyx pubescent; lobes oblong-triangular to broadly so, ciliate with fine simple hairs and stouter gland-tipped hairs admixed. Corolla 2-3 mm long and 1,2-1,7 mm broad; tube pubescent over the whole length, tubular-ellipsoid. Fig.
22.9-22.12.

Cape.-3219 (Wuppertal): Schurweberg, east of Bokkeveld Tafelberg, 1060 m (-CD), Esterhuysen 20651 (BOL; K; NBG; MO; PRE; S; STE); Zuurvlakte north of Rietvlei in the Swartruggens, 1060 m (-CD), Oliver 6114 (PRE; STE); Katbakkies in the Swartruggens, 1220 m (-DC), Levyns 1860 (CT; SAM); 1188 m, Oliver 4310 (BM; BOL; E; G; K; MO; NBG; PRE; S; STE); Oliver 4312 (B; C;

There are several collections from the eastern part of the Cold Bokkeveld which have a different appearance and earlier flowering time from the rest of the collections of G. parviflora. The flowers are generally much larger, with the corolla tube completely pubescent. The Levyns collection from Katbakkies has the corolla tube pubescent for three-quarters of its length. In nearly all the collections of G. parviflora the corollas are glabrous to puberulous, and then usually below the level of the sepals. The collections of Middlemost from Bokkerivier to the south are intermediate, tending towards the pubescence of the Levyns collection. The flowering time of the eastern Bokkeveld collections is significant, being from June to September for material in full flower. The collections of Levyns 1860 and Esterhuysen 20651 & 29847 and Oliver 6114, though recorded for Sept., Oct. and Nov., are of fruiting material. The flowering time for collections of G. parviflora elsewhere, and particularly in the adjoining areas of the Bokkeveld, is Sept.-Dec. for material in full flower. This means that cross-pollination, even if the populations were sympatric, could not take place. The Middlemost collections from Bokkerivier were collected in November in full flower. Despite there being little morphological disjunction between the group of eastern Bokkeveld collections and the rest of G.
parviflora, I consider that the geographical and reproductive isolation warrants recognition of this group as a distinct subspecies of parviflora.

7. Grisebachia minutiflora N.E.Br. in Fl. Cap. 4,1: 348 (1906). Type: Cape, near Klein Vlei in Cold Bokkeveld, Schlechter 10064 (BM!; BOL!; G!; K, holo!; MO!; P!; PRE!; S!; STE!; W!).

Corolla 4-lobed, obconic to tubular with spreading upper half, not constricted, glabrous inside and outside, up to 2,4 mm long; lobes short, broad, obtuse, slightly crenulate, erect or slightly spreading. Stamens 4, manifest or slightly exserted; filaments filiform, glabrous, sigmoid at the apex; anthers up to 0,7 mm long with almost parallel sides, oblong, minutely scabrous, awned about $ of the way up the back of the cell; awns up to i the length of the cell, sometimes spreading; pore about £ the length of the cell. Ovary 2-celled with a single pendulous ovule per cell, ellipsoid, puberulous at the apex; style filiform, glabrous, far exserted, up to 2,2 mm long; stigma simple or slightly swollen. Fig. 24.

A species forming compact erect to semi-spreading low shrublets, occurring in sandy places in the Cold Bokkeveld north of Ceres, flowering from October to January. A variable taxon in which two subspecies are recognized.

Key to the subspecies
Cilia on the calyx gland-tipped ................... (a) subsp. minutiflora
Cilia on the calyx not gland-tipped ............ (b) subsp. nodiflora

Leaves pubescent when young with several sessile glands on the margins, becoming glabrous except on the adaxial surface, or glabrous with numerous sessile glands over the surface but pubescent on the adaxial surface. Pedicels almost absent or up to 0,4 mm long; bracteoles usually adpressed, mostly up to 1,1 mm long, occasionally up to 1,3 mm. Calyx pubescent in zones, glabrous on inner surface, cilia plumose or simple, gland-tipped. Figs 24.1-24.8 & 25.
Type.-Cape, Ceres District, Swartruggens in the Cold Bokkeveld, Oliver 6105 (STE, holo.; BM; BOL; E; G; K; MO; NBG; P; PRE; S; W).

Fig. 26.-Grisebachia secundiflora. 1, flower, x 8; 2, corolla, x 8; 3, three bracteoles, x 16; 4, sepals, lateral and abaxial, x 16; 5, anther, back, front and side views, x 16; 6, gynoecium, x 16; 7, leaf, x 8; all drawn from the holotype, Oliver 6105 (STE); 8, flower, x 8, drawn from Oliver 5044 (STE).

Fig. 27.-Distribution of Grisebachia secundiflora.

...to short tubular and much inflated in the lower half, finely pubescent in the middle region and glabrous inside, white; lobes varying from 0,6 x 0,6 to 1,0 x 0,9 mm, erect to slightly spreading, obtuse, glabrous. Stamens 4, free; filaments narrow linear, glabrous, white; anthers up to 1,0 x 0,5 mm, included to manifest, subbasally attached on the dorsal surface, smooth to minutely scabrid, aristate; awns dorsal, pointing downwards, about half the length of the cell, minutely ciliate, white; pore about half the length of the cell; pollen grains single. Ovary 1-celled with a single pendulous apical ovule, very rarely 2-celled, slightly oblique, 0,7 x 0,6 mm, glabrous, greenish, seated on a distinct dark red nectariferous disc; style filiform, 4,0-4,5 mm long, exserted, glabrous; stigma capitellate. Figs 26 & 27.

Table Mountain Series. This is very evident in the localities of G. rigida, G. nivenii, G. minutiflora, G. secundiflora and G.
ciliaris subsp. ciliaris, subsp. bolusii, subsp. ciliciiflora and subsp. multiglandulosa. The former is particularly widespread from Mamre to the Paleisheuwel area. In the north it is confined to sandy pockets in the mountains. The isolated localities of G. incana in the area from Mamre to Sir Lowry's Pass are ascribed to the occurrence of suitable sandy restionaceous sites. No species have been recorded from the extensive sands of the Cape Flats proper. This is probably due to the sand over ...

The habitat of G. ciliaris subsp. involuta is not known, as details are not given on the three collections and I have not collected the subspecies, but it would probably fit the requirements. On the lowlands of the west coast the sands are recent and alternate with the heavier clay soils of the Malmesbury beds. Here the species G. plumosa and G. incana occur.

Perennial woody shrublets, erect up to 50 cm, rarely 1 m, or compact and spreading to prostrate and spreading. Leaves 3-nate, rarely 4-nate, erect imbricate to spreading and recurved, pubescent, often with stout subplumose to plumose hairs on the margins and abaxial surface. Flowers in terminal heads usually on short lateral branchlets, sometimes forming congested pseudo-spikes. ... surface scabrate to microscabrate with element pila ...; rods free to fused. Ovary 2-celled with a single subapical pendulous ovule in each cell, rarely 3-celled, in one species obliquely 1-celled, with a distinct nectariferous disc. Style exserted. Stigma simple to capitellate. Fruit a hard, apparently indehiscent nut.

... 6: 339 (1802), pro parte; Thunb., Fl. Cap. 364 (1813), pro parte. Eremia Klotzsch in Linnaea 12: 498 (1838), pro parte; Phill. in Jl S. Afr. Bot. 10: 70 (1944), pro parte; et Gen. ed. 2, 560 (1951), pro parte.

Key to the species
Vegetational structure and plant diversity relations in a sub-alpine region of Garhwal Himalaya, Uttarakhand, India

The present study investigated the community structure of a sub-alpine region of the Mandal Valley (2200 to 3000 m) along the altitudinal gradient in Garhwal Himalaya, based on analytic and synthetic characters. Poa annua was dominant and Potentilla fulgens was co-dominant along the altitudinal gradient; most plant species in the different communities were contagiously distributed. Species niche widths indicated wide distribution of the species along all altitudinal gradients.

INTRODUCTION

The western Himalaya comprises a variety of forest types at various altitudes, and within one altitude eco-factors such as topography, inclination of slope, aspect and soil type further affect forest composition and vigour (Shank and Noorie, 1950). The diversity and richness of the vegetation in the sub-alpine region of Garhwal Himalaya has lured researchers from time to time (Osmaston, 1922; Smythe, 1938; Gupta, 1962; Rau, 1975, 1982). The analysis of quantitative and qualitative characters of the vegetation of the Himalayan region has been emphasized by several researchers (Ralhan et al., 1982; Sharma and Kumar, 1992; Sharma, 1996). Vegetation types along an altitudinal gradient between Mandal and Chopta have been studied to understand species composition, population structure and ecosystem stability in the moist mixed temperate forests of the higher Himalaya (Sharma et al., 2001). In the Himalaya, changes in snowfall cause rapid recession of glaciers, presumably a natural phenomenon, and vegetation grows rapidly due to the moisture in the soil; however, current human-induced changes are accelerating this recession (Nautiyal et al., 2001). Impacts on alpine ecosystems are generally predicted in terms of variation in vegetation composition and invasion of species from lower altitudes (Nautiyal et al., 2004). The ultimate concern
is, therefore, the stability of alpine and sub-alpine soils, as this is determined by vegetation cover (Körner, 1999). Due to anthropogenic activities, viz. overgrazing, construction etc., the vegetation of the study area is decreasing.

*Corresponding author. E-mail: adityabisht1234@yahoo.com.

MATERIALS AND METHODS

Chamoli District is the central part of the Western Himalaya. It has attracted people from all over the world for its beautiful mountains, valleys, rivers, flora and fauna, snowy peaks, religious shrines and the famous Valley of Flowers. The study area includes the Mandal valley and adjacent places ranging from 2200 to 3000 m above sea level. The valley lies between 30° 10' N latitude and 79° 25' E longitude in Chamoli.

Three sites were selected for the present investigation along this altitudinal gradient. Site 1 was located at 2200 m above sea level, site 2 at 2700 m and site 3 at 3000 m. The study was carried out from November 2002 to 2004 for consecutive years, covering four seasons, viz. winter, spring, summer and rainy.

The research site is characterized by a moderate and cool climate, with a mean maximum temperature of 10.85 ± 2.0°C in January and 27.89 ± 18.03°C in July. The mean minimum temperature was recorded to be 7.56 ± 2.2°C in January and 24.66 ± 1.8°C in July. The average rainfall fluctuated between 17 mm (January) and 750 mm (July) during the two study years. Mean relative humidity varied between 58% in February and 55% in September.

The vegetation was investigated seasonally by recording the density of all species in twenty randomly placed 50 × 50 cm quadrats at each site. Frequency, density and abundance were calculated following Curtis and McIntosh (1950). The abundance-to-frequency ratio was used to interpret the distribution

Table 1. Frequency and density of some plant species (Mean ± Standard deviation) in three stands.
[Table 1 columns: Name of species; Frequency; Density; Niche width (Bi).]

pattern of species (Whitford, 1949). The diversity index was calculated using the Shannon-Wiener information function (Shannon and Wiener, 1963) and the concentration of dominance (Cd) was measured by Simpson's index (1949). The coefficient of community similarity (Jaccard, 1912) was worked out, after computing the importance value index (IVI) of the species, to assess the similarity among the vegetation on different slopes.

Similarly, niche width was calculated for important species using the following equation (Levins, 1968):

Bi = (Σj Nj)² / Σj Nj²

where Nj is the density value (plants m⁻²) of the species in stand j.

RESULTS

A total of 90 species were collected from the study area. Of these, 46 species were present at site 1, 60 species at site 2 and 65 species at site 3. The frequency and density data are presented in Table 1. Poa annua had the highest mean frequency and density at all three sites. Total basal cover (TBC) and IVI are presented in Table 2. Potentilla fulgens showed the highest TBC at all three sites. By comparison, P. annua exhibited the highest IVI and was therefore the dominant species in the study area. Gnaphalium hypoleucum and Polygonum emodi (2.97) showed the highest niche widths (Table 2).

Except for a few species showing random distribution, most species displayed contagious distribution patterns. Regular distribution was rarely observed in this area. The maximum contagious distribution was found at site 1, followed by site 3 and site 2. Regular distribution was completely absent at sites 2 and 3. The general diversity index showed variation for different communities in the same season during the two consecutive years. For instance, the diversity index increased in some communities while it declined in others in the second sampling year. Tables 3 and 4 present the dominance, general diversity, alpha diversity and evenness values.
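The indices used in the Methods (Levins' niche width, the Shannon-Wiener diversity index and Simpson's concentration of dominance) can be sketched as below. This is a minimal illustration only; the density values are hypothetical and not taken from the paper's tables.

```python
import math

def levins_niche_breadth(densities):
    """Levins (1968) niche breadth: Bi = (sum Nj)^2 / sum(Nj^2),
    where Nj is the density of the species in stand j."""
    total = sum(densities)
    return total ** 2 / sum(n ** 2 for n in densities)

def shannon_diversity(densities):
    """Shannon-Wiener index H' = -sum(pi * ln(pi)) over species proportions."""
    total = sum(densities)
    props = [n / total for n in densities if n > 0]
    return -sum(p * math.log(p) for p in props)

def simpson_dominance(densities):
    """Simpson's (1949) concentration of dominance Cd = sum(pi^2)."""
    total = sum(densities)
    return sum((n / total) ** 2 for n in densities)

# Hypothetical densities (plants per m^2) of one species in three stands
stand_densities = [4.0, 3.0, 1.0]
print(round(levins_niche_breadth(stand_densities), 2))  # → 2.46

# Hypothetical community: densities of five species in one stand
community = [10, 8, 5, 3, 1]
print(round(shannon_diversity(community), 2))  # → 1.41
print(round(simpson_dominance(community), 2))  # → 0.27
```

A species occurring evenly across all stands approaches the maximum niche breadth (the number of stands), while one confined to a single stand gives Bi = 1, which is why the paper reads high Bi values as wide distribution along the gradient.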
The highest values of H, ranging from 2.77 ± 0.58 to 2.99 ± 0.47, were recorded at site 2, and the lowest, ranging from 2.59 ± 0.23 to 2.64 ± 0.48, at site 1. Sites 1 and 2 showed the highest alpha diversity, with 44 and 41 species in the rainy seasons of the two sampling years. Alpha diversity exhibited significant variation between seasons, increasing from summer to the rainy season. The highest species richness was 44 (site 2) in the rainy season, while it declined to 11 in winter (site 2). Beta diversity (between-habitat diversity) was highest between sites 1 and 3 (1.05), followed by sites 1 and 2 (0.98). The dominance value increased at most of the sites in the consecutive years. The mean value varied between 0.09 ± 0.02 and 0.09 ± 0.04 at site 1, between 0.08 ± 0.05 and 0.10 ± 0.05 at site 2, and between 0.08 ± 0.04 and 0.09 ± 0.04 at site 3. The evenness values indicated the maximum sharing percentage among all species, with no single species contributing disproportionately in any habitat. Evenness (mean) varied between 1.85 ± 0.11 and 1.97 ± 0.13 at site 1, between 2.08 ± 0.16 and 2.10 ± 0.11 at site 2, and between 2.27 ± 0.24 and 2.99 ± 0.26 at site 3. Jaccard's similarity index was calculated between pairs of stands which resemble each other in physiognomy. Maximum mean similarity was observed between stands 1 and 3 (20.00), followed by stands 2 and 3, and stands 1 and 2.
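The Jaccard coefficient used for the between-stand comparisons can be sketched as follows; the species lists below are illustrative only (the paper computes the coefficient from importance value indices, while this sketch uses simple presence/absence).

```python
def jaccard_similarity(species_a, species_b):
    """Jaccard (1912) coefficient as a percentage:
    100 * (shared species) / (total distinct species in either stand)."""
    a, b = set(species_a), set(species_b)
    return 100 * len(a & b) / len(a | b)

# Hypothetical presence lists for two stands
stand1 = {"Poa annua", "Potentilla fulgens", "Fragaria nubicola"}
stand3 = {"Poa annua", "Fragaria nubicola", "Polygonum emodi",
          "Gnaphalium hypoleucum"}
print(round(jaccard_similarity(stand1, stand3), 1))  # → 40.0
```

Low percentages, such as the 20.00 reported for stands 1 and 3, indicate that the stands share only a small fraction of their combined species pool.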
DISCUSSION

The study of phytosociological attributes is useful for comparing one community with another from season to season and year to year (Singh, 1972). Each species within a community has a large measure of structural and functional individualism and a more or less different ecological amplitude and modality (Singh and Joshi, 1979). Species diversity reflects the gene pool and adaptation potential of the community (Odum, 1971). The temperature remained nearly constant during the morning hours, and together with the cold winds this cumulative effect enhanced senescence in most of the herb species immediately after the completion of flowering and fruiting at the onset of the winter period. In the case of perennial herbs, on the other hand, the above-ground parts stored food and persisted through the winter season.

For a particular species, higher frequency indicated more frequent distribution at the sites due to optimum soil and environmental conditions. P. annua showed 90% frequency in stands 1 and 2, while Fragaria nubicola showed the maximum frequency in stand 3. Flat slopes favour grazing, and repeated defoliation in heavily grazed areas stimulates tillering (Harper, 1977). It is evident from the present investigation that most of the species were contagiously distributed in all seasons and at all sites. Regular and random distribution patterns are indicative of uniformity of environment, as reported for temperate Himalayan forests (Saxena and Singh, 1982; Singhal et al., 1986). Bankoti and Tewari (2001), Khera et al. (2001), Sharma and Baduni (2000), Bhandari et al. (1998), Ghildyal et al. (1998) and other
worker reported the similar pattern of species diversity in distributed forest of central Himalaya, with special reference aspect and altitudes.The present finding for diversity index falls well within the range of other temperate forest.Monk (1967) and Risser and Rice (1971) obtained 2.3 as the highest value for diversity index for temperate vegetation.The diversity ndex for Himalayan grazing land ranged between 1.91 and 3.74 (Pandey, 1997), which was higher than temperate tree and lower rate of evolution and diversification of communities (Simpson, 1949) and moreover due to severity in the environmental condition (Connell et al., 1964). The other component of diversity is evenness or equability, which means the apportionment of individuals among the species.The evenness varied between 1.72 (Stand 1st ) to 2.64 (Stand 3rd).Most cooler conditions with moderate soil temperature and lower degree of human disturbances is the main factors for equal share of individuals among species in all species in all stands.The highest evenness value was recorded at stand 3rd at the upper elevation while the least value was calculated as stand 1st.Grazing and fodders collection was more frequent in this stand as compared to other.In this stand diversity varied in each quadrate for same species because the women folk collected fodder and other important medicinal plants for traditional uses, while goat and sheep grazed the twinges of herbs species at steeper slopes.The other ground vegetation completed their life cycle in short period.Most of the other species were not able to complete their life cycle in short life period.Most Bisht and Bhat 405 of the other species were not able to complete their life cycle due to the grazing by livestock for the long time for the winter season.Alpha diversity expresses the species richness in a community or given area.In present investigation it has been observed that all aspects in upper elevation represent more species richness as compared to 
the lower elevation. In stand 1 the mean alpha diversity was 25.12, in stand 2 it was 26.00, and in stand 3 it was 18.13. The stands with the highest diversity were those whose communities had a more open canopy. These findings are in agreement with those of Khera (2001).

Beta diversity is another important measure of habitat diversity between two habitat systems; it provides information about the degree of partitioning of habitats by species and, together with alpha diversity, about the overall diversity and biotic heterogeneity of an area (Rawat, 2003). Higher beta diversity values are indicative of a high rate of species turnover along the environmental gradient.

The low-diversity stands are associated with higher levels of exchangeable potassium and soil nitrogen and maximum water-holding capacity, and they have a relatively higher standing crop. This indicates that competition closely regulated the number of species capable of coexisting in comparatively more productive environments (Shuakal et al., 1981). In conclusion, this area experiences considerable anthropogenic pressure, as evidenced by the absence of higher girth classes and low rates of succession; the herb species of this region therefore need conservation strategies.

Table 2. Total basal cover and importance value index (mean ± standard deviation) in the three stands.
Table 3. Distribution pattern (%) of the plant species in different seasons and stands.
Table 4. Concentration of dominance (Cd), general diversity index (H), alpha diversity, and evenness values of plant species in different seasons and years at all stands.
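The beta diversity values discussed above are not given an explicit formula in the text; a common choice, assumed here for illustration, is Whittaker's formulation, where pooled (gamma) richness is divided by the mean per-stand (alpha) richness:

```python
def whittaker_beta(stand_species_lists):
    """Whittaker's beta diversity: gamma richness / mean alpha richness.
    Values near 1 mean the stands share most species; larger values
    indicate faster species turnover between habitats."""
    sets = [set(s) for s in stand_species_lists]
    gamma = len(set().union(*sets))           # pooled species richness
    mean_alpha = sum(len(s) for s in sets) / len(sets)
    return gamma / mean_alpha

# Two hypothetical stands sharing two of their three species.
beta = whittaker_beta([["a", "b", "c"], ["b", "c", "d"]])
```

Pairwise values slightly above 1, like those the paper reports (0.98-1.05), arise when two stands share most but not all of their species.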
The “GU-GU-RU” project to eliminate discrimination related to the health effects of the Fukushima nuclear accident Background Although 12 years have passed since the Great East Japan Earthquake and the subsequent Fukushima nuclear accident, approximately 40% of Japanese citizens still believe that the current radiation exposure of Fukushima residents is likely or very likely to cause genetic effects of radiation. This incorrect understanding could perpetuate discrimination and prejudice towards those from Fukushima now and in the future. In order to provide updated knowledge and eliminate rumors related to radiation, the Japanese Ministry of the Environment launched the “GU-GU-RU” project in 2021, consisting of five sections. Objective (1) To discuss the objectives and effects of the “GU-GU-RU” project (results after the first year), (2) to present administrative measures that may be effective in the long term in preventing unjustified discrimination and prejudice, and (3) to eliminate rumors in the event of future large-scale disasters, including radiation disasters. Methods We describe the contents of each section of the project and report the results of its first-year activities. Results Among the programs, the “Radiation College” has steadily produced positive results, with nearly 1,300 students participating and 50 students sharing their thoughts and ideas. In addition, the project has adopted strategies such as creating and broadcasting a TV program and collaborating with manga, which are expected to have a significant impact on society.
Conclusions Compared to previous efforts to disseminate information on the health effects of radiation exposure, the “GU-GU-RU” project has taken a different approach, providing primary data on radiation and its health effects, which could foster a better public understanding of the health effects of radiation and help eliminate rumors that may lead to unjustified discrimination and prejudice. Background Racial and gender prejudice and social discrimination have always been present in society. Social movements, such as the civil rights movement in the US from the 1950s to the 1970s, have attempted to address these inequalities. From a public health perspective, after an epidemic of a novel infectious disease (such as HIV/AIDS or coronavirus disease 2019 [COVID-19]), discrimination has been directed against affected individuals and their families owing to the fear of infection. Indeed, it takes time for people to come to a correct understanding of infectious diseases and learn not to discriminate against affected individuals.
The health effects following radiation disasters are similar to those of infectious diseases. Long-term and secondary psychological effects (such as discrimination, prejudice, and stress) related to radiation can occur owing to changes in people's social life and environment, in addition to the direct effects of radiation exposure on the body [1,2]. In Japan, discrimination against residents who had evacuated from Fukushima, as well as bullying at schools and other places, occurred following the accident at the Fukushima Daiichi Nuclear Power Station (FDNPS) of the Tokyo Electric Power Company after the Great East Japan Earthquake (GEJE) in 2011 [3]. We consider the psychological effects of the events described above to be considerably greater than the mere physical effects of radiation exposure; therefore, we believe that a comprehensive understanding of these incidents and countermeasures to address them is a significant public health issue. However, sufficient measures have not yet been established to guide the long-term response.
The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) has reported on the health effects of radiation exposure from the FDNPS accident, stating, "No adverse health effects among Fukushima residents have been documented that are directly attributable to radiation exposure from the FDNPS accident. The Committee's revised estimates of dose are such that future radiation-associated health effects are unlikely to be discernible" [4]. On the one hand, since this report does not state that there are no radiological effects on local residents exposed to radiation after the FDNPS accident, we should keep monitoring whether stochastic effects occur in the future. On the other hand, according to the Fukushima Health Management Survey (FHMS) conducted by Fukushima Medical University, genetic effects in neonates and infants in Fukushima Prefecture did not increase between the accident and 2019. Nevertheless, nearly 40% of the Japanese public believe that health effects on the next generation of Fukushima residents due to current radiation exposure are likely or very likely to occur, which could lead to discrimination or prejudice towards those from Fukushima [5][6][7]. Based on the current situation, the Japanese Ministry of the Environment (MOE) launched the “GU-GU-RU” project in July 2021 to dispel incorrect understanding, discrimination, and prejudice associated with the health effects of radiation [8]. To achieve the designated goal, which is to reduce the proportion of Japanese citizens who hold incorrect perceptions of the health effects of radiation from 40% to 20%, the project comprises five sections: “to know,” “to learn,” “to make decisions,” “to listen,” and “to research.”
However, to date, no academic report has presented the details and effects of this project. A discussion of the effects of this project, presenting the observations made for each objective, would be useful and would provide important guidance when considering future measures to deal with long-term health effects and other issues after a large-scale disaster.

The purpose of this paper is to discuss the objectives and effects of the “GU-GU-RU” project (results after the first year), to present administrative measures that may be effective in the long term in preventing unjustified discrimination and prejudice, and to help eliminate rumors in the event of future large-scale disasters, including radiation disasters. Methods (activities) The “GU-GU-RU” project is conducted by MOE with the aim of creating occasions where people can learn to “understand and interpret information” and “judge and decide without being misled by rumors.” Specifically, the project aims to update knowledge related to the health effects of radiation and dispel incorrect understanding that may lead to discrimination and prejudice. The project title, “GU-GU-RU,” was derived from the last syllables of the Japanese verbs describing the three main pillars of the project: “learning facts and building knowledge (Tsumu-GU),” “connecting people, community, and society (Tsuna-GU),” and “messages are transmitted as a personal matter (Tsutawa-RU).” MOE aims to reduce the proportion of Japanese citizens who believe that the current radiation exposure of Fukushima residents is likely or very likely to cause genetic effects from 40% in fiscal year 2020 to 20% by the end of fiscal year 2025, that is, the end of March 2026.
The kick-off meeting was held on July 15, 2021. The Minister of the Environment, MP Shinjiro Koizumi, attended the event, and it was covered by media outlets [9,10]. Fukushima Medical University is actively participating in the project, holding various seminars and recording sessions to fulfill its role as a medical institution that promotes the healthcare of local Fukushima residents [11].

The “GU-GU-RU” project consists of five sections: “to know,” “to learn,” “to make decisions,” “to listen,” and “to research” (Fig. 1). These were set at the kick-off meeting in 2021 and are outlined below.

(1) “To know” (reading academic papers scientifically). This program provides students an opportunity to learn how to read and write academic papers, using published journal articles as reference material. It also includes a review of how we perceive information from the media and social networking services. For example, an article reporting an increase in the number of pediatric patients with congenital heart disease after the Fukushima nuclear accident has been published in an academic journal. The workshop explains the study's data acquisition methods and logical development using the data presented in the article and describes how the paper can be validated against its data sources. The session teaches participants how to interpret and understand this information [12][13][14][15].

(2) “To learn” (Radiation College). This program includes seminars and other events at universities nationwide, providing an opportunity to learn basic scientific knowledge about radiation and its health effects, as well as an opportunity to present what has been learned. Students can take part in face-to-face seminars at universities or via online videos. MOE also holds an event for those who wish to present what they have learned through this program, divided into a “presentation section” and a “dialog writing section.”
In the presentation section, applicants present their research and findings on the health effects of radiation in their own words. They can choose to be recorded either by professionals or by themselves. The presentations are later evaluated by an outside panel of judges, who select the winners of the Excellence Award.

In the dialog writing section, applicants create a dialog on discrimination, prejudice, and scientific facts following a designated scenario (for example, a woman presenting knowledge to her parents, who are concerned about the genetic effects of radiation in the context of her marriage to a male radiologist). An Excellence Award is selected in the same way as in the presentation section, and the award-winning dialog is actually dramatized.

(3) “To make decisions”. This program provides information on topics such as the health effects of radiation so that individuals can make their own decisions with confidence, based on the concept of “information provision and decision making.” After the Chernobyl nuclear power station accident in 1986, the incidence of thyroid cancer increased among children beginning 4 years after the accident owing to internal exposure to radioactive iodine, one of the health effects associated with radiation [16]. In the FDNPS accident, the radiation dose received by infants and children (mainly from radioactive iodine released from the containment vessel of the nuclear reactor) was low, and the resulting health effects were not as serious or severe as those after the Chernobyl accident [4]. However, the people of Fukushima Prefecture had a growing concern that a situation like the Chernobyl accident might occur again.

Fig. 1 Five sections of the “GU-GU-RU” project
Meanwhile, Fukushima Prefecture began the “Fukushima Health Management Survey (FHMS)” in 2011 to monitor the health of Fukushima residents [17]. The FHMS provides thyroid screening examinations for approximately 380,000 people who ranged in age from fetuses to 18 years at the time of the accident [18].

The purpose of this program is to provide appropriate information and create an environment in which individuals eligible for thyroid screening can decide for themselves whether to undergo an examination. This concept was derived from questions about whether the decision-making process for those eligible for thyroid screening was being carried out appropriately.

(4) “To listen”. The purpose of this program is to expand and improve the system that enables residents of Fukushima who are anxious about radiation to receive consultations on radiation-related issues. One of its goals is to strengthen the activities of the “Radiation Risk Communication Consultant Support Center” established by MOE [19]. Another function of the program is to provide risk communication activities and consultations on radiation to residents of Fukushima who are anxious about radiation, under the concept of “paying close attention to concerns and questions.”

(5) “To search”. This program aims to reduce anxiety about the health effects of radiation by presenting information on the official website of the project. The website can be used like a dictionary to search for data whenever people are concerned about something related to radiation. It is constructed based on the contents of a booklet titled “Basic Information Regarding Health Effects of Radiation” and the “Portal Site on Radiation Health Effects” [20,21]. Results After the kick-off meeting on July 15, 2021, each program was launched and obtained the following results: (1) and (2) “To know” and “To learn”. In the first year of the project, the program integrated “to know” and “to learn.”
Specifically, “Radiation College” seminars were held at 49 universities and 1 high school throughout Japan, with 1,345 students participating. The seminar contents were primarily (i) basic knowledge of radiation and (ii) the credibility of data published in articles, which relate to the content of “to know.” In December 2021, a public seminar (special dialog) was held at Fukushima Medical University in conjunction with the Radiation College seminar (Fig. 2).

The presentations and dialogs submitted by students were reviewed by outside experts, and an award ceremony was held on February 28, 2022, at the “GU-GU-RU” project forum. Six students received awards for their outstanding work. Among the winners in the presentation section, one student stated, “I cannot shake off the fear that prejudice will be passed on to my children and ruin their lives,” and another warned that “decreased interest will cause individuals to fixate on past information and have limited access to the latest information.” Another cited the UNSCEAR 2020/2021 report, pointing out that incorrect understanding of the health effects of radiation, and the rumors associated with the lack of updated and correct knowledge, still persist.

Fig. 2 Public seminar held at Fukushima Medical University (special dialog)

In the dialog writing section, one piece addressed excessive anxiety about radiation owing to ambiguous information and feelings of being subjected to discriminatory remarks in a profession that deals with radiation. Another presented the truth of what is written in academic papers, and a third addressed the lack of genetic effects in the second generation exposed to the atomic bomb.
In addition, on March 4, 2022, a “Nikkei Seminar” was held online for business people in a tie-up with the Nihon Keizai Shimbun (Japan Economic Times). The seminar lectures included a viewpoint from behavioral economics, and more than 500 people were estimated to have watched the seminar. The following six videos from the Radiation College are currently posted on the official website of the project described in activity section (5).

(3) “To make decisions”. The purpose of this program is to provide decision-making information for the people eligible for thyroid screening examinations (aged 11 or older as of 2021) through the FHMS. As a first step, a poster and a clear file folder featuring the manga “Hataraku Saibo” (Cells at Work!, ©Shimizu Akane/Kodansha LTD) were created and distributed in Fukushima Prefecture [32] (Fig. 3). “Hataraku Saibo” is a manga that illustrates various physiological phenomena occurring in the human body from the viewpoint of cells, in which anthropomorphic red blood cells, along with white blood cells and platelets, are the main characters. The poster and clear file folder indicate the websites showing the latest information on the thyroid screening examination and the medical organizations recruited for the examination, which can be easily accessed using a QR code.
In addition, a leaflet describing the thyroid screening examination provided by the FHMS was prepared and distributed to those eligible for the examination [33]. It provides basic information about undergoing a thyroid examination, including the radiation dose of radioactive iodine received by Fukushima Prefecture residents due to the FDNPS accident and the relationship between thyroid cancer detected in previous examinations and radiation exposure. Moreover, it explains the rights that individuals eligible for thyroid examinations have in making decisions before undergoing the examinations (for example, deciding whether to take the examination, asking questions about it, or postponing it). These explanations and messages are accompanied by illustrations.

Additionally, information on the use and application of radiation in the medical field and on radiation protection was disseminated through the production and broadcast of a TV program focusing on radiation-related professions (medical radiologists and radiology technicians enrolled at Fukushima Medical University and Kyoto Medical University) [34] (Fig. 4).

(4) “To listen”. In this program, students participating in the recording session for the presentation section of the “Radiation College” had an opportunity to hear from residents who had returned after the evacuation order was lifted. The topics included the local government's response at the time of the GEJE and the subsequent FDNPS accident, as well as reconstruction efforts to date [35]. The students exchanged opinions on municipal-level measures to promote the return of evacuated residents and gained new insights. They also visited the Great East Japan Earthquake and Nuclear Disaster Memorial Museum to learn about the actual damage caused by the earthquake and tsunami, and the evacuation following the nuclear accident [36].
Another study session, led by a faculty member from Fukushima Medical University, was held for local residents to provide information on current radiation levels in areas where the evacuation order is expected to be lifted in the future and to listen to their concerns [37] (Fig. 5). Participants raised questions about the difference between exposure from the atomic bombs dropped on Hiroshima and Nagasaki and exposure from the FDNPS accident. They also asked about the accumulation of radiation in the body from daily exposure and the radiation levels in the wild mushrooms they eat daily. A faculty member from Fukushima Medical University answered these inquiries.

In this project, various information was posted on the official “GU-GU-RU” project website. Examples include the results of a questionnaire survey on the health effects of radiation and a link to the statement from MOE regarding the letter issued on January 27, 2022, by five former Japanese Prime Ministers to the President of the European Union, which sought to clarify the misunderstanding of the causal relationship between radiation exposure and thyroid cancer [38,39]. This page has been continuously updated so that individuals concerned about the health effects of radiation can easily access it as a reliable source of information. In addition, a link to the English version of the “Portal Site on Radiation Health Effects, etc.” has been established to provide overseas access [40]. Discussion Having introduced and summarized the five sections of the “GU-GU-RU” project, we now discuss the characteristics of the project from four different perspectives.
In the past, risk communication and many projects related to the health effects of radiation have aimed to provide residents with scientifically correct data about radiation. The focus has been concentrated on scientific content, such as the types of radiation and the health risks associated with radiation exposure. The communication method has often taken the form of lectures, a one-way provision of scientific facts from lecturers (experts) to the target audience (residents).

In the “GU-GU-RU” project, by contrast, the objective of “dispelling incorrect understanding, discrimination, and prejudice” has been given priority over scientific content. This objective involves people's sense of morality, social norms, and values. Scientific content can never be 100% definitive, as new findings may always emerge and their interpretation can change. Morality, social norms, and values, however, are readily understood by the general public because they are historically, educationally, and culturally embedded in Japanese society.

The intention is for project participants themselves to realize, as a personal matter, that incorrect understanding of radiation is unconsciously linked to discrimination and prejudice, and that by updating their knowledge they can achieve the goals of the project.

(2) Targeting. In large-scale, government-led projects such as the “GU-GU-RU” project, the target audience must always be considered, and it is difficult to measure a project's effectiveness. Although scientific knowledge on radiation has been provided in past projects, it is questionable whether the Japanese public has understood it. This is evidenced by the fact that 40% of Japanese citizens still believe that genetic effects of radiation exposure on the next generation are likely or very likely to occur [38].
This result made it necessary to consider a different approach. Based on the concepts of behavioral economics and social marketing, the “GU-GU-RU” project has set its target as the generation (teens to 30s) who will face life events in the near future, such as entering university or vocational school, finding and beginning a job, getting married, becoming pregnant, giving birth, and raising children. This target was set for the following three reasons.

First, risk communication and radiation-related projects have been less effective in the past because they targeted people of all ages, blurring the target audience when creating content and forming strategies.

Second, people in their teens to 30s are the generation most likely to suffer from discrimination and prejudice, since 40% of the general public believe that the “genetic effects associated with radiation exposure” are likely or very likely to occur, as indicated at the beginning of this article. Their acquisition and appropriate dissemination of correct knowledge, even when subjected to discrimination and prejudice, must be accompanied by the ability to present data with evidence and explain it from their own perspective.

Third, the program encourages decision making by providing accurate information on radiation-related health effects. People in their teens to thirties face the variety of life events described above, and each of these events is accompanied by decision making. For those who are anxious about radiation exposure or its subsequent effects, this project attempts to alleviate their anxiety and promote appropriate, independent decision making by providing accurate information and options.

Despite these assumptions, the circumstances surrounding the “GU-GU-RU” project keep changing. The project is therefore progressing through a process of trial and error in a constantly evolving situation.
(3) Encouraging proactive action. One feature of the project is that it promotes proactive action by participants rather than passive acceptance of information.

For example, in the “to know” section, the seminars do not merely show and explain articles and their contents; they also introduce, as a contrast, articles that raise doubts about the genetic effects associated with radiation exposure after the FDNPS accident, along with verification papers and explanations of which parts of the data and logic of those papers are controversial. Finally, the seminars ask participants to verify the data presented to them from their own perspectives, rather than taking it for granted, under the theme of “reliability of information.” This applies not only to data on the health effects of radiation exposure, the main subject of the “GU-GU-RU” project, but also to all the information that surrounds us in daily life.

In fiscal year 2021, 50 students participated in the presentation section and the dialog writing section as opportunities to communicate their thoughts and ideas (students who participated in both sections are counted once).

In addition, the presentations and dialogs created by students are evaluated by experts, which motivates them to rethink their own expression styles and techniques, possibly leading to better works. Videos of the students' own presentations and dialogs are posted on the “GU-GU-RU” project website, which may encourage prospective participants to further refine their thoughts and ideas. One of the main features of the “GU-GU-RU” project is that it provides an opportunity for students who wish to share their own views on the health effects of radiation (an occasion to express their ideas). We think that continuing to provide such opportunities for students will be necessary.
(4) General discussion. Since more than 12 years have passed since the GEJE and the FDNPS accident, memories of the disaster are fading among the Japanese people. However, we should not overlook the growing indifference that results as the accident disappears from people's minds, because individual indifference is a major factor preventing the “updating of information,” one of the goals of the “GU-GU-RU” project.

The background of this phenomenon is not simple. Since the GEJE and the subsequent FDNPS accident in 2011, natural disasters (earthquakes, typhoons, volcanic eruptions, etc.) have occurred every year in Japan, and the COVID-19 pandemic became a health issue that commanded great public attention. These incidents constantly overwrite people's memories of other natural disasters and health crises. As a result, public awareness of the GEJE and FDNPS accident has become relatively low, as memories diminish or people gradually become indifferent. Consequently, 40% of the Japanese public still holds incorrect understanding about the health effects of radiation.

In particular, the impact of the COVID-19 pandemic, ongoing since 2020, has spread throughout Japan, transforming our way of life through limited medical care, restrictions on daily activities, and the continuous wearing of masks. The social impact of COVID-19 was greater than that of the GEJE, and information on COVID-19 took precedence over memories of the GEJE and FDNPS accident, accelerating the decline of such memories among the general public.
However, it remains necessary to shed light on discrimination and prejudice toward those from Fukushima regarding the health effects of radiation through the “GU-GU-RU” project, and this effort must not be abandoned. We therefore need to update people's knowledge by steadily providing and disseminating the latest scientific facts and the current situation of Fukushima, which could revive people's fading awareness of the situation. This is a necessary task for eliminating discrimination and prejudice based on incorrect understanding of the radiation exposure associated with the FDNPS accident.

Looking ahead, on the one hand it is important to try to eliminate discrimination and prejudice by updating participants' knowledge of the health effects of radiation; on the other hand, we need to present the possibility of stochastic effects of radiation from mathematical and scientific perspectives. We must also explain the significance of the radiation safety and radiation protection measures implemented in medical settings, which are necessary for us, and must not underestimate them. Conclusion The “GU-GU-RU” project was initiated to provide updated knowledge related to radiation and to eliminate rumors that may lead to unjustified discrimination and prejudice regarding the health effects, especially the genetic effects, of radiation. The goal of the project is to reduce the proportion of Japanese citizens nationwide who believe that the current radiation exposure of Fukushima residents is likely to cause genetic effects of radiation from 40% to 20% by the end of March 2026.
The kick-off meeting held in July 2021 was covered by the press, and the project attracted a great deal of public attention. Among the programs, the "Radiation College" has steadily produced positive results, with nearly 1,300 students participating and 50 students sharing their thoughts and ideas. In addition, the project has adopted strategies such as creating and broadcasting a TV program and collaborating with manga, which are expected to have a significant impact on society. Compared to previous activities, the "GU-GU-RU" project has taken a different approach to providing information related to radiation and its health effects. The project incorporates the perspective of behavioral economics and takes a proactive approach to the media. Each program has been carried out with its own unique characteristics, such as collaboration with manga. Although the project has just begun, it is difficult to obtain a prompt result in terms of a percentage decrease after the first year [41]. Therefore, continued development of this project is expected in order to achieve the goal by the end of March 2026. Fig. 3 A poster of "Hataraku Saibo" (Cells at Work!), noticing the website providing the latest information related to the thyroid screening examination (left: original Japanese version, right: English version translated by the first author) Fig. 5 A lecturer from Fukushima Medical University is providing information relevant to radiation
Upper-twin-peak quasiperiodic oscillation in x-ray binaries and the energy from tidal circularization of relativistic orbits High frequency quasiperiodic oscillations (HF QPOs) detected in the power spectra of low mass x-ray binaries (LMXBs) could unveil the fingerprints of gravitation in the strong-field regime. Using the energy-momentum relation we calculate the energy a clump of plasma orbiting in the accretion disk releases during circularization of its slightly eccentric relativistic orbit. Following previous works, we highlight the strong tidal force as a mechanism to dissipate such energy. We show that tides acting on the clump are able to reproduce the observed coherence of the upper HF QPO seen in LMXBs with a neutron star (NS). The quantity of energy released by the clump and relativistic boosting might give a modulation amplitude in agreement with that observed in the upper HF QPO. Both the amplitude and coherence of the upper HF QPO in NS LMXBs could allow us to disclose, for the first time, the tidal circularization of relativistic orbits occurring around a neutron star. I. INTRODUCTION The twin-peak high frequency quasiperiodic oscillations (HF QPOs), observed in the power spectra of low mass x-ray binaries (LMXBs) with either a neutron star (NS) or a black hole (BH), could carry information on the matter orbiting in the accretion disk around the compact object. Their central frequencies are typical of the orbital motion close to the compact object. HF QPOs are potential probes to test the laws of gravitation close to a NS or a BH [1]. The first-discovered twin-peak HF QPOs were observed with central frequencies up to ∼ 1130 Hz in a NS LMXB [2]. They were named twin-peak kilohertz QPOs because they often show up in pairs. The HF QPOs observed in BH LMXBs have frequencies of hundreds of hertz [3,4] and show different features from the HF QPOs seen in NS LMXBs.
While in NS LMXBs the central frequency of the peaks is seen to vary, in BH LMXBs the twin-peak HF QPOs are observed at fixed frequencies, showing a cluster at the 3:2 frequency ratio. The clustering has motivated models proposing that HF QPOs might be related to resonance mechanisms of the matter orbiting in the curved space-time [5][6][7][8][9]. The HF QPOs in BH LMXBs have a coherence lower than in NS LMXBs and an amplitude not displaying the characteristic patterns seen in NS LMXBs (e.g. Refs. [10,11]). Low frequency QPOs (< 100 Hz) seen in both NS and BH LMXBs may be related to relativistic frame dragging around the spinning compact object [12], a prediction of general relativity (GR) in the strong field. The effect on the orbiting matter is known as Lense-Thirring precession [13]. Recent works have put forward strong evidence that the low frequency QPO seen in the BH LMXB H1743-322 is produced by frame dragging [14,15]. In the case of NS LMXBs, recent data analysis shows that the predictions of the modeling differ from the data because other factors may affect the modulation mechanism [16]. Other GR effects potentially detectable around the compact object in LMXBs are, e.g., the periastron precession of the orbits [17], occurring on millisecond timescales, as well as the existence of an innermost stable bound orbit (ISBO) [18,19]. The unprecedented opportunity to disclose such phenomena in the imprints left by the HF QPOs has stimulated several works on the modulations that would be produced by matter orbiting around a compact object [20][21][22][23]. Ray-tracing of the photons emitted by an overbright hot-spot orbiting a Kerr black hole shows the signal that a distant observer would see [22]. The light curve produced by the hot-spot is modulated at its orbital period because of relativistic effects.
Increasing the inclination towards an edge-on view, the light curve becomes sharper because of increasing Doppler boosting and gravitational lensing. The power spectrum of the signal from a slightly eccentric orbit (e ∼ 0.1) shows several peaks: the keplerian frequency ν_k, the radial frequency ν_r, the beats ν_k ± ν_r and their harmonics. The authors have also simulated the signal emitted by an arc sheared along the orbit; its power spectrum shows pronounced peaks at ν_k and ν_k ± ν_r and much less power at the harmonics. Ray-tracing presented in Ref. [23] shows the different detectability that HF QPOs would have with current and future x-ray satellites, taking into account also the radial drift of the accreting hot-spot. In the power spectra the peaks and harmonics at ν_r, ν_k and ν_k + ν_r (or 2(ν_k − ν_r)) are detected. Differences between the signal from the orbiting hot-spot and the one from axisymmetric disk oscillations are investigated as well. In a more dynamical framework, in Refs. [24,25] ray-tracing techniques were introduced for clumps of matter stretched by the strong tidal force around a Schwarzschild black hole. Unlike the rigid hot-spot case, the stretching of the clump as it orbits leads to a sudden increase of its luminosity, producing a power law in the power spectrum [25,26]. Moreover, the stretching blurs the signal emitted in the case of a rigid sphere or a circular hot-spot. This implies that some peaks and harmonics are not detected in the power spectrum. In other works the stretching of the clump is simulated as an arc along the orbit, in order to get power spectra with few peaks, as in the observations. In the tidal model the stretching is a natural consequence of tidal deformation of the clump. The simulation using a slightly eccentric orbit (e = 0.1) gives a power spectrum with a power law and two peaks, as in the observations [3].
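The origin of the sideband peaks can be illustrated with a toy phase-modulation model (this is only a sketch, not the ray-tracing of Refs. [22,23]): a carrier at ν_k whose phase is modulated at the radial frequency ν_r, as happens for a slightly eccentric orbit, develops sidebands at ν_k ± ν_r. The frequencies and modulation depth below are illustrative placeholders, not values fitted to any source.

```python
import cmath
import math

# Toy phase-modulated light curve: a Keplerian carrier at nu_k whose phase is
# modulated at the radial frequency nu_r, as for a slightly eccentric orbit.
nu_k, nu_r = 700.0, 350.0   # Hz (illustrative placeholders)
eps = 0.4                   # phase-modulation depth, radians
fs = 8192                   # sampling rate (Hz); 1 s of data, 1 Hz bins
signal = [math.cos(2.0 * math.pi * nu_k * t / fs
                   + eps * math.sin(2.0 * math.pi * nu_r * t / fs))
          for t in range(fs)]

def dft_mag(freq_hz):
    """Magnitude of the discrete Fourier transform at a single frequency."""
    return abs(sum(s * cmath.exp(-2j * math.pi * freq_hz * t / fs)
                   for t, s in enumerate(signal)))

for f, label in [(nu_k, "nu_k"), (nu_k + nu_r, "nu_k + nu_r"),
                 (nu_k - nu_r, "nu_k - nu_r"), (500.0, "off-peak")]:
    print(f"{label:11s} ({f:6.1f} Hz): |DFT| = {dft_mag(f):9.1f}")
```

The carrier at ν_k is strongest, the sidebands at ν_k ± ν_r carry a power set by the modulation depth, and an off-peak frequency shows essentially nothing, mirroring the peak structure seen in the simulated power spectra.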
The lower peak in frequency corresponds to ν_k, the upper one to the beat ν_k + ν_r [26]. Tidal disruption events have already been recognized in the case of stars disrupted by supermassive black holes (e.g. Refs. [27][28][29][30]), and efforts to model such events in detail are moving forward (e.g. Refs. [31][32][33][34]). QPOs have been detected in the energy flux of some tidal disruption events [35,36]. Tidal interaction is a mechanism that can provide significant amounts of energy. In our neighborhood, some moons display geological activity whose energy is pumped by the tidal force of the parent planet: the strongest volcanism, seen in Jupiter's moon Io [37], and possibly the discovered ocean [38,39] and geothermal activity of Saturn's moon Enceladus [40,41]. Thus, the strong tidal force exerted by the compact object in LMXBs on clumps of plasma orbiting in the accretion disk may be a valid ingredient to model the main features observed in twin-peak HF QPOs. A planet/moon orbiting the central object on an eccentric orbit dissipates its orbital energy because of tides and its orbit becomes circular [42]. In Ref. [43] it has been shown that the orbit of a low-mass satellite around a Schwarzschild black hole circularizes and shrinks because of tides; energy is transferred from the orbit into internal energy of the satellite. The energy emission mechanism that would turn the released orbital energy into electromagnetic radiation has been investigated in Refs. [25,44]. The authors show that it may be x-ray radiation from synchrotron mechanisms if the clump of plasma is permeated by a magnetic field. In Ref. [45] the authors conclude that magnetically confined massive clumps of plasma might form in the inner part of the accretion disk. In Ref. [46] it is shown that the hard x-ray radiation, over 10-100 millisecond time intervals, observed in two x-ray binaries is better interpreted through cyclo-synchrotron self-Compton mechanisms. The calculations in Refs.
[25,44] show that during tidal stretching the magnetic field could largely increase. Moreover, gravitational energy extracted through tides might go into kinetic energy of the electrons in the plasma, since the clump is rapidly expanding into a pole. This mechanism could provide relativistic electrons emitting synchrotron radiation. Magnetohydrodynamics simulations are required to know how this mechanism actually works. Recent numerical simulations of the magnetic field in a star disrupted by tides [34] show a magnetic field largely increasing, as from the calculations in Refs. [25,44]. The emission of radiation because of the orbital energy released during tidal circularization of the orbit would thus cause an overbrightness of the clump with respect to the background radiation from the disk. In Ref. [47] it has been shown that the timing law of the azimuth phase φ(t) on a slightly eccentric relativistic orbit produces multiple peaks in the power spectrum: the keplerian frequency ν_k and the beats ν_k ± ν_r. The beats ν_k ± ν_r are produced because of the eccentricity of the orbit. The orbiting body has a different orbital speed at periastron and apastron passage, happening at the frequency ν_r ≠ ν_k in the curved space-time. This introduces a modulation in the phase φ(t) at the relativistic radial frequency ν_r. In the case of a circular orbit (and in any case in a flat space-time) only the peak at ν_k is produced. As already mentioned above, the timing law φ(t) turns into a modulated observable light curve because of relativistic effects on the emitted photons [22,23,25]. The amplitude of the beats ν_k ± ν_r thus originates from the orbital energy released during tidal circularization of the orbit. Moreover, the coherence of the beats is related to the time-scale over which the circularization takes place, since once the orbit is circular or quasi-circular the beats ν_k ± ν_r fade and the emitted energy is modulated only at the keplerian frequency ν_k.
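The two frequencies entering the beats can be sketched numerically. The snippet below assumes the standard circular-orbit epicyclic formulas in the Schwarzschild metric, i.e. the small-eccentricity limit rather than the exact eccentric-orbit frequencies of Ref. [47], for the 2 M_⊙ neutron star used later in the text:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_frequencies(mass_kg, r_over_rg):
    """Keplerian (azimuthal) and radial epicyclic frequencies, in Hz, for a
    circular (small-eccentricity) orbit at r = r_over_rg * r_g around a
    non-rotating compact object."""
    rg = G * mass_kg / C**2                          # gravitational radius
    r = r_over_rg * rg
    nu_k = math.sqrt(G * mass_kg / r**3) / (2.0 * math.pi)
    nu_r = nu_k * math.sqrt(1.0 - 6.0 / r_over_rg)   # vanishes at r = 6 r_g
    return nu_k, nu_r

M = 2.0 * M_SUN    # neutron-star mass used in the text
for x in (13.0, 8.0, 6.5):
    nu_k, nu_r = schwarzschild_frequencies(M, x)
    print(f"r = {x:4.1f} r_g:  nu_k = {nu_k:6.1f} Hz,  nu_r = {nu_r:6.1f} Hz,  "
          f"nu_k + nu_r = {nu_k + nu_r:6.1f} Hz")
```

The radial frequency is always below the Keplerian one and goes to zero at r = 6 r_g, which is why the beats appear only for eccentric orbits in the curved space-time and fade at circularization.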
Most efforts to interpret the twin-peak HF QPOs have focused on the identification of their central frequencies with those of the orbital modes in the curved space-time. Some of the proposed models link the upper HF QPO of the twin peaks to the keplerian modulation ν_k produced by a clump of plasma orbiting in the accretion disk, while other models link the lower HF QPO to ν_k [20][21][22][48][49][50]. On the other hand, attempting to interpret the amplitude and coherence of HF QPOs might disclose useful information on their nature as well. Refs. [51][52][53][54] report both the amplitude and coherence of the twin-peak HF QPOs observed in NS LMXBs. The behavior of the amplitude as a function of the central frequency of the peaks shows characteristic patterns in atoll NS LMXBs [55]. The amplitude of the upper HF QPO displays a decrease with increasing central frequency of the peak, whereas the amplitude of the lower HF QPO shows an increase and then a decrease. The coherence Q of the lower HF QPO (Q = ν/∆ν, with ν the central frequency and ∆ν the full width at half maximum of the peak) shows a characteristic pattern too: Q as a function of ν increases and then drops abruptly [52][53][54]. In Ref. [52] it has been underlined that the abrupt drop of Q, seen in several atoll NS LMXBs, could be a signature of the oscillation approaching the ISBO predicted by GR. This relevant issue was subsequently discussed with extensive data analysis in Refs. [53,54]. Although the excursion of Q of the lower HF QPO is more than an order of magnitude, the Q of the upper HF QPO shows an almost flat trend over a large range of frequencies, mostly remaining of the order of Q ∼ 10. In a previous work (Ref. [56], hereafter GC15) we proposed that the amplitude and coherence of the lower HF QPO might originate from the energy released by a clump of plasma spiraling to inner orbits because of the work done by the tidal force, dissipating the orbital energy.
In this paper we aim to investigate the amplitude and coherence of the upper HF QPO [52]. Here it is proposed that the upper HF QPO might originate from the energy released during tidal circularization of the clump's orbit. In Ref. [43] it has been shown that the orbit of a clump of matter orbiting a Schwarzschild black hole circularizes and shrinks because of tides. The release of orbital energy during circularization of the orbit might provide the overbrightness of the clump required in order to produce detectable modulations [22,25]. The emitted photons are modulated at ν_k and ν_k ± ν_r in the power spectrum [22,47]. The beats ν_k ± ν_r should show up only in the phase of tidal circularization of the orbit, since once the orbit gets circular ν_k ± ν_r fade and the emitted radiation is modulated only at ν_k. Tidal disruption simulations show an upper HF QPO corresponding to the beat ν_k + ν_r [26]. Therefore, we investigate whether both the amplitude and coherence of the upper HF QPO in the observations [52] can be reproduced by the energy released during tidal circularization of relativistic orbits. The paper is organized as follows. In Section II we recall the main arguments described in GC15 about the tidal load on clumps of plasma orbiting in the accretion disk. In Section III we explore the idea presented in this manuscript, i.e. that the amplitude and coherence of the upper HF QPO seen in NS LMXBs could be related to the energy released during tidal circularization of relativistic orbits. We estimate the energy released by an orbiting clump of plasma when its slightly eccentric orbit gets circular. We use the energy-momentum relation in the Schwarzschild metric since it is the relativistic equation that embeds all the contributions to the total energy of an orbiting clump of matter. The time-scale of tidal circularization of the orbit is calculated. Afterwards, we calculate the coherence Q that the produced beat ν_k + ν_r would have.
We compare it to the upper HF QPO coherence pattern seen in the observations (e.g. Fig. 2 in Ref. [52]). In Section IV we attempt to tie the orbital energy released 3 during circularization of the orbit to the observable fraction of energy modulated by Doppler boosting. We follow the detailed results in Ref. [22] to get the observable amplitude of the beat ν k + ν r . In Section V we discuss the results in this paper in light of other theoretical and observational results. Section VI summarizes the conclusions. II. ORBITING CLUMPS OF PLASMA AND TIDAL LOAD Motivated by the results from tidal disruption of clumps orbiting a Schwarzschild black hole [24,25], reproducing power spectra much alike to the observed ones [26], in GC15 we have estimated the energy coming from the tidal disruption of a clump of plasma in the accretion disk around LMXBs. Magnetohydrodynamics simulations show that the inner part of the accretion disk is highly turbulent [58]. In Ref. [59] the authors reported the discover of large structures in the accretion disk of a x-ray binary. Propagating accretion rate fluctuations in the disk are modeled [60, 61] to reproduce the aperiodic variability observed in BH LMXBs. Thus, it is hard thinking to a smooth accretion disk, but rather it may be characterized by inhomogeneities propagating throughout it. Note that in Ref. [45] is shown that magnetically confined massive clumps of plasma might form in the inner part of the accretion disk. In light of this, in GC15 we explored the idea of treating a clump of plasma as characterized by some internal force keeping the clump together (e.g. electrochemical bounds and/or magnetic forces). In this section we recall the main arguments in GC15. A spherical clump of radius R, mass µ and density ρ undergoes a tidal force (between two opposite spherical caps of the clump, at r − R and r + R; see also GC15) where µ ′ = ρV ′ is the mass of the spherical cap, of height, say, one tenth of the radius, h = R/10. 
The volume of the (1) is the gravitational effective potential in the Schwarzschild metric (7). In the case of a solid-state clump of matter, the clump is kept together by an internal force (electrochemical bounds) characterized by the ultimate tensile strength σ, i.e. the internal force per unit area. The tidal force has to be weaker than internal forces, F T ≤ 2πRhσ. From this inequality we can get some order of magnitude on the HF QPO. Here our main purpose is not the spectral energy distribution (i.e. how the orbital energy then is emitted), which depends on the exact energy emission mechanism (see Sec. IV for a discussion on this point). maximum radius R max set by tides 4 where we wrote the density ρ = c 2 s /Y , Y is the Young's modulus of the material, c s the speed of sound in it. As mentioned above, in Section IV of GC15 we explored the idea of treating clumps of plasma in the accretion disk as characterized by some internal force per unit area σ (electrochemical bounds and/or magnetic forces). The speed of sound in the plasma is [63] where γ ∼ 5/3 is the adiabatic index, Z the charge state (Z = 1 for a hot plasma), m i the ion hydrogen mass, k the Boltzmann's constant [63]. In CG15 we pointed out that clumps with R = R max would not probably form at all because of tides. The tidal load (the tidal force (1) per unit area) has to be n times smaller than σ, i.e. Note that the Rmax calculated in GC15 in the case of a solidstate clump agrees to the dimensions derived in Ref. [62] of a bar falling into a gravitational field. In GC15 we constrained n = 5 as upper limit, giving R ∼ 3000 m. A larger n implies a clump with radius R emitting gravitational energy lower than that observed in HF QPOs (≈ 10 35 − 10 36 erg/s). On the other hand, a smaller n gives larger radii R, close to R max . As mentioned above, such clumps would not probably form/survive at all because of tides. Fig. 
1 shows the radius R = R_max/√5 set by tides (from equation (2)) as a function of the periastron r_p of the orbit, in the case σ_T = σ/5. In (2) the ratio σ/Y was constrained in GC15 (equation (9) there) and is σ/Y = 300 in atoll sources (σ/Y = 70 in Z-sources; see Section VII B in GC15). The speed of sound c_s is from (3). In Fig. 1 we see that, as the tidal force strengthens towards the inner regions, R decreases as expected. However, getting closer to the ISBO (r ∼ 5.6 r_g) R increases and then drops. The slight increase is caused by the weakening of the tidal force close to the ISBO: there the gravitational potential (7) flattens and, therefore, the tidal force weakens. This can be seen in Fig. 2, which shows the tidal load σ_T (4), in pascals, on a clump of plasma R = 3000 m big, for several orbits of different periastron. Over each orbit (each segment in the figure) σ_T changes from the periastron to the apastron of the orbit. Its overall behavior is to increase and then drop close to the ISBO because of the flattening of the potential. The flattening of the minimum of the potential V_eff is a feature of GR [18] and causes the decrease of the difference of potential energy between close orbits reported in GC15. The drop of R in Fig. 1 close to the ISBO is caused by the drop at the ISBO (inner edge of the accretion disk) of the speed of sound in the plasma (see equation (2)). The cusp seen at r_p ∼ 6.4 r_g corresponds to the orbit at which the tidal force is almost equal at periastron and apastron. Orbits with bigger radii have a tidal force stronger at periastron, as expected; therefore we calculate the radius R of the clump set by tides at periastron. However, orbits with r smaller than r ∼ 6.4 r_g have a tidal force stronger at apastron, because of the flattening of the potential, as can be seen in Fig. 2. Thus, for these we calculate the radius R set by tides at the apastron of the orbit.
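Equation (3) can be evaluated directly. In the sketch below the inner-disk temperature is an assumed illustrative value (not given in the text), chosen only to show that a temperature of order 10^7 K reproduces the c_s ∼ 4 × 10^7 cm/s quoted for the disk plasma:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_P = 1.6726e-27     # hydrogen ion (proton) mass, kg

def plasma_sound_speed(T_kelvin, gamma=5.0/3.0, Z=1.0):
    """Ion sound speed c_s = sqrt(gamma * Z * k * T / m_i) of equation (3),
    returned in m/s."""
    return math.sqrt(gamma * Z * K_B * T_kelvin / M_P)

# Assumed inner-disk temperature of ~1.2e7 K, an illustrative placeholder.
cs = plasma_sound_speed(1.2e7)
print(f"c_s ~ {cs:.2e} m/s = {cs * 100:.1e} cm/s")
```

With γ = 5/3 and Z = 1 this returns roughly 4 × 10^5 m/s, i.e. the ∼ 4 × 10^7 cm/s used throughout the estimates for atoll sources.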
The patterns in both figures are for orbits of eccentricity e = 0.1, for a neutron star of 2 M_⊙ and an accretion rate of Ṁ ∼ 7 × 10^16 g/s, giving the luminosity observed in atoll sources, i.e. L ∼ 0.07 L_Edd ∼ 10^37 erg/s ([64], where L_Edd ∼ 2.5 × 10^38 erg/s is the Eddington luminosity for a ∼ 2 M_⊙ neutron star [63]). This accretion rate gives a density of the clump of plasma in the accretion disk of ρ ∼ 1 g/cm^3 and a speed of sound in it of c_s ∼ 4 × 10^7 cm/s [63]. III. THE ENERGY AND TIME-SCALE FROM TIDAL CIRCULARIZATION OF RELATIVISTIC ORBITS The total energy of an orbiting test-particle of mass µ is enclosed in the energy-momentum relation g_αβ p^α p^β = −µ^2 c^2, (5) with g_αβ the metric tensor and p^α the contravariant four-momentum of the test particle [18]. Hereafter we use geometric units (G = c = 1), unless differently specified. In the Schwarzschild metric, substituting p^α = µ dx^α/dτ (τ proper time; see e.g. [57]) and expanding (5), we get Ẽ^2 = (dr/dτ)^2 + (1 − 2m/r)(1 + L̃^2/r^2), (6) where m is the mass of the compact object, Ẽ and L̃ the total energy and angular momentum per unit mass µ of the test-particle, and r the radial coordinate. (The mass m in geometric units is equal to the gravitational radius of the compact object, r_g = GM/c^2, where M is the mass of the compact object in international system units, G the gravitational constant and c the speed of light; for a 2 M_⊙ neutron star r_g ∼ 3 km.) Equation (6) (whose square root, multiplied by µc^2, we can write as E = µ_rel c^2, with µ_rel the relativistic mass) tells us the energy contributions to the total energy Ẽ. The first term is the energy coming from the radial motion, i.e. the motion from periastron to apastron and back to periastron. The second term is the effective gravitational potential [57], V_eff^2 = (1 − 2m/r)(1 + L̃^2/r^2), (7) with contributions from the rest-mass energy (per unit mass µ), the gravitational potential and the centrifugal potential. In Ref.
[57] exact parametrizations are reported for the total (or orbital) specific energy Ẽ and specific angular momentum L̃ of a generic orbit of semi-latus rectum p and eccentricity e: Ẽ^2 = (p − 2 − 2e)(p − 2 + 2e)/[p(p − 3 − e^2)] (8) and L̃^2 = p^2 m^2/(p − 3 − e^2); (9) p is linked to the periastron r_p of the orbit through r_p = pm/(1 + e). The energy (in international system units) that a clump of matter of mass µ would release, if its orbit of eccentricity e is circularized, is, from (8), ǫ = [Ẽ(p, e) − Ẽ(p_c, 0)]µc^2, (10) where Ẽ(p_c, 0) is the specific energy of the final circular orbit at the periastron radius, p_c = p/(1 + e). We aim to compare the released energy ǫ to the energy (amplitude) carried by the upper HF QPO observed in NS LMXBs (Fig. 3 in Ref. [52]). The upper HF QPO of the twin peaks corresponds to the beat ν_k + ν_r in the power spectrum from numerical simulations [26,47]. This beat is caused by the eccentricity of the orbit and originates only in the phase of tidal circularization of the orbit, when energy is released and modulated at ν_k + ν_r; once the orbit gets circular, ν_k + ν_r fades. We calculate the relativistic keplerian ν_k and radial ν_r frequencies as in Ref. [47], for an orbit with eccentricity e = 0.1. Fig. 3 shows the orbital energy released ǫ (10) to circularize the orbit of initial e = 0.1 as a function of the frequency of the beat ν_k + ν_r, i.e. for clumps orbiting at different orbital radii. The range of orbital radii is ∼ 6 r_g to 13 r_g. At each orbital radius the clumps have R as in Fig. 1. The energy released corresponds to ∼ 0.3% µc^2. We see that the energy released when, e.g., ν_k + ν_r ∼ 520 Hz (r_p ∼ 13 r_g) is higher than that released by a clump orbiting at r_p ∼ 7 r_g (ν_k + ν_r ∼ 1100 Hz). Close to the ISBO it drops. With the amount of orbital energy released by the clump to circularize its orbit we can investigate whether the upper HF QPO seen in the observations could actually originate from tidal circularization of relativistic orbits. We calculate the time-scale over which the circularization of the orbits by tides would take place.
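A minimal numerical sketch of this bookkeeping follows. It assumes, for the sketch, that the final circular orbit sits at the periastron radius r_p, uses the standard Schwarzschild specific-energy parametrization of Ref. [57], and approximates ν_k and ν_r with the circular-orbit epicyclic formulas rather than the exact eccentric-orbit frequencies of Ref. [47]:

```python
import math

def specific_energy(p, e):
    """Schwarzschild specific orbital energy E(p, e) (Ref. [57]); p is the
    semi-latus rectum in units of the mass m, e the eccentricity."""
    return math.sqrt((p - 2 - 2*e) * (p - 2 + 2*e) / (p * (p - 3 - e*e)))

def released_fraction(p, e):
    """Fraction of mu*c^2 released if the orbit circularizes at its periastron
    radius r_p = p m/(1 + e), i.e. at p_c = p/(1 + e) (an assumed reading)."""
    return specific_energy(p, e) - specific_energy(p / (1 + e), 0.0)

# Sanity check: the innermost stable circular orbit (p = 6, e = 0) has the
# classic specific energy sqrt(8/9) ~ 0.9428.
print(specific_energy(6.0, 0.0))

# Energy released by an e = 0.1 orbit with p = 8: a few tenths of a percent
# of mu c^2, the order quoted in the text.
print(f"released ~ {100 * released_fraction(8.0, 0.1):.2f}% of mu c^2")

# Coherence sketch: Q = (nu_k + nu_r) * t' with the t' ~ 0.01 s time-scale
# quoted in the text, for a 2 M_sun star at r ~ 8 r_g.
G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30
r = 8.0 * G * (2.0 * M_SUN) / C**2          # 8 r_g in metres
nu_k = math.sqrt(G * (2.0 * M_SUN) / r**3) / (2.0 * math.pi)
nu_r = nu_k * math.sqrt(1.0 - 6.0 / 8.0)    # epicyclic approximation
Q = (nu_k + nu_r) * 0.01
print(f"Q ~ {Q:.1f}")                       # of the order of 10, as in Fig. 4
```

Under these assumptions the released fraction comes out at a few 0.1% of µc^2 and Q of order 10, consistent with the orders of magnitude given in the text.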
Then we compare the derived coherence of ν_k + ν_r to the coherence behavior of the upper HF QPO observed in several atoll NS LMXBs (Fig. 2 in Ref. [52]). The tidal force removes energy from the orbit and loads it onto the clump (e.g. Refs. [43,65]). We aim to estimate the energy loaded by tides on the clump over one radial cycle, from periastron to apastron and back to periastron (see Fig. 2). To get an order of magnitude, we substitute into (4) the parametrized radius r(χ) = pm/(1 + e cos χ) as a function of the radial phase χ [57]. The tidal load (4) as a function of χ, which is an energy per unit volume, is integrated over one radial cycle in χ, from periastron (χ = 0) to apastron (χ = π) and back to periastron (χ = 2π). We multiply by the volume of the clump to get the energy loaded by tides per periastron passage. For a clump with R as in Fig. 1 and a density of the plasma typical of an atoll source (ρ ∼ 1 g/cm^3), the estimated amount of energy is of the order of E_tide ∼ 10^35 erg. (Note that the order of magnitude obtained, E_tide ∼ 0.1% µc^2, agrees with that calculated with other formalisms in the case of a star disrupted by a supermassive black hole [31].) We divide ǫ from (10) by E_tide (as a function of the orbital radius) to get the number of periastron passages N needed to circularize the orbit. The time it takes to circularize the orbit then is t' = N/ν_r, equal to t' ∼ 0.01 s at r ∼ 8 r_g. (This time-scale is in agreement with that from the calculations in Ref. [43].) The coherence of the beat ν_k + ν_r is Q = (ν_k + ν_r)/∆ν = (ν_k + ν_r)t'. Fig. 4 shows the coherence Q obtained from our calculations as a function of the frequency ν_k + ν_r. Like in Fig. 3, the range of frequency corresponds to a range of orbital radii of ∼ 13 − 5.6 r_g. The radius R of the clump is shown in Fig. 1. The coherence Q is mostly constant and of the order of 10. Both its value and trend are much alike the coherence of the upper HF QPO observed in NS LMXBs, Fig.
2 of Ref. [52] (filled star symbols). In Fig. 2 of Ref. [52] Q is of the order of Q ∼ 10 for most of the sources. We see that the Q calculated here strongly depends on the radius R of the clump, Q ∝ R^−2. It may be worth emphasizing that the R in Fig. 1 is derived from the calculations in Section II, and the way to derive it was described in Sections III and IV of GC15. We are not assuming an R to match the Q from the observations; its value is derived from calculations. This may be a significant result within this framework. Indeed, from the calculated R this approximate modeling is able to give, for the first time, both Q and the amplitude of the upper HF QPO (see Sec. IV) in agreement with those from the observations [52]. IV. TYING THE RELEASED ORBITAL ENERGY TO THE OBSERVABLE AMPLITUDE OF THE MODULATION The orbital energy released during tidal circularization of the orbit (eq. 10) gives time-scales of dissipation in agreement with the coherence Q of the upper HF QPO detected in atoll NS LMXBs [52]. However, this released orbital energy has to be somehow converted to electromagnetic radiation in order for the upper HF QPO to be detected. Moreover, only a fraction of this radiation is modulated by Doppler boosting and detectable as HF QPOs [22]. In this section we discuss how the orbital energy extracted by tides would turn into radiation emitted by the clump. We also estimate the amount of energy that would be modulated by Doppler boosting and detected as a QPO, following the results in Ref. [22]. From the energy emission spectra of LMXBs we know that HF QPOs are observed in hard x-rays; their amplitude keeps increasing towards hard x-rays (> 5 keV [66]). This means that thermal emission from the disk alone (soft x-rays, ∼ 1 keV) cannot account for their nature. A corona of hot electrons and/or a boundary layer contribute to the energy emission spectra observed in LMXBs (see e.g. Refs. [67,68]).
These components produce the hard x-ray spectrum seen in LMXBs. The soft x-ray photons from the disk are inverse-Compton scattered to higher energy by the corona and/or the boundary layer. There is evidence that the same mechanism could amplify the amplitude of the HF QPOs at hard x-rays [69,70]. That is, the HF QPOs could be produced in the disk and then amplified to hard x-rays by the corona and/or the boundary layer. It was recently suggested that the occurrence of the lower HF QPO could be due to some resonance between the comptonising medium and the accretion disk and/or the neutron star surface [71]. On the other hand, in Ref. [72] it is shown, by means of Monte Carlo ray-tracing, that multiple scattering of soft photons from the disk in a corona of hot electrons would smooth the oscillation that originates in the disk. In Ref. [72] it is suggested that it is unlikely that the same mechanism would produce HF QPOs at hard x-rays, since the emerging hard x-rays suffered more scattering than soft x-rays, so the oscillation has a low amplitude at high energy bands (see Fig. 5 in Ref. [72]). It is also suggested that there may be in the disk a hot-spot already emitting hard x-ray photons, such that they are only moderately scattered by the surrounding corona. In Ref. [46] the authors studied the energy spectra of two x-ray binaries over 10−100 ms time-scales. They concluded that the hard x-ray radiation is better explained through cyclo-synchrotron self-Compton mechanisms. Thus, if clumps of plasma in the accretion disk are permeated by some magnetic field, tidal stretching of the clump may provide a mechanism to produce non-thermal electromagnetic radiation. The orbital energy extracted through tides (e.g. Fig. 3) is transferred into internal energy of the clump (e.g. Refs. [43,65]). In Refs. [25,44] it is shown that during tidal stretching the magnetic field could largely increase.
The extracted orbital energy could go into kinetic energy of the electrons in the plasma, since the clump is rapidly expanding into a pole. This mechanism could provide relativistic electrons winding around the magnetic field of the clump and producing synchrotron radiation [25,44]. Synchrotron radiation by compact hot-spots has already been proposed as a mechanism to produce the hard x-ray spectrum seen in HF QPOs [73]. It is clear that full magnetohydrodynamics simulations are required to see how the clump disrupted by tides would emit its energy. On the other hand, we have some clues which can be used to estimate the magnetic field the clump would have and to check whether it is consistent with that measured in LMXBs (B ∼ 10^8 − 10^13 G [74,75]). In Section IV of GC15 we explored the idea of treating the clump of plasma as characterized by some internal force keeping it together, e.g. electrochemical bonds and/or a magnetic force. In Ref. [45] it is pointed out that magnetically confined massive clumps of plasma might form in the inner part of the accretion disk. In equation (9) of GC15 we calculated the value of the ratio σ/Y, where σ is the internal force per unit area and Y = ρc_s^2 is the Young's modulus of the material. As in solid-state materials, we can think of σ/Y as a hardness of the magnetized clump of plasma. In GC15 we argued that the mechanical binding energy E_b in (11), stored in the clump and keeping it together, should be at least of the same order as that observed in HF QPOs, if the HF QPOs are produced by the tidal disruption of the clump. The amplitude of HF QPOs is some percent of the luminosity of the source, i.e. L_QPO ∼ 10^35 − 10^36 erg/s in atoll sources. Following the results in Section III, this energy is emitted over a time scale of the order of ∼ 0.01 s, thus the energy of the QPO is E_QPO ∼ 10^33 − 10^34 erg. However, this observed energy is only some percent of the total energy emitted.
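The energy bookkeeping in this passage, and the magnetic-field estimate the section develops from it, can be sketched numerically. All input values are the ones quoted in the text for an atoll source; the ∼1% modulated fraction is from Ref. [22]:

```python
import math

MU0 = 4.0e-7 * math.pi    # vacuum magnetic permeability, H/m

# Energy bookkeeping with the values quoted in the text for an atoll source.
L_qpo = 1.0e36            # erg/s, upper end of the HF QPO luminosity
t_prime = 0.01            # s, dissipation time-scale from Section III
modulated_fraction = 0.01 # ~1% of the emission is Doppler-modulated (Ref. [22])

E_qpo = L_qpo * t_prime               # ~1e34 erg seen in the modulation
E_b = E_qpo / modulated_fraction      # ~1e36 erg of total emitted energy

# Magnetic field from equating sigma = 300 * rho * c_s^2 (sigma/Y = 300) to
# the magnetic pressure P_m = B^2 / (2 mu_0); rho and c_s from Section II.
rho = 1000.0              # kg/m^3  (1 g/cm^3)
c_s = 4.0e5               # m/s     (4e7 cm/s)
sigma = 300.0 * rho * c_s**2          # Pa
B_gauss = math.sqrt(2.0 * MU0 * sigma) * 1.0e4   # tesla -> gauss
print(f"E_b ~ {E_b:.0e} erg,  B ~ {B_gauss:.1e} G")
```

With these rounded inputs the field comes out at a few 10^9 G, the same order as the B ∼ 5 × 10^9 G quoted in the text; the residual difference reflects the rounding of ρ and c_s.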
It is the energy modulated by Doppler boosting. For a hot-spot with an overbrightness twice the background disk the modulated energy is only of the order of 1% [22]. Thus, the total energy emitted would be E_b ∼ 10^36 erg. Substituting this E_b in equation (11) we get σ/Y ∼ 300 for an atoll source with luminosity L ∼ 10^37 erg/s (see also Sec. IV and Sec. VII B in GC15). We can estimate the magnetic field of the clump of plasma. Indeed, if the E_b above is the magnetic binding energy keeping the clump together, then σ = 300Y = 300ρc_s^2 equals the magnetic pressure P_m = B^2/(2μ_0), where B is the magnetic field and μ_0 = 4π × 10^−7 H/m is the magnetic permeability. Equating P_m to σ (in Pascal) we derive a magnetic field permeating the clump of B ∼ 5 × 10^9 G. In the case of a Z-source, whose ratio was estimated in GC15 as σ/Y ∼ 70, inserting ρ and c_s for a Z-source with luminosity L ∼ 2 × 10^38 erg/s we get B ∼ 10^10 G, a larger value than for atoll sources, as measured [74]. Note however that in Fig. 3 of Ref. [74] atoll sources are located in the region around B ∼ 5 × 10^8 G, while Z-sources in that with B ∼ 5 × 10^9 G. The discrepancy between those values and the ones calculated here may be due to the crude estimation made here. For example, we are using the vacuum magnetic permeability μ_0 = 4π × 10^−7 H/m, usually also adopted in plasmas; however, it may be different in the plasma we are dealing with. On the other hand, tidal stretching simulations of the magnetic field in a star [34] show that the magnetic field of the squeezed star strengthens by at least a factor of 10. Thus, if HF QPOs are related to the energy emitted by a magnetic clump of plasma stretched by tides, the estimation of B shown here could give a B actually larger than that of the host LMXB. Although this result is interesting, giving a B consistent with that measured in NS LMXBs (B ∼ 10^8 − 10^13 G [74,75]), we stress that the issues in this section need close attention in dedicated future works.
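The order of magnitude of B quoted above can be checked in a few lines. The relation P_m = B^2/(2μ_0) is standard; the value σ ≈ 10^17 Pa used below is an assumption consistent with the B ∼ 5 × 10^9 G derived above (the actual ρ and c_s enter through GC15 and are not restated here).

```python
import math

MU0 = 4e-7 * math.pi  # vacuum magnetic permeability, H/m

def b_field_from_pressure(p_m_pascal):
    """Magnetic field (Gauss) whose pressure B^2/(2*mu0) equals p_m_pascal."""
    b_tesla = math.sqrt(2.0 * MU0 * p_m_pascal)
    return b_tesla * 1e4  # 1 T = 10^4 G

# sigma = 300 * rho * c_s^2 for an atoll source; ~1e17 Pa is an assumed value
# consistent with the B ~ 5e9 G estimate in the text (rho, c_s come from GC15).
sigma = 1.0e17  # Pa
print(b_field_from_pressure(sigma))  # ~5e9 G
```

With the larger ρc_s^2 appropriate for a Z-source, the same relation yields the B ∼ 10^10 G quoted in the text.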
A. Amplitude of the detectable modulation

Numerical simulations of a hot-spot orbiting around a Kerr black hole and emitting photons show modulations detected as HF QPOs if the hot-spot has some overbrightness with respect to the disk [22]. An overbrightness twice the background disk can give HF QPOs with an amplitude of the order of ∼ 1% the luminosity of the hot-spot. The light curve of the orbiting hot-spot is modulated at the orbital period because of Doppler boosting of the emitted photons, such as relativistic beaming, and gravitational lensing [22]. These relativistic effects magnify the intensity of the electromagnetic radiation emitted. In the case of relativistic beaming, the magnification depends on the velocity of the hot-spot with respect to the observer (see e.g. Ref. [76]), I_ν(o) = D^p I_ν(e), where I_ν(o) and I_ν(e) are the observed and emitted specific intensity I_ν, p = 3 + α with α the energy spectral index (in atoll NS LMXBs α ≥ 1, see e.g. Ref. [77]), and D is the Doppler factor D = 1/[γ(1 − β cos θ)] (13), where γ = 1/√(1 − β²) is the Lorentz factor and β = v/c, with v the orbital speed of the clump and c the speed of light. Because we are investigating an interval of orbital radii ranging r ∼ 6−13 r_g it is worth checking the relative Doppler boosting at 6 r_g and 13 r_g. The Lorentz factor γ and the ratio β at these two radii are (β, γ)_13r_g = (0.23745, 1.02944) and (β, γ)_6r_g = (0.35482, 1.06959). For an edge-on view (θ = 0), inserting these numbers in (13), the relative increment of D^4 is 67% (for an inclination of, e.g., θ = 50° the relative magnification drops to 24%). Thus, this relative increment affects by 0.67 I_ν(o) any intrinsic trend of I_ν(o) over r ∼ 6 − 13 r_g. In Fig. 3 the energy that could be released and possibly converted into radiation is in the interval of 2 − 8 × 10^35 erg, for an atoll source with a luminosity of L_atoll ∼ 10^37 erg/s. Over the time-scale the energy is released, ∼ 0.01 s, the background energy of the source is E_atoll ∼ 10^35 erg. Therefore, we may have a clump of plasma a factor 8 brighter than the background radiation. Following the results in Ref. [22], in which an overbrightness of the hot-spot by a factor of 2 yields modulations of ∼ 1%, we may have modulations up to ∼ 4%, i.e. of the order of ∼ 10^33 − 3 × 10^34 erg. Thus, the amount of orbital energy released by the clump during tidal circularization of the orbit might give modulations that could be detected at ν_k + ν_r in the power spectrum. The mechanism to produce energy proposed here might justify how the orbiting hot-spot would have the overbrightness claimed in other works, in order to produce detectable HF QPOs [21][22][23]. We divide the modulated fraction of energy by the time-scale over which the tidal circularization of the orbit takes place, i.e. the time-scale over which the energy is emitted, as a function of the orbital radius. Fig. 5 shows the amplitude the beat ν_k + ν_r would have in percent of the luminosity of the source, ∼ 10^37 erg/s. Both the value and the behavior in the figure are similar to the upper HF QPO amplitude seen in the observations (Fig. 3 in Ref. [52] (filled stars)), where it is seen to decrease from ∼ 10 − 15% to 1% over the range of frequencies ∼ 500 − 1200 Hz.

FIG. 5. Amplitude the beat ν_k + ν_r would have in the observations after the energy release by tidal circularization of relativistic orbits (Fig. 3). The amplitude is in percent of the luminosity of an atoll NS LMXB (∼ 10^37 erg/s) and is plotted as a function of the frequency of the beat ν_k + ν_r. Such behavior is typical of the amplitude of the upper HF QPO. For a comparison with the data see Fig. 3 in Ref. [52].

V. DISCUSSION

Several models have been proposed in order to identify the central frequency of the twin-peak HF QPOs with those of the orbital motion around the compact object [20-23, 49, 50, 78].
Some models link the keplerian frequency ν_k of the orbiting matter to the upper peak of the twin-peak HF QPOs, others link ν_k to the lower peak [20,21,49,50]. In Ref. [26] numerical simulations show that tidal disruption of clumps of matter [25] produces power spectra much like the observed ones. The power law and twin peaks seen in the observations are reproduced. The upper peak corresponds to ν_k + ν_r, the lower one to ν_k. The light curve of an orbiting clump/hot-spot is shaped by the timing law of its azimuthal phase φ(t). The photons emitted by the clump are cyclically Doppler boosted by relativistic effects, and when this happens is dictated by the timing law φ(t). Because in a curved space-time ν_r ≠ ν_k for non-circular orbits, the different orbital speed at periastron and apastron passage introduces an oscillating term in φ(t) at the frequency ν_r. In a flat space-time φ(t) displays this oscillating term as well, but in that case ν_r = ν_k and in the power spectrum of φ(t) only the peak at ν_k is seen. In a curved space-time the beats ν_k ± ν_r and ν_k are seen [47]. The beats at ν_k ± ν_r and ν_k are as much a characteristic of the orbital motion as ν_k is in the case of a flat space-time. Therefore, if orbital motion in a curved space-time is producing the twin-peak HF QPOs, it is more natural to link the upper HF QPO to ν_k + ν_r. This is also what numerical simulations show [26,47]. It is interesting to note that in the BH LMXB XTE J1550-564 evidence of a triplet of HF QPOs in harmonic relationship, 92:184:276 Hz, was reported [79]. The one at 92 Hz is the weakest. Individual observations show only one HF QPO, but when averaged together to increase the signal-to-noise ratio the triplet shows up. It is unlikely that the same HF QPO moves up and down in frequency, since HF QPOs in BH LMXBs are observed at fixed frequencies.
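The appearance of the sidebands ν_k ± ν_r from the oscillating term in φ(t) can be illustrated with a toy phase-modulation model (not the full ray-tracing of Refs. [22,26,47]); the frequencies, modulation depth and the Doppler function below are illustrative choices, not fitted values. The snippet also reproduces the D^4 increment quoted in Sec. IV A from the β values given there.

```python
import numpy as np

def doppler(beta, theta_deg=0.0):
    """Doppler factor D = 1/(gamma*(1 - beta*cos(theta)))."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(np.radians(theta_deg))))

# Relative increment of D^4 between r = 13 r_g and 6 r_g (betas from Sec. IV A)
inc = (doppler(0.35482) / doppler(0.23745)) ** 4 - 1.0
print(round(inc, 2))  # 0.67

# Toy model: azimuthal phase with a small oscillating term at nu_r,
# as introduced by the different orbital speed at periastron/apastron.
nu_k, nu_r, eps = 400.0, 150.0, 0.3  # Hz, Hz, phase-modulation depth (illustrative)
fs, T = 4096.0, 8.0                  # sampling rate (Hz) and duration (s)
t = np.arange(0.0, T, 1.0 / fs)
phi = 2.0 * np.pi * nu_k * t + eps * np.sin(2.0 * np.pi * nu_r * t)
flux = 1.0 + 0.5 * np.cos(phi)       # toy Doppler-modulated light curve

power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
# Strongest peak sits at nu_k, flanked by sidebands at nu_k +/- nu_r
print(freqs[np.argmax(power)])  # 400.0
```

For small modulation depth this is ordinary phase modulation: a carrier at ν_k plus first-order sidebands at ν_k ± ν_r. In the flat space-time limit ν_r = ν_k, the sidebands merge with the harmonics of ν_k and only the keplerian peak structure survives, in line with the discussion above.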
Moreover, it would be highly unlikely for the same peak to show up only at three different orbital radii yielding integer frequency ratios, 1:2:3. The triplet fits the case in which the uppermost peak is the beat ν_k + ν_r, while the others are ν_k and ν_k − ν_r (see also Ref. [47]). The only orbital radius producing the triplet with 92:184:276 Hz is r_p ∼ 7.3 r_g for a Schwarzschild black hole with mass M_BH ∼ 7.7 M_⊙. The mass estimated from optical observations is M_BH = 9.10 ± 0.61 M_⊙ [80]. Therefore, the frequency pair (ν_k, ν_k + ν_r), given by numerical simulations [26], is suitable for interpreting the harmonic relationship of the HF QPOs seen in XTE J1550-564. In Ref. [81] both the mass M_BH and the dimensionless angular momentum a of the BH LMXB GRO J1655-40 were measured by means of numerical fits, linking ν_k to the upper peak (∼ 450 Hz) and ν_k − ν_r (periastron precession) to the lower one (∼ 300 Hz), as previously proposed by the model [21]. It is not straightforward to make a direct comparison of the GRO J1655-40 mass measured in Ref. [81], using the frequency pair (ν_k − ν_r, ν_k), to that using (ν_k, ν_k + ν_r) as suggested here. In Ref. [81] relativistic frequencies in the Kerr metric were used to fit the data. Also, a third low frequency QPO (∼ 18 Hz), linked to the modulation at the nodal precession frequency ν_nod, was used in the fit. The precession of the plane of the orbit would produce a modulation at ν_nod, a general relativistic effect due to frame dragging known as Lense-Thirring precession [13]. In this manuscript we are using relativistic frequencies of low-eccentricity orbits in the Schwarzschild metric, since here we needed exact analytical expressions for both the energy Ẽ and angular momentum L̃ for orbits with generic eccentricity e [57]. Moreover, in the Schwarzschild metric the nodes of the orbit do not precess. The mass of GRO J1655-40 from the fit in Ref.
[81] agrees to great accuracy with that from optical observations. The best guess from optical light curves is M_BH = 5.4 ± 0.3 M_⊙ [82]. The radius at which the three QPOs would be emitted in Ref. [81] is r ∼ 5.6 r_g, assuming that the low frequency QPO is the nodal frequency ν_nod and not 2ν_nod as originally proposed by the model [21]. Using the frequency pair (ν_k, ν_k + ν_r) to produce twin-peak HF QPOs in a 3:2 ratio, with the lower HF QPO at ∼ 300 Hz and the upper at ∼ 450 Hz as in the observations, the mass of the Schwarzschild black hole is M_BH = 4.7 M_⊙, and the orbital radius where (ν_k, ν_k + ν_r) are in 3:2 ratio is r ∼ 7.3 r_g. We emphasize that a precise measurement of the mass of a compact object using the twin-peak HF QPOs is beyond the purpose of this manuscript. It demands close attention and an accurate methodology, like that described in Ref. [81]. In Ref. [83] an observational result is reported that could challenge the results presented in this manuscript, i.e. the upper HF QPO corresponding to ν_k + ν_r (as numerical simulations [26] and Figs. 4, 5 suggest). The authors studied the behavior of the pulse amplitude in the accreting millisecond x-ray pulsar SAX J1808.4-3658. It was noted, for the first time, that the pulse amplitude correlates with the frequency (300−700 Hz) of the upper HF QPO detected. It is shown that when the upper HF QPO frequency is below the spin frequency (401 Hz) of the pulsar, the pulse amplitude doubles. When the frequency of the upper HF QPO is above the spin frequency, the pulse amplitude halves. This provides evidence of a direct interaction between the spinning magnetosphere of the neutron star and the physical mechanism producing the upper HF QPO. It strongly suggests that the upper HF QPO originates from orbital motion of the plasma in the accretion disk. The possible keplerian nature of the upper HF QPO is highlighted.
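The two Schwarzschild mass estimates quoted above can be approximated with the circular-orbit frequencies ν_k = (1/2π)√(GM/r³) and ν_r = ν_k√(1 − 6r_g/r), with r_g = GM/c². A 1:2:3 triplet, like a 3:2 pair with the identification (ν_k, ν_k + ν_r), requires ν_r = ν_k/2, i.e. r = 8 r_g; the text's values (r ∼ 7.3 r_g, 7.7 and 4.7 M_⊙) come from low-eccentricity orbit expressions instead, so this circular-orbit sketch reproduces them only approximately.

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def mass_from_nu_k(nu_k_hz, r_over_rg):
    """Schwarzschild mass (solar units) given the circular-orbit keplerian
    frequency nu_k at radius r = r_over_rg * GM/c^2."""
    # nu_k = c^3 / (2*pi*G*M * r_over_rg**1.5)  ->  solve for M
    gm = C**3 / (2.0 * math.pi * nu_k_hz * r_over_rg**1.5)
    return gm / (G * M_SUN)

# nu_r/nu_k = 1/2  <=>  1 - 6 r_g/r = 1/4  <=>  r = 8 r_g
r_32 = 6.0 / (1.0 - 0.25)  # = 8

print(mass_from_nu_k(184.0, r_32))  # ~7.8 Msun (XTE J1550-564 triplet, nu_k = 184 Hz)
print(mass_from_nu_k(300.0, r_32))  # ~4.8 Msun (3:2 pair with lower HF QPO at 300 Hz)
```

Both numbers land within a few percent of the 7.7 M_⊙ and 4.7 M_⊙ quoted in the text, as expected for low-eccentricity orbits.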
On the other hand, it is emphasized that the findings also suggest a more general azimuthal nature of the upper HF QPO. It could be keplerian, precessional, or an azimuthally propagating disk wave. If orbital motion is producing the detected upper HF QPO, the findings in Ref. [83] would not discard an upper HF QPO corresponding to the beat ν_k + ν_r, since this beat is a natural consequence of orbital motion in the curved space-time around the spinning neutron star. It is interesting to note that if the upper HF QPO ranging 300−700 Hz in SAX J1808.4-3658 is the beat ν_k + ν_r, it would correspond to a range of keplerian frequency ν_k ∼ 200 − 400 Hz, i.e. an upper limit equal to the spin frequency of the magnetosphere (401 Hz). The maximum keplerian frequency then is seen at the corotational radius r_c, i.e. the orbital radius at which the keplerian frequency equals the spinning one. In Ref. [84] it has been suggested that SAX J1808.4-3658 is near spin equilibrium, i.e. r_m ∼ r_c, where r_m is the magnetosphere radius. Therefore, a maximum upper HF QPO of ∼ 700 Hz might mean a coherent oscillation produced close to or at the magnetosphere radius. Either the disk is truncated at the magnetosphere radius r_m or inside the magnetosphere no coherent oscillations form. Within this interpretation, from the observations [83] we see that as the upper HF QPO is produced closer and closer to r_m, the pulse amplitude of the neutron star decreases. Following the arguments in Ref. [83] on centrifugal inhibition, the interpretation of the upper HF QPO as equal to the beat ν_k + ν_r and, therefore, ν_k ∼ 200 − 400 Hz may give suitable arguments. When the plasma in the accretion disk orbits far away from the magnetosphere, r > r_m, or ν_k < ν_s, the centrifugal force at the magnetosphere would inhibit this plasma from accreting.
Therefore, for a clump of plasma orbiting in the disk and producing the upper HF QPO, some plasma of the clump would not be able to flow towards the magnetic poles and would not affect the pulse amplitude. Instead, for a clump of plasma orbiting closer to the corotational radius, or close to the magnetosphere, i.e. for keplerian frequencies approaching ν_k = 401 Hz and for ν_k + ν_r above 400 Hz, it would be more likely that a fraction of the clump is accreted towards the poles, weakening the pulse amplitude [83]. This interpretation, rather than an upper HF QPO equal to ν_k, might be more suitable for the excursions seen in the pulse amplitude of SAX J1808.4-3658. Such excursions cluster around a frequency of the upper HF QPO of ∼ 600 − 700 Hz [83], i.e. at ν_k ∼ 330 − 400 Hz, close to the frequency at the corotational/magnetosphere radius (401 Hz), where some remnant of the clump is more likely to flow to the magnetic poles, causing the pulse amplitude to flicker. Simultaneous twin-peak HF QPOs in SAX J1808.4-3658 are rarely seen. When HF QPOs were discovered in this source [85], the twin peaks were detected only in one observation. A systematic study of the variability of SAX J1808.4-3658 has been presented in Ref. [86]. Twin-peak HF QPOs were detected only in three observations (out of many), with different central frequencies. These three detections give clues on the evolution of the twin-peak frequencies. The separation in frequency of the peaks is almost consistent with a constant value (∼ 180 Hz) close to half the spin frequency of the pulsar, as previously reported [85]. The highest frequency of the upper HF QPO, ∼ 730 Hz, may yet be consistent with the upper HF QPO corresponding to ν_k + ν_r, with the highest value of ∼ 730 Hz produced at the corotational/magnetosphere radius.
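The corotation-radius argument above can be made concrete with a short sketch. The neutron star mass is not given in the text, so the 1.4 M_⊙ below is an assumed, typical value. With ν_s = 401 Hz, the condition ν_k(r_c) = ν_s gives r_c = (GM/(2πν_s)²)^(1/3), and the circular-orbit beat ν_k + ν_r evaluated there lands near the ∼700 Hz maximum of the upper HF QPO quoted above.

```python
import math

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30

M = 1.4 * M_SUN   # assumed neutron-star mass (not given in the text)
NU_SPIN = 401.0   # spin frequency of SAX J1808.4-3658, Hz

gm = G * M
r_g = gm / C**2
# Corotation radius: keplerian frequency equals the spin frequency
r_c = (gm / (2.0 * math.pi * NU_SPIN) ** 2) ** (1.0 / 3.0)

nu_k = NU_SPIN                                  # by construction at r_c
nu_r = nu_k * math.sqrt(1.0 - 6.0 * r_g / r_c)  # Schwarzschild radial epicyclic frequency
print(r_c / 1e3, r_c / r_g, nu_k + nu_r)        # ~30.8 km, ~14.9 r_g, ~711 Hz
```

The resulting beat ν_k + ν_r ≈ 710 Hz at r_c supports the reading that an upper HF QPO saturating near ∼ 700−730 Hz corresponds to coherent oscillations produced at the corotational/magnetosphere radius.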
On the other hand, a constant separation in frequency of the twin peaks is inconsistent with the pair (ν_k, ν_k + ν_r), since the difference ν_r varies and does not match the separation measured. However, a constant separation in frequency is a feature not seen in other atoll sources. The separation usually varies by several tens of hertz with varying central frequency of the peaks [52]. The lower HF QPO in SAX J1808.4-3658 shows properties that make it differ from the lower HF QPO in other atoll sources. In SAX J1808.4-3658 the upper HF QPO is more prominent than the lower [86]. When detected simultaneously in other atoll sources, the lower HF QPO shows a larger amplitude [52,53]. The coherence Q ∼ 10 of the lower HF QPO in SAX J1808.4-3658 (of the same order as that of the upper one) [86] is much lower than in other atoll NS LMXBs, where it can be of the order of Q ∼ 100 [52,53]. Calculations in GC15 show that such high coherences may be typical of a keplerian modulation. If the upper HF QPO in SAX J1808.4-3658 is the beat ν_k + ν_r, this might justify why its maximum frequency is ∼ 700 Hz, since this frequency corresponds to a keplerian frequency almost equal to the spinning one (401 Hz). Therefore, coherent oscillations can form up to the corotational/magnetosphere radius r_m, since the source is in spin equilibrium [84]. Either the disk is truncated at the magnetosphere or no coherent modulations form inside it. When the energy of such oscillations is released close to r_m, the interaction with the magnetosphere might cause the excursions in pulse amplitude seen in SAX J1808.4-3658 [83]. The lower HF QPO in SAX J1808.4-3658 might be a modulation different from keplerian [6,85]. It is rarely detected and shows different properties from the lower HF QPO detected in other atoll NS LMXBs.

VI. CONCLUSIONS

The power spectra of LMXBs are characterized by several peaks ranging from low to high frequencies.
The highest frequencies detected often show up in pairs, named twin-peak HF QPOs. They have central frequencies typical of the orbital motion of matter close to the compact object [87]. In atoll NS LMXBs the lower and upper HF QPOs show different patterns of their amplitude and coherence versus central frequency [51][52][53][54]. The lower HF QPO shows an increase and then a decrease of both its amplitude and coherence. The amplitude of the upper HF QPO keeps decreasing with increasing central frequency of the peak, while its coherence remains of the order of Q ∼ 10 over a large range of frequencies. Following numerical simulations [26], in GC15 we proposed that the lower twin-peak HF QPO could arise from the energy released during the tidal disruption of clumps orbiting in the accretion disk. Here we have asked whether the energy and coherence observed in the upper HF QPO could originate from the tidal circularization of the clump's orbit. The tidal force acting on an orbiting clump circularizes and shrinks the orbit, and the clump emits the released orbital energy as radiation [43]. The modulation at ν_k + ν_r caused by the eccentricity of the orbit [47] should originate from the energy released in the phase of tidal circularization of the orbit. We have estimated the energy that clumps of plasma orbiting in the accretion disk would release because of the tidal circularization of their relativistic orbits. We note for the first time that such a physical mechanism might account for the amplitude and coherence of the upper HF QPO observed in atoll NS LMXBs (Figs. 2, 3 of Ref. [52]). Numerical simulations [26,47], the results presented here (Figs. 4, 5) and the discussion on SAX J1808.4-3658 suggest that the upper HF QPO most probably corresponds to the beat ν_k + ν_r. The physical mechanism to release energy proposed here, together with the modulation mechanism in Refs.
[22,23,26,47], might offer an explanation of why the upper HF QPO would originate. This work may be the first recognition of the tidal circularization of relativistic orbits occurring around a neutron star.

ACKNOWLEDGMENTS

I would like to thank Rodolfo Casana, Manoel M. Ferreira Jr., Adalto R. Gomes and Alessandro Patruno for discussions on the topic. I thank the anonymous referees for valuable comments that helped to improve the manuscript. This work was fully supported by the program PNPD/CAPES-Brazil.
The perceived value of videos in newspapers on iPad

This work sets out to analyse the perceived value of the videos inserted in paid-for editions of tablet newspapers. It is a qualitative investigation centred on a user profile defined by two parameters: regular reading of paid-for journalistic content and familiarisation with the iPad device. The object of this study is the video content inserted in the publication El Mundo in Orbyt. The results demonstrate that the sample subjects find that the videos do not give added journalistic value.

Introduction

Throughout the history of journalism new formats have arisen which standardise the way that content is presented, allowing the audience to recognise and interpret them accordingly. Printed media customs were passed on to the news programmes in radio and television, and newspaper web pages ended up being recognised by certain design characteristics defined by the news hierarchy and its use of headlines, section menus, columns, photos, videos, etc. The latest episode in the evolution of media formats has been the arrival of tablets (Spain leads Europe in the penetration of this medium according to the TNS report Mobile Life 2012). Various studies have shown that users of these mobile devices, together with smart phones, have increased news consumption (Barbosa, Da Silva, & Nogueira, 2013); four out of five tablet owners have downloaded a news application according to the Reynolds Journalism Institute (http://www.rjionline.org/research/rji-dpa-mobile-media-project/2013-q1-research-report-4). Due to its physical characteristics the tablet is the perfect medium for viewing videos; in fact users of these devices consume double the amount of videos compared to users of smart phones (Palacios, 2013). In a context where the penetration of tablets is in continual expansion and there is a constant decrease in the number of readers that pay for news products, newspapers urgently want to be linked to this new device.
However, the way in which newspaper content should be presented for tablet consumption, as a new line of business which can complement the traditional channel of selling copies and publicity, has still to be clarified (Kiuttu, 2013). It is evident that videos are a large part of browser-accessed news content on tablets. However, it is not clear to what extent a newspaper reader, who pays for access by way of an application, is interested in the convergence of content. The object of this study is to examine the user experience of press readers with regard to the videos offered in subscription tablet versions of newspapers. The investigation centres on an edition presented in enriched pdf, as it is the format that is most similar to that used by readers who buy a daily newspaper in Spain. It is precisely these readers, who now pay for journalistic content, who fit the profile that the publishing houses are most interested in. We must bear in mind that the tablet is a medium adapted to reproduce a reading situation (surroundings, body posture, attitude…) similar to that of paper (lean back, instead of lean forward as with smart phones and the web), so that the circumstances for discovering the experience of users accustomed to paying for digital content are optimal. In summary, we plan to find out if these press readers who pay for journalistic content are interested in the accompanying videos while reading the news on the iPad. We have opted for this device because at the moment of carrying out the study it had the highest level of penetration (32.4%). We also want to find out the value placed on these videos and whether they affect the perceived news quality of the journalistic product in question (credibility, depth, focus value and interest). It is therefore an analysis of a qualitative nature centred on a user profile defined by two parameters: regular reading of paid-for journalistic content and familiarity with the iPad device.
This study is part of a wider investigation which intends to repeat the test on a public segmented by age: under 25-year-olds, the elderly population, etc., and with other types of newspaper applications.

Theoretical framework and bibliographic review

This study was approached by way of a qualitative investigation, which is understood to be that which seeks to understand the social world from the point of view of the actors (Wildemuth, 1993) and helps to describe the workings of social systems in a holistic manner, detecting previously unknown relationships and generating more complete descriptions to facilitate generalisation (Janesick, 1994; Fidel, 1993). From the general theoretical body of writing on qualitative methodologies we selected those studies which were centred on Communication Science, such as those by Berganza Conde & Ruiz San Román (2010), Igartua (2006), Wimmer & Dominick (1996) and Jensen & Jankowski (1993), among others. By way of this qualitative investigation we attempted to find out the user experience of readers of tablet newspapers. Though different definitions of user experience exist (Arhippainen & Tähti, 2003), here we understand it to be "the combination of ideas, sensations and user evaluations which result from the interaction with a product; it is a result of the user objectives, the cultural variables and the interface design" (Knapp Bjerén, 2003, p. 4). As D'Hertefelt (2000) indicates, user experience derives from the concept of usability, focusing on the enjoyment and satisfaction of use (Hassan & Martín Fernández, 2005). We based this qualitative investigation on studies carried out by Jakob Nielsen on the usability of mobile devices (including tablets) and user trials in general, as well as those of other members of his team, including Don Norman and Bruce Tognazzini.
They pose key questions such as the necessary number of subjects in the sample, taking into account that they are not looking for exact representation, or questions related to interview design. In the same way, the indications on user trials provided by Thomas Thornton or Daves Yeats in Sentier were taken into account, as well as studies on the press for tablets by the Poynter Institute. Until now there have been few in-depth studies on videos in the press for tablets, even though the multimedia content of these publications constitutes one of the strong points of this device. Among the few exceptions in other countries we find the works by Kiuttu (2013), who works on the integration of these contents into digital storytelling because they are also suitable for the lean-back nature of tablet usage. Fernandes Teixeira (2014) and Fernandes (2014) also focused on this topic. The first highlights the characteristics required of multimedia contents in order not to be a simple transposition from other media. The coexistence of convergence with divergence; the lack of standardisation of links and display windows; and the dominance of the visual to attract the attention of users and to illustrate news content are the main conclusions of this work. The second, after a study of the Portuguese media on tablets, points out that tablets have some cyberjournalism features, like multimedia contents. The nearest thing to this which has been analysed has been the multimedia content of cybermedia. In fact, on various occasions, in the same way as with tablets, the importance of videos in these media as a strategy for crisis relief has been made obvious (Greer & Mensing, 2006; Micó, Masip, & Barbosa, 2009).
Given the object of study, numerous published works on videos in the digital press and on smartphones were taken into account: Masip, Micó & Meso (2010); Guallar, Rovira & Ruiz (2010); Opgenhaffen (2009) or Sundar (2002), among others, even though there is an obvious difference between these media and tablets in the way people relate and interact. In a similar way, this investigation can be seen within the framework of other recent studies on the press and tablets approached by different researchers, like the books edited by García (2012) or Valentini (2012). Sanjuán, Nozal and González-Neira (2012), after conducting a study on user experience with apps, point out the limited use of videos. Paulino (2013) focuses on magazines on tablets and the role of the videos in their narrative. Guedes (2013) compares the internet edition of El País with its app. Finally, Carvajal (2013) concludes that the contents of these versions are not developed further in multimedia terms. So, in all these works, the multimedia content is an element of the analysis of the tablet press, but without going into its features in depth.

Object of Study

To reach the aforementioned goals, videos from the newspaper El Mundo, which is found on the application Orbyt, were studied. The choice of this newspaper was made on diverse criteria. First of all, El Mundo is one of the five daily newspapers with the highest circulation in Spain. In second place, the edition which is offered by way of Orbyt is the same as the paid-for version on iPad, as opposed to its WebApp and other free access points to journalistic content which are related to the same brand. Finally, El Mundo is the main general information newspaper which puts more emphasis on video content in its paid-for tablet edition. We only focussed on the videos related to journalistic content and ignored those with advertising content, not considering them an object of this study.
Method

Before carrying out the user interviews, and to achieve a more in-depth knowledge of the object of this study, the researchers classified the videos offered in this daily newspaper in pdf during the dates of the study. Moreover, an interview was held with the person responsible for the content of El Mundo in Orbyt, Juan Carlos Laviana.

Video Classification Table

To carry out the classification of the videos we followed a typology which adapts that defined by Pere, in which each video is first located. Then the level of editing is classified depending on whether the video has statements, stand up, text, voice-over, bumpers, closure, a signature or music. Finally, the video is classified by genre using the categories: news, feature, interview, opinion and report.

Notes: 2. The newspaper in pdf, designed and laid out exactly the same as on paper, enriched with more or less interactive and multimedia content. 3. The version specifically created for iPad, with different design and content from the printed or web versions. This last version is virtually non-existent among the big daily newspapers in Spain at the moment, although they are in the process of development. For other types of tablet applications see also Suárez & Martín (2013). For a study on advertising formats for the tablet see also the work of Martínez Costa, Quintas & Sanjuán (2012).

Observatorio (OBS*) Journal, (2015) Teresa Nozal Cantarero, Ana González Neira and Antonio Sanjuán Pérez

Interview Design

Two interviews are designed, one for the director of Orbyt, Juan Carlos Laviana, and the other for the users, from which we obtained their evaluation of said videos. The interview with Laviana was semi-structured and recorded over the phone. It was designed to obtain more internal information on the videos inserted in the tablet edition, and especially the procedures and work routines followed in their elaboration. With the users we carried out personal, in-depth, semi-structured interviews.
In these interviews, apart from finding out if they opened the videos during their reading of the newspaper, how many videos, and if they watched them to the end, we also incorporated more qualitative aspects, such as asking why they acted in this way, distinguishing between motives of personal interest and those of journalistic interest. In this way the interview looked at the potential journalistic value that the video adds to the written information: if the user considers that the journalistic product is improved, if these audiovisual pieces add credibility, depth or interest, etc. Moreover, the format of the videos that they preferred, the function they consider the videos to fulfil, and their expectations of them were discussed. These interviews were carried out the same day that the users read the newspaper, less than two hours after reading. In five cases the interviews were carried out at home and in ten cases in the workplace.

Users

To reach the objectives of this investigation a non-probabilistic sample was used, initially for convenience and then by the nomination of fifteen regular readers of the press who used iPads. The determination of these two characteristics of the sample -that they read the press and are users of the iPad- is important to avoid distortions in the study. On the one hand, the interest of the study revolves precisely around knowing how people who are in the habit of relaxed reading rate paid-for journalistic content. On the other hand, it is important that the components of the sample are used to using an iPad, to avoid the user experience being conditioned by the novelty of the device. The sample is composed of eight women and seven men, all of whom are between 35 and 50 years of age. Eight of them are older than 45, three are between 40 and 44, and four are between 35 and 39. This age profile could slant the results, for recent studies demonstrate that video use is greater among young people.
Nonetheless, this homogeneity is precisely a value of this study, which attempts to approximate the value of news videos for this age segment.

8 According to a 2013 report conducted in Germany by AP, in collaboration with Deloitte and GFK, "60% of young people between the ages of 16 and 24 assure having seen online news video, 59% of whom have done so at least once a week". In http://www.marketingdirecto.com/actualidad/medios/los-videos-son-la-sal-y-la-pimienta-de-los-portales-

Their professions are the following: three journalists (one of whom works in television), another works in audiovisual production and another as a communications consultant; two are former journalists now working as university professors of communications; another two are university professors of communications; in addition, a computer scientist and a designer, both of whom work as university professors. Two other subjects are school teachers, one of secondary school and the other of primary; finally, there is an executive engineer and a businessman. In summary, there are five members who are or have been professionally linked with journalism, and six members who are employed as university professors (four in the field of communications). Two others are directors of companies, whether their own or of a third party, and the remaining two are employed as school teachers. These characteristics make the sample especially interesting, since the subjects are accustomed to evaluating and engaging in intellectual tasks that are, in the majority of cases, associated with the media. All are daily iPad users with diverse reading habits. Five of them read conventional print newspapers, exclusively or mainly, while the others combine different mediums. All read a paid-for edition at least once a week.
The majority, except two, devote more time to reading at the weekend than on weekdays. Only two read El Mundo and, as such, are familiar with the object of this study.

The lengths of the videos tend to range between a minute and a half and three minutes. On occasion, depending on the content and the edition, they may be less than one minute long (when they are recorded by the journalist himself, in situ) or surpass four minutes (reports). Although some general patterns exist (like the initial bumper), the level of editing is quite varied: from the talking head to the voice-over laid on agency images, from interviews to the self-recorded videos of the reporters in lieu of the news. Likewise, the type of editing that appears in each of the sections is repeated regularly. The videos with opinion content tend to be talking heads (stand-up), while those inserted in EM2 and Sports regularly have a more creative editing style, and include music. The above-mentioned video-blog of Carlos Cuesta tends to include animations that illustrate the content of his words. With regard to the genres that are employed, news, opinion and features predominate. The feature tends to appear in the Sports, Culture or Science sections. Reports and interviews are in the minority. As previously mentioned, El Mundo has bet on multimedia content in Orbyt. All of the inserted videos have been edited by the newspaper itself,9 and as such never include unedited agency versions. Each one of the clips is accompanied by a headline that bears the identifying logo of the newspaper. Likewise, all of them are labelled with the names of the reporter and the people responsible for the editing and image. All have voice-over and end with the phrase "Para El Mundo en Orbyt…" (For El Mundo in Orbyt…) and with the name of the journalist or announcer.
Results and Discussion

In this part, we shall match and discuss the results obtained from the interviews with the user sample and with the representative of Orbyt for El Mundo, Juan Carlos Laviana. It is structured according to the distinct topics of the interview.

Opening of the Video

It is astonishing that two thirds of the subjects did not open any videos while reading the newspaper: not because they were uninterested in the content, but because they did not realize that such a possibility existed. They were unfamiliar with the pdf of El Mundo in Orbyt, and the design of the interface did not effectively indicate the presence of the videos. In addition, it should be noted that at least two of the users who participated in the study read conventional newspapers exclusively, so the likelihood of their opening a video is lower. On the other hand, as was previously indicated in describing the sample, we need to take the age profile of the users into account.

The Function of the Videos

Three attitudes have been detected regarding the function the videos inserted in the newspaper must fulfil:

1. On the one hand, users demand that videos be strictly informative, although with the understanding that they be purely visual. In this case, the expectation is to find a brief video that contains a piece of information that is relevant in itself. This range of options is reflected in the diverse phrases used by the sample group. Some give weight to the informative value: "the videos have to give information, be short, to the point…" or "it is not so much the quality of the video that matters, but rather the content it offers". Others stress the curiosity value that the video should have in order to arouse the interest of the viewer: "the video has to have a good story and be interesting for it to be opened". In fact, the term "interesting" was repeated by various people to describe the videos they preferred and opened when they read the newspaper.
Finally, other phrases demonstrate and exemplify the sensational value: "I only want to watch those that have sensationalism, humour, sex…, something interesting in itself, in the style of 'Relaxing cup of café con leche'".10 In summary, there exists a clearly defined tendency in a large part of the sample to expect clips in which the informative value stands out, together with the interest that the images hold in themselves. This is directly related to the tendency that exists in cybermedia to convert themselves into providers not only of information but also of entertainment and services (Masip & Micó, 2013).

2. On the other hand, there is a call for videos which serve as context or comparative information to the text: an audiovisual piece which is opened in order to delve further into the article which has already been read. In this sense it is stated that "video must be complementary, not the news itself". Another user notes "if I watch a video, it is because it provides me with something that the text or the photo cannot". There are those who state that when they notice that there is video content, they expect it to add "context to the news, or something that it can be compared with, by adding information from previous years or other countries".

3. Finally, a minority, more precisely only two people, expect the videos to summarize the news, in effect substituting the need to read. This hardly seems relevant in comparison with the aforementioned options, which were repeated much more often. Furthermore, in this case, the professional profile of these users has defined their preferences, given that one writes press summaries and the other works in audiovisual production. One of them even said that she wants news in which "the text is short, and I am told the news by video", in order to make the consumption of information faster.

Reading vs. Watching

As for the reasons why videos are not opened, or are closed before they end, the difference between reading an article and watching a video (that the two actions require different attitudes) is repeatedly cited. Expressions such as "I want to read an article, not watch videos" or "when you read the news, videos are almost never important" were repeated in various ways on several occasions by different users. One person even states that "when I read the news I just skim it, and although sometimes I stop to read something, I don't want to stop to watch a video". It can be said that the subjects in the sample are not prepared for the strict convergence of content in the sense understood by Barbosa (2008), as a mixture of journalistic media formats and languages to create differentiated products. It is interesting to keep in mind that all of the people who say that they are more interested in reading the news than in watching videos, thereby identifying different attitudes in themselves as readers and as spectators of audiovisual content, are over forty years old. Among the reasons given for this preference is that reading allows for a quicker selection of news, the usual skimming, whereas a video must be seen through to the end to obtain its full meaning. Also mentioned is the idea that reading provokes a more active attitude than watching a video, which implies passive conduct, and that there is a certain difficulty in switching from one to the other. Also keep in mind that the audiovisual contents are juxtaposed to the texts; they are not strictly convergent, as Fernandes Teixeira (2014) observed in their work. Another relevant question is the context in which the news is read. Video typically includes audio content, which can disturb others if earphones are not used. One person in particular closed video content because they were reading the news in public, and the noise was deemed to be a nuisance.
Journalistic Value

Speaking in general terms of the value that video content provides, it is stated that it adds journalistic value to written information. However, when the five people who watched the video content offered by the newspaper were asked to define this added value, they reverted to vague phrases such as "I don't know, it gives more journalistic opinion". Only in two cases could this be expanded further, because the added value came directly from the informative weight of the image: in one case because the news was about political reactions in Spain after the broadcast of a television report in Catalonia, and the video contained fragments of this report; in the other, because the video introduced little-known images of the life of Juan Luis Panero, about whom the article was written, and which interested the user. For their part, two of the five who watched the videos openly confirmed that watching them did not add any type of journalistic value to the text. What is more, on occasion it seems that it created a negative reaction. In particular, upon watching a correspondent's self-recorded video, one person said "I like this journalist very much when he writes, but he is terrible on camera, and the video is just awful, unpleasant". In summary, although the added value which video content provides to the news is generally taken for granted a priori, when looked at case by case it is not seen as such. In fact, certain users unconsciously express the scarce journalistic value that they expect video to provide, despite its being a product created by journalists and positioned in the news pages, with phrases such as "if I want to watch news videos I look for them on YouTube". These opinions are the opposite of those desired by the Madrid newspaper. El Mundo's Juan Carlos Laviana points out that the pdf videos are exclusive Orbyt content "created with journalistic goals far beyond the simple replay of statements".
A difference is intended to be drawn between this type of paid-for multimedia content offered by Orbyt and that found on the newspaper's own website. The same images are used in both the digital edition and the analysed content; however, in Laviana's words, "we give them a different treatment: we comment or have an expert analyse the information from the studio, we link them with other images, other sounds, ambient sounds, music, etc.". In spite of the newspaper's efforts, the sample subjects who participated in the investigation, and who opened the videos, did not encounter this journalistic value.

Video Format and Genre

It is commonly repeated that videos in news articles must be short, "brief capsules"; specifically, shorter than they are now: "the videos inserted are usually long". It must be stressed that longer videos provoke the viewer to abandon them: expressions such as "I didn't finish watching the video because it took too much time" or "I don't watch videos in news articles because I don't have time" are often heard. Evidently, the perception of videos as too long is related to the lack of interest they provoke in the user. Those interviewed concurred that there should not be opinion pieces, and least of all in the form of the 'talking head': "I consider video opinion pieces unbearable". In fact, if the video is an opinion piece, even when not in the talking-head format, it creates a negative reaction in more than one user: "as soon as I see that a video is an opinion piece I close it immediately". In general, video created exclusively by the newspaper is desirable. In one case, a user expressed a preference for production which is "to the point and does not mince words", but not much is generally specified. In the end, narrative editing is preferred, although in several cases it is stated that it should not appear as if it were television news.
Those interviewed tended to reject the presence of the 'talking head' even in video-diary entries. According to Laviana, the newspaper El Mundo has invested much effort in order to cultivate multiplatform journalists, capable of writing, speaking to camera, commenting or editing: "We have had to teach people to record themselves with an iPod, and send a diary entry with all the flavour and freshness of internet images". For Laviana, Orbyt represents this added value, although the users who participated in the study did not see it as such. In fact, the only video-diary entry opened by a user was harshly criticised precisely for the journalist's jump from the written word to the spoken, as has been previously mentioned. The format and genre that are accepted for videos are strongly tied to the function that is attributed to them. Thus, those looking for an informative function are in favour of video content with "partially edited relevant documents or images", if possible without voice-over, because "journalists who are not from television speak very badly". On the contrary, those who are more interested in a complementary function are more in favour of the reporting genre and of editing. The interview is the genre of video which shows the most a priori acceptance. However, upon further examination it is noted that in general the interview video is not itself watched; rather, the text is read, as it permits greater freedom in selecting individual fragments. For this reason, one user went so far as to say: "I would prefer that the text had a link to the corresponding part of the interview".

Conclusions

Attitudes towards video in news articles are closely related to users' consumption habits. The act of reading a newspaper in public or in private, professionally or recreationally, directly affects the perception one has of these audiovisual pieces.
Video seems more closely associated with infotainment than with purely journalistic content, and the sample users analysed limited themselves to a quick read, particularly mid-week. Thus, the effort that the newspaper El Mundo makes to enrich content with artistic or opinion-based video which does not serve a strictly informative function, in order to create a more relaxed consumption, is fruitless in the case of the subjects who form part of the study. However, the users do call for very short videos which really add information to the text or photo. El Mundo also makes an effort at convergence in the form of the multi-purpose journalist (González Molina, 2013; Aguado & Palomo Torres, 2010), or the multi-platform journalist as Laviana calls it. However, the perceptions and experiences of the users in the sample studied show this effort to be fruitless in relation to the satisfaction it provides. Following the line initiated by Salaverría (2009), this investigation shows that audiovisual content in news read on the iPad is not a priority for the readers who formed part of the sample. The typical reader in this investigation is not familiar with the presence of video in the press, nor expects it to be included. This justifies the perception that the value provided from a journalistic point of view is scarce, and confirms findings set out in other papers (Sanjuán, Nozal & González-Neira, 2012), at least when the video content is juxtaposed to the text, a feature common to the already cited work by Fernandes Teixeira. The profile of the user who presently pays for content, especially their age, is very closely connected to the expectations that they have of video, and the scarce use they make of it compared to younger users.
The great challenge continues to be getting the younger public, accustomed to multimedia consumption and more interactive reading, to move from free consumption (in Spain, mainly through the internet, despite the recent introduction of the paywall in El Mundo) to paid-for journalistic content. To this end, a study focused on a younger sample group remains to be carried out, even if its members are not habitual consumers of paid-for news content, in order to compare users' perceptions according to their age. It is generally understood that the tablet must incorporate multimedia content, as it is a way to give added value to the text. However, the video format, and the tablet version of the newspaper itself, have still to be defined. As opposed to the initial stage in which raw content was inserted, all the videos in El Mundo have now been edited and watermarked. Although the user-satisfaction results of this study do not correlate well with the newspaper's efforts, it is understood that the experimentation with new formats that the newspaper is carrying out (such as the recently released tablet evening edition) will ease the consolidation of a native application of the newspaper. In that case convergence is likely to be more effective and profitable, given that the pdf format does not seem to be suitable for riskier attempts at convergence.
Real‐world glycaemic outcomes in patients with type 1 diabetes using glucose sensors—Experience from a single centre in Dublin

Abstract

Aims: To evaluate changes in glycated haemoglobin (HbA1c) and sensor‐based glycaemic metrics after glucose sensor commencement in adults with T1D.

Methods: We performed a retrospective observational single‐centre study on HbA1c and sensor‐based glycaemic data following the initiation of continuous glucose monitoring (CGM) in adults with T1D (n = 209).

Results: We observed an overall improvement in HbA1c from 66 (59–78) mmol/mol [8.2 (7.5–9.3)%] pre‐sensor to 60 (53–71) mmol/mol [7.6 (7.0–8.6)%] on‐sensor (p < .001). The pre‐sensor HbA1c improved from 66 (57–74) mmol/mol [8.2 (7.4–8.9)%] to 62 (54–71) mmol/mol [7.8 (7.1–8.7)%] within the first year of usage and to 60 (53–69) mmol/mol [7.6 (7.0–8.4)%] in the following year (n = 121, p < .001). RT‐CGM users had a significant improvement in HbA1c (Dexcom G6: p < .001, r = 0.33; Guardian 3: p < .001, r = 0.59), while a non‐significant reduction was seen in FGM users (Libre 1: p = .279). Both the MDI (p < .001, r = 0.33) and CSII groups (p < .001, r = 0.41) also demonstrated significant HbA1c improvement. In patients with a pre‐sensor HbA1c of ≥64 mmol/mol [8.0%] (n = 125), HbA1c fell from 75 (68–83) mmol/mol [9.0 (8.4–9.7)%] to 67 (59–75) mmol/mol [8.2 (7.6–9.0)%] (p < .001, r = 0.44). Altogether, 25.8% of patients achieved the recommended HbA1c goal of ≤53 mmol/mol and 16.7% attained the recommended ≥70% time in range (3.9–10.0 mmol/L).

Conclusions: Our study demonstrated that minimally invasive glucose sensor technology in adults with T1D is associated with improvement in glycaemic outcomes. However, despite significant improvements in HbA1c, achieving the recommended goals for all glycaemic metrics remained challenging.
Data on the glucose sensor type, percentage of time CGM data was captured by the sensor, average glucose, glucose management indicator (GMI), coefficient of variation (CV), percentage time in very high range (>13.9 mmol/L), percentage time in high range (10.0–13.9 mmol/L), percentage time in range (3.9–10.0 mmol/L), percentage time in low range (3.0–3.9 mmol/L) and percentage time in very low range (<3.0 mmol/L) were obtained. Changes in HbA1c values before and during sensor use were analysed, along with sensor-based glycaemic metrics, in accordance with the international consensus on CGM reporting guidelines.14

Statistical analysis was performed using IBM SPSS Statistics for Macintosh, Version 27 (IBM Corp., Armonk, N.Y., USA). Nonparametric tests were used for data that were not normally distributed. The Wilcoxon signed-rank test was used to compare the most recent HbA1c values to the pre-sensor values in all patients, including subgroup analyses based on sensor type and baseline treatment modality. The same test was used to compare the changes in HbA1c within the groups of patients who had either a pre-sensor HbA1c < 64 mmol/mol [8.0%] or ≥ 64 mmol/mol [8.0%]. To indicate the effect size, r was calculated, with values above 0.1, 0.3 and 0.5 indicating small, medium and large effects, respectively. The Friedman test was used to compare the changes in HbA1c pre-sensor and within the first and second year on-sensor. HbA1c within the first and second year on-sensor was defined as the average HbA1c within that year. Data are presented as median (interquartile range) or mean ± standard deviation.

Baseline results

A summary of the baseline characteristics of the patients (n = 209) for whom data were collected is presented in Table 4 and Figure 1.

HbA1c changes based on sensor type

In the Libre (FGM) group (n = 13), there was a non-significant reduction in the most recent HbA1c (Table 5).

HbA1c changes based on baseline diabetes treatment (MDI or CSII)

In both the MDI and CSII groups, the most recent HbA1c improved significantly compared with the pre-sensor values (Table 8).
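The sensor-based metrics above can be computed directly from a series of glucose readings. The sketch below uses the band cut-offs stated in the text (3.0, 3.9, 10.0 and 13.9 mmol/L); the `cgm_metrics` function and the ten-reading series are illustrative stand-ins, not part of the study's actual pipeline, and GMI is omitted.

```python
import numpy as np

def cgm_metrics(glucose_mmol):
    """Summarise sensor glucose readings (mmol/L) into the consensus
    time-in-range bands and coefficient of variation listed in the text."""
    g = np.asarray(glucose_mmol, dtype=float)
    pct = lambda mask: 100.0 * mask.sum() / g.size
    return {
        "mean": g.mean(),
        "cv_pct": 100.0 * g.std(ddof=1) / g.mean(),  # coefficient of variation
        "very_low": pct(g < 3.0),                    # <3.0 mmol/L
        "low": pct((g >= 3.0) & (g < 3.9)),          # 3.0-3.9 mmol/L
        "in_range": pct((g >= 3.9) & (g <= 10.0)),   # 3.9-10.0 mmol/L (TIR)
        "high": pct((g > 10.0) & (g <= 13.9)),       # 10.0-13.9 mmol/L
        "very_high": pct(g > 13.9),                  # >13.9 mmol/L
    }

# Hypothetical ten-reading download, not patient data
readings = [2.8, 3.5, 5.2, 6.8, 7.4, 9.9, 11.0, 12.5, 14.2, 8.0]
m = cgm_metrics(readings)
```

The five band percentages always sum to 100, since the half-open intervals partition the glucose axis.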
Our study observed a significant improvement in HbA1c in the RT-CGM groups (Dexcom G6 and Guardian 3), with medium to large effect sizes. However, such a significant improvement was not observed in the FGM group (Libre). This may be due to a smaller sample size (n = 13). In a recent meta-analysis, the HbA1c … The tendency of individuals with T1D to maintain high glucose levels in order to avoid hypoglycaemia is more commonly observed in individuals with a higher HbA1c.17 The use of a glucose sensor has been shown to reduce fear of hypoglycaemia,18 thus potentially contributing to the greater glycaemic benefits seen in participants with a higher baseline HbA1c compared to participants with a lower HbA1c. We observed a significant improvement in HbA1c in the MDI and CSII groups, with both exhibiting a medium effect size. These findings suggest that the use of a glucose sensor provides benefit to both groups. The minor difference in the effect size observed in the CSII group may be attributed to patients' familiarity with diabetes technology, the ability to administer more precise insulin adjustments and the use of sensor-augmented pump therapy with predictive low glucose suspend capabilities, including other advanced hybrid closed loop features.19,20 Despite observing significant improvements in HbA1c, achieving the recommended glycaemic targets remained challenging; users of RT-CGM concurrently with an insulin pump have been reported to be the most likely to achieve >70% time in range.22 In recent years, we have seen a widespread use of advanced hybrid closed loop (AHCL) technology that demonstrates real-world success in safely achieving these glycaemic targets, [23][24][25][26] and is a potential tool to further improve HbA1c levels. This trend suggests that the use of insulin pumps and closed loop technologies with glucose sensors may be required for the more precise glycaemic control needed to achieve the recommended clinical targets.
The strengths of this study included the real-world nature of the results, the sample size, the use of average HbA1c values to assess changes during the first and second year of sensor use, and a high level of sensor data availability. There were several limitations to our study. This was a retrospective observational study evaluating the impact of introducing a glucose sensor, which was limited to the Dexcom, Guardian 3 and Libre 1, in unselected patients with T1D attending a diabetes service in a public hospital. Furthermore, patients' options were affected by the CGM funding at the time, under which Libre 1 was approved for patients under 21 years of age while Dexcom and Guardian 3 were approved for all ages. Consequently, the cohort of patients that we identified may have affected the data. Additionally, there are several factors that may contribute to HbA1c changes, such as the diabetes severity index, the presence of diabetes-related complications, duration of diabetes, age at diagnosis, rate of DAFNE completion, patients receiving other adjunctive non-insulin therapies, outpatient review frequency and non-attendance rate.27,28 These factors may need to be controlled in future studies. A small number of patients were using sensor-augmented pump therapy with predictive low glucose suspend capabilities (Medtronic 640G) (n = 12) or an advanced hybrid closed loop system (Medtronic 780G) (n = 12). The frequency of attendance for laboratory HbA1c measurements may also have been reduced due to the COVID-19 pandemic. Both factors may contribute to the changes in HbA1c seen in this study.

CONCLUSION

We observed clinically significant and sustained improvements in HbA1c levels over time with the introduction of glucose sensor technology.

FUNDING INFORMATION

The author(s) reported there is no funding associated with the work featured in this article.
Patients provided consent for their data to be remotely linked and shared with the diabetes clinic. Data on gender, age, duration of diabetes, type of insulin therapy, HbA1c, duration of CGM use and the completion of the Dose Adjustment for Normal Eating (DAFNE) structured diabetes education course were collected. The manufacturers' proprietary web-based glucose monitoring platforms, including Libreview (Abbott Diabetes Care; Oxon, UK), Dexcom Clarity (Dexcom Inc, San Diego, CA, USA) and Carelink (Medtronic Inc, MN, USA), were reviewed.

Despite improvements in HbA1c levels over time with the introduction of glucose sensor technology, achieving the recommended goals for all glycaemic metrics, as defined by the ADA standards of care (2021),21 remained challenging. In our cohort, 25.8% of patients achieved the recommended HbA1c goal of ≤53 mmol/mol [7.0%], while 16.7% achieved the recommended TIR of ≥70% and 91.9% achieved the recommended goal of <4% for time below range. In a multinational cohort study including 5219 children, adolescents, and young adults with T1D, the proportion of individuals achieving the recommended time in range target was found to be associated with treatment modality; users of RT-CGM concurrently with an insulin pump were the most likely to achieve it.

Table 6. Patients with pre-sensor HbA1c < 64 mmol/mol [8.0%] compared to the most recent HbA1c; summary of sensor-based metric results. Note: HbA1c (mmol/mol) and [%] are expressed as median (IQR); within-person changes assessed by the Wilcoxon signed-rank test.
Table 4. HbA1c change within 2 years of starting a glucose sensor (n = 121). Note: HbA1c (mmol/mol) and [%] are expressed as median (IQR); change assessed by the Friedman test.
Table: Pre-sensor HbA1c compared to the most recent HbA1c by sensor type. Note: HbA1c (mmol/mol) and [%] are expressed as median (IQR); within-person changes assessed by the Wilcoxon signed-rank test.

CONFLICT OF INTEREST STATEMENT
REL, RAW, SYG, AR, MOS, HJK, KN, DOS, RC and WAWM declare no conflict of interest. CB declares receiving honoraria for educational events and conference attendance from AstraZeneca, Behaviour Change Training Ltd., Diabetes Ireland, EASO, International Medical Press, Eli Lilly, Medscape, MSD, Novo Nordisk and Sanofi Aventis, and is a former member of a Dexcom Advisory Board. She is a member of an Obesity National Clinical Programme Clinical Advisory Group and of the MECC working group in Ireland.

Table: Patients with pre-sensor HbA1c ≥ 64 mmol/mol compared to the most recent HbA1c.
Table 8. Pre-sensor HbA1c compared to the most recent HbA1c by diabetes treatment type. Note: HbA1c (mmol/mol) and [%] are expressed as median (IQR); within-person changes assessed by the Wilcoxon signed-rank test.
Abbreviations: CGM, continuous glucose monitor; CSII, continuous subcutaneous insulin infusion; HbA1c, glycated haemoglobin; MDI, multiple daily injection; n, number.
Disorder detection of tomato plant (Solanum lycopersicum) using IoT and machine learning

India is an agricultural country, and this sector accounts for 18 percent of India's GDP. This sector is the backbone of the country and focuses on better yield, using pesticides and fertilizers to prevent plant disorders which directly affect the yield. The primary method adopted for detecting disorders is visual observation, and other methods are quite expensive. Many authors have proposed solutions to this problem, such as IoT for grapes, or systems designed for accurate disorder detection using machine learning with limited scope. This paper showcases a prototype that uses multi-modal analysis through sensor data and computer vision. The main objective of this system is to accurately detect disorders in the tomato plant using IoT, machine learning, cloud computing, and image processing.

Introduction

Global warming has increased due to man-made activities over the past few decades, giving rise to uncertain climatic conditions. These unusual climatic conditions influence all the major aspects of plant growth, including soil fertility, temperature and cropping intensity. Infertile soil is not preferable for crops, which is why fertilizers are used, as they contain the much-needed nutrients such as potassium, nitrogen, and phosphorus. This paper showcases a system which can be used to detect disorders found in the Solanum lycopersicum plant, commonly known as the tomato plant [3], which belongs to the nightshade family, Solanaceae [1]. According to research, leaves are the most affected parts of the plant in case of any disorder. Their properties provide important insight into the identification of the disorder and its current status. Our system focuses on both biotic and abiotic factors affecting the growth of the plant.
Among biotic disorders, the system concentrates mainly on two disorders commonly seen in the tomato plant, early blight and late blight, plus one healthy class. Temperature, humidity, and soil moisture are the abiotic factors. Feature extraction is the major step in our system, where patterns from the images and sensor readings are learned by our machine learning algorithms [4]. The sensor data have been collected using IoT, and images are captured manually and processed further. Image data are collected from multiple sources: the PlantVillage dataset, real-world images captured at the farm, and some images downloaded from the internet. The system reads live sensor data and a leaf image and predicts whether the plant is healthy or not and, if not, it also predicts which disorder is present among the three classes.

Related Work and Motivation

In the past, many researchers have worked on techniques for identifying features in image data. Stephen Gang Wu et al. [5] used a probabilistic neural network along with leaf image and data preprocessing to implement a leaf recognition system. Harish Velingkar et al. used feature extraction techniques like the K-Means clustering algorithm for clustering important colors and then SVM for classification [6]. In [7] and [11] the authors used CNNs for raw feature extraction from leaf images. Alvaro Fuentes et al. [12] [14] classified three diseases, Cercospora leaf spot, leaf rust and powdery mildew, in sugar beet leaves using Support Vector Machines (SVM). The major drawback of all this work is that it considers only the visual aspects of the leaf, i.e. the leaf image. The visual characteristics of the leaf alone are not a sufficient measure for determining the plant condition.

Materials and Methods

This paper describes the complete walkthrough from data collection to building the system. Different image processing, feature extraction, and dimensionality reduction techniques are used to get insights from both sensor and image data.
All the steps carried out are discussed in detail as follows (Figure 1 and Figure 2). Data Collection Sensor data such as temperature, humidity, and soil moisture are collected from a setup of 10 tomato plants located at M. H. Saboo Siddik College, Mumbai (latitude 18.9685103° N, longitude 72.8288362° E, temperature 29°C, humidity 69%). The setup of 10 tomato plants can be seen in Figure 1. The IoT circuit diagram for data collection is shown in Figure 2. The data plot in Figure 3 shows the variation in abiotic parameters over a day. Methodology The disorder detection process consists of the following steps: Pre-processing. Data pre-processing is an important step, as real-world data comes with a lot of variation, outliers, and unexpected values. To make the predictions of the system accurate, data needs to be scaled down to a standard format. Real-world sensor data comes with redundancies such as missing or NaN values and values over the threshold, which need to be handled before further processing. Upper-bound values are clipped in this step, and missing or NaN values are replaced with the mean value of that specific parameter. In the case of leaf image data, leaf images are not always in the form required by the model. Each leaf image is pre-processed to remove the background, concentrating mainly on segmenting the green leaf part to train the deep learning model, as shown in Figure 5. The green leaf part is segmented by converting the RGB image to an HSV image and selecting the green hue value range, so that only the green leaf is kept and the rest is set to black, as mentioned in [19]. Feature Extraction and Training. Most of the available feature extraction techniques consider only the visual aspects of the leaf and try to extract the necessary information from them. The system considers both the sensor data and the leaf image. It maps the semantic representation between the visual properties and the environmental parameters (humidity, temperature, soil moisture).
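The sensor-side pre-processing described above (clipping values over a threshold and replacing missing/NaN readings with the mean of that parameter) can be sketched in a few lines of pandas. The column names and upper bounds below are illustrative assumptions, not values specified by the paper:

```python
import numpy as np
import pandas as pd

def preprocess_sensor_data(df, upper_bounds):
    """Clip readings above per-parameter thresholds, then fill NaN with the column mean."""
    df = df.copy()
    for col, bound in upper_bounds.items():
        df[col] = df[col].clip(upper=bound)       # clip values over the threshold
        df[col] = df[col].fillna(df[col].mean())  # impute missing readings with the mean
    return df

# Illustrative readings: 120 degC is a sensor glitch, NaN entries are dropped packets.
readings = pd.DataFrame({
    "temperature":   [29.0, np.nan, 31.5, 120.0],
    "humidity":      [69.0, 71.0, np.nan, 65.0],
    "soil_moisture": [450.0, np.nan, 430.0, 440.0],
})
clean = preprocess_sensor_data(
    readings,
    {"temperature": 50.0, "humidity": 100.0, "soil_moisture": 1023.0},
)
```

Clipping is applied before imputation so that an out-of-range glitch does not distort the mean used for filling.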
Features are divided into two categories: 1) sensor-based feature extraction and 2) deep-learning-based feature extraction. The environmental conditions play a vital role in determining the health of the plant. Abiotic factors such as temperature, soil moisture, and humidity help to determine whether the plant is growing in healthy conditions or not. The system uses two sensors: a soil moisture sensor and a temperature-humidity sensor. This data is gathered using IoT and stored on a cloud, and it is used by machine learning algorithms to predict new samples. It has two classes, healthy and not healthy. Supervised learning algorithms such as SVM and Random Forest [13] are used, as these algorithms perform well on statistical data. The unsupervised learning technique K-Means clustering is also used to learn from the abiotic factors and form clusters. The block diagram for sensor-based feature extraction and training is shown in Figure 6. To train the model, images must be pre-processed by resizing, noise removal, and segmentation. The image processing techniques are carried out using the OpenCV library in Python [20], which was developed by Intel. Training is performed using the Keras library with a TensorFlow backend [21], and the whole dataset is trained on the Google Colab platform. The Keras [18] package supports various state-of-the-art pre-trained deep learning models ready for classical machine learning problems. For precise learning, using pre-trained models gives a huge boost in learning and prediction. Mohanty et al. [16] used deep learning for leaf image classification on the Plant Village Dataset using AlexNet with 99.5% accuracy. Aravind et al. used pre-trained AlexNet and VGG16 [15] for leaf disorder classification with 86% accuracy on the testing set. Using transfer learning, the images are trained on pre-trained models such as VGG16 [17] and VGG19 by fine-tuning.
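The sensor-based classification step can be sketched with scikit-learn by training the SVM and Random Forest models the paper names on labeled abiotic readings. The two synthetic regimes below (their centers and spreads) are invented for illustration; the real system uses the labeled sensor dataset described in the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n = 200
# Synthetic (temperature degC, humidity %RH, soil-moisture fraction) readings;
# the regime centers are illustrative assumptions only.
healthy   = rng.normal([28.0, 65.0, 0.60], [1.5, 4.0, 0.05], size=(n, 3))
unhealthy = rng.normal([36.0, 40.0, 0.20], [1.5, 4.0, 0.05], size=(n, 3))
X = np.vstack([healthy, unhealthy])
y = np.array([0] * n + [1] * n)  # 0 = healthy, 1 = not healthy

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
svm_acc = SVC(kernel="rbf").fit(X_tr, y_tr).score(X_te, y_te)
rf_acc = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
```

Both classifiers separate these two well-defined regimes easily; on real field data, class overlap makes the comparison between the two models meaningful.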
These are CNN-based architectures that are well suited to image classification problems. These pre-trained models are trained on a huge dataset called "ImageNet" with 1000 classes and can be retrained by freezing some of their layers on a new dataset with a new number of classes. Both architectures have been fine-tuned by adding one convolution layer and three dense layers followed by softmax activation. Results The dataset contains 5923 samples with timestamps, soil moisture value, temperature, and humidity. The dataset has been manually classified for training. The statistics of the dataset are given in Table 1. The performance of the various algorithms is displayed in Table 2. The clusters found by the unsupervised K-Means algorithm are visualized in Figure 8. The dataset was also trained on multiple machine learning models, namely Support Vector Machines and Random Forest classifiers. K-Means required pre-processing steps such as standard scaling and Principal Component Analysis (PCA). The leaf image dataset contains 5,838 real-world images with unbalanced classes. The dataset has three classes: Early Blight, Late Blight, and Healthy. The statistics of the image dataset are given in Table 3. The fine-tuned architectures of VGG16 and VGG19 are shown in Figure 7. As shown in Figure 10, VGG16's training accuracy is greater than VGG19's, and the training loss is lowest for the VGG16 architecture. On the validation set, VGG16 outperforms the VGG19 architecture with higher validation accuracy and lower validation loss. Table 4 showcases the results of VGG16 and VGG19 on the training and test sets. The dataset was also trained on a vanilla CNN image classifier, which performs poorly compared to the VGG architectures, as shown in Figure 9. Conclusion Past work in this field mainly focused on classifying data using visual properties of the leaf image with pattern recognition and deep neural networks. The present system is trained with real-world sensor data and image data.
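The K-Means preparation mentioned above (standard scaling followed by PCA before clustering) can be reproduced as a scikit-learn pipeline. The two synthetic reading regimes below are invented stand-ins for the real sensor dataset:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Two synthetic regimes of (temperature, humidity, soil moisture) readings.
cool_humid = rng.normal([27.0, 70.0, 0.55], [1.0, 3.0, 0.04], size=(150, 3))
hot_dry    = rng.normal([35.0, 42.0, 0.20], [1.0, 3.0, 0.04], size=(150, 3))
X = np.vstack([cool_humid, hot_dry])

# Standard scaling -> 2-component PCA -> K-Means with k = 2,
# mirroring the pre-processing steps the paper applies before clustering.
pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=2),
    KMeans(n_clusters=2, n_init=10, random_state=0),
)
labels = pipeline.fit_predict(X)
```

Scaling matters here because the three features have very different ranges; without it, the feature with the largest numeric spread would dominate the Euclidean distances K-Means uses.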
The predictions from the two models can then be combined in an ensemble for better overall results.
Antimicrobial Resistance in Streptococcus pneumoniae before and after the Introduction of Pneumococcal Conjugate Vaccines in Brazil: A Systematic Review Streptococcus pneumoniae causes serious illnesses, such as pneumonia, bacteremia, and meningitis, mainly in immunocompromised individuals and those of extreme ages. Currently, pneumococcal conjugate vaccines (PCVs) are the best allies against pneumococcal diseases. In Brazil, the 10-valent and 13-valent PCVs have been available since 2010, but the threat of antimicrobial resistance persists and has been changing over time. We conducted a systematic review of the literature with works published since 2000, generating a parallel between susceptibility data on isolates recovered from colonization and invasive diseases before and after the implementation of PCVs for routine childhood use in Brazil. This systematic review was based on the Cochrane Handbook for Systematic Reviews of Interventions and Preferred Reporting Items for Systematic Literature Reviews and Meta-Analyses (PRISMA) guidelines. Despite the inclusion of PCVs at a large scale in the national territory, high frequencies of non-susceptibility to important drugs used in pneumococcal diseases are still observed, especially penicillin, as well as increasing resistance to macrolides. However, there are still drugs for which pneumococci have a comprehensive sensitivity profile. Introduction Streptococcus pneumoniae is a common colonizer of the human upper respiratory tract. However, pneumococci can cause milder diseases, such as acute otitis media (AOM) and sinusitis, as well as severe diseases, including community-acquired pneumonia (CAP), bacteremia, and meningitis, affecting individuals of all age groups, especially those of extreme ages and immunocompromised people [1][2][3][4].
The main prevention strategy against pneumococcal diseases is pneumococcal conjugate vaccines (PCVs). They confer a high degree of protection against specific serotypes, interfering in the fluctuation of their distribution and in the prevalence of resistance to antimicrobial agents [5,6]. There are several PCVs approved for use in children and adults in different countries [7][8][9][10][11][12][13]. In Brazil, the 7-valent PCV (PCV7; serotypes 4, 6B, 9V, 14, 18C, 19F, and 23F) was initially made available, in 2001, in private immunization clinics for children and in the Brazilian public health system (Sistema Único de Saúde, SUS) for children < 5 years old who were at high risk of invasive pneumococcal diseases (IPD). In 2010, the 10-valent PCV (PCV10; PCV7 serotypes + 1, 5, and 7F) was introduced into the Brazilian National Immunization Program (NIP) for free-of-charge immunization of all children < 5 years old. Initially, the PCV10 schedule comprised three primary doses at 2, 4, and 6 months of age and a booster dose at 12-15 months of age (3p + 1), but, since 2016, the PCV10 dosing regimen in the Brazilian NIP changed to 2p + 1 (at 2 and 4 months of age and a booster dose at 12 months of age). In 2010, the 13-valent PCV (PCV13; PCV10 serotypes + 3, 6A, and 19A) replaced PCV7 in private clinics, and it was made available via SUS in 2019 for individuals aged 5 years or older who are at the highest risk for IPD, including patients living with HIV/AIDS, patients with cancer, and those who underwent solid organ or bone marrow transplantations. In 2023, the PCV15 (PCV13 serotypes + 22F and 33F) was approved for use in Brazil [13][14][15][16][17][18][19].
Although not compulsory, vaccination in Brazil is strongly recommended. The most used antipneumococcal vaccine in Brazil is PCV10, and between 2011 and 2022, considering the different geographical regions, the average PCV10 vaccination coverage with primary doses and the booster dose was 88.5% and 80.6%, respectively. Data on PCV13 coverage are limited, but a few studies report a low coverage (<8%) among children < 5 years old [20][21][22]. Some post-PCV10 introduction studies in Brazil indicate a reduction in the average mortality rate of pneumonia (11%; from 29.69 to 23.40 per 100,000) in children younger than 1 year after four years of vaccination [23] and a significant reduction, between 13.9% and 17.6%, in hospitalizations for pneumonia in the target groups of vaccination over five years after PCV10 implementation [24]. On the other hand, the incidence of pneumococcal meningitis remains high in Brazil, with approximately 1000 cases/year [25]. Beta-lactams, especially penicillin and amoxicillin, are the main, but not exclusive, choices to treat pneumococcal diseases. Other antimicrobial agents frequently used against pneumococcal diseases include macrolides, fluoroquinolones, and lincosamides [26][27][28]. The first choice for AOM is amoxicillin, which may be combined with clavulanate in cases of recurrence within 30 days or when associated with other symptoms. For allergic people, cefdinir or azithromycin has been frequently prescribed [29]. For CAP patients without comorbidities, the most indicated treatment includes amoxicillin, doxycycline, or a macrolide, and for those with comorbidity, a fluoroquinolone or a combination of amoxicillin with clavulanate or cephalosporin plus a macrolide or doxycycline. Furthermore, for patients admitted to a hospital, a fluoroquinolone in monotherapy or the combination of a macrolide with a beta-lactam is recommended, with a difference in treatment for patients in intensive care, who require the combination of a beta-lactam with a
macrolide or a fluoroquinolone [30]. Antimicrobial resistance, however, is a concern among S. pneumoniae. Penicillin non-susceptible pneumococci (PNSP) are considered a medium priority risk to human health by the World Health Organization [31]. Drug-resistant S. pneumoniae is also classified as a serious threat in the USA [32]. The growing report of resistance to different antimicrobial agents has been a cause for concern in public health and demands strategies in public policies, as well as therapeutic alternatives [27]. This systematic literature review aims to verify the Brazilian scenario pre- and post-PCV10 regarding antimicrobial resistance among S. pneumoniae isolates associated with colonization and diseases recovered from individuals of all age groups. For this reason, we selected the year 2000 as a starting point, considering that it corresponds to 10 years before the introduction of PCV10 in Brazil. Assessment of the Methodological Quality of the Articles Regarding the description and case definition of the population of the studies, only two (11.8%) of seventeen articles were negatively classified. Seven (41.2%) of the seventeen articles described the representativeness of the sample and its sampling in a clear way. All articles described the type of test used and mentioned or referenced the evaluative standard used. However, only five (29.4%) articles described the use of internal quality control. Detailed data can be found in Table 1.
Considering all the references included in this study, we obtained data on 18,273 isolates; data on 15,437 (84.5%) isolates were provided by SIREVA II (invasive isolates) and data on 2839 (15.5%) isolates were obtained through the included articles. Of the 18,273 isolates, 2683 (14.7%) were associated with colonization, 117 (0.6%) with non-invasive diseases, and 39 (0.2%) with invasive diseases but not presented by SIREVA II. In total, 8991 (49.2%) isolates were from the pre-PCV10 period and 9285 (50.8%) were from the post-PCV10 period.
Invasive isolates included those from sterile sites, such as blood, pleural fluid, and cerebrospinal fluid (CSF). The colonization isolates, obtained through the articles, were mainly collected with sterile swabs in contact with the nasopharynx and oropharynx. Other types of isolates were included under non-invasive pneumococcal diseases, such as ear abscess, cervical abscess, buttock abscess, nasal/eye abscess, bronchial aspirate, corneal aspirate, sinus aspirate, pulmonary aspirate, tracheal aspirate, sputum, bronchoalveolar lavage, auricular secretion, bronchial secretion, tear duct secretion, conjunctival secretion, ocular secretion, wound secretion, skin secretion, pulmonary secretion, postauricular secretion, tracheal secretion, rectal swab, corneal ulcer, and urine. Antimicrobial resistance data were compiled and organized into tables separated by pre- and post-PCV10 introduction periods (Tables 3 and 4). Higher frequencies of resistance to sulfamethoxazole-trimethoprim were observed in invasive isolates in the pre-PCV10 period (60.1%; 4815/8016). No case of non-susceptibility (intermediate + resistant) in the pre-PCV10 period was observed for vancomycin, linezolid, trovafloxacin, telithromycin, and quinupristin-dalfopristin, nor resistance to amoxicillin. In the post-PCV10 introduction period, no resistance was observed to vancomycin, linezolid, telithromycin, or quinupristin-dalfopristin. Data on susceptibility to penicillin and ceftriaxone were separated into meningitis and non-meningitis categories and by period in Tables 5 and 6, respectively. For ceftriaxone, we observed a higher proportion of resistance in the general parameter in the post-PCV10 introduction period (6.5%, 25/384), similar to penicillin, which showed a higher proportion (44.6%; 499/1118).
Regarding macrolide resistance, a greater volume of data was obtained for erythromycin. There is high susceptibility among colonization isolates in the pre-PCV10 period (95.2%; 719/755), with a decline in susceptibility in the post-PCV10 period (82%; 596/727). These findings were similar for invasive isolates, which were 94.5% susceptible (6795/7186) in the pre-PCV10 period and 81.1% (6397/7884) in the post-PCV10 period. Finally, a small number of isolates was tested against fluoroquinolones and, as a result, there are data on ofloxacin and trovafloxacin susceptibility only for the pre-PCV10 period. All ninety-two (100%) carriage isolates tested against ofloxacin and the two (100%) carriage isolates tested against trovafloxacin were susceptible. For invasive (n = 1) and non-invasive (n = 2) disease isolates, the susceptibility to trovafloxacin was also 100%. Levofloxacin had a higher number of susceptible isolates, with a proportion of 98.8% (557/564) in the pre-PCV10 period and 100% (565/565) in the post-PCV10 period for colonization isolates. All 20 invasive isolates from the pre-PCV10 period were susceptible to levofloxacin. All non-invasive disease isolates from the pre-PCV10 (48/48) and post-PCV10 (22/22) periods were also susceptible to levofloxacin.
Statistical Analysis The proportion of erythromycin non-susceptible isolates was higher among carriage (p < 0.01) and invasive (p < 0.01) isolates of the post-PCV10 period. The proportion of sulfamethoxazole-trimethoprim susceptibility (p < 0.01) was higher among isolates of the post-PCV10 period, regardless of the isolation source. Although a limited number of isolates has been tested against meropenem, susceptibility to this drug was higher among non-invasive isolates (p < 0.01) of the post-PCV10 period. Among carriage isolates, the frequencies of susceptibility to chloramphenicol (p = 0.01), as well as non-susceptibility to clindamycin (p < 0.01) and tetracycline (p < 0.01), were higher after PCV10 introduction. Among invasive isolates (meningitis and non-meningitis), the proportion of susceptibility to penicillin (p ≤ 0.01) and ceftriaxone (p ≤ 0.02) was higher after PCV10 introduction. On the other hand, the frequency of penicillin non-susceptible pneumococci was higher among carriage isolates (p < 0.01) in the general parameter in the post-PCV10 period. Figure 2 shows the main results of the proportion tests where statistically significant differences in the antimicrobial susceptibility profile were detected between isolates of the pre- and post-PCV10 periods.
Discussion Based on the 17 articles selected through this systematic literature review, a high number of articles (88.2%; 15/17) were positively classified within the tool used (modified Newcastle-Ottawa assessment scale) [63][64][65], offering greater reliability in the use of the data obtained. Notably, a considerably high number of invasive isolates originated from SIREVA II (84.4%, 15,437/18,276), considered an important epidemiological surveillance tool for S. pneumoniae and other microorganisms in Latin America. The susceptibility to sulfamethoxazole-trimethoprim (SXT) was higher after PCV10 introduction for routine use in Brazil (p = 0.01). In the pre-PCV10 period, among invasive isolates, the proportions of SXT susceptibility and non-susceptibility were 40.3% (2907/7218) and 59.7% (4311/7218), respectively. In the post-PCV10 introduction period, there was a drop in the number of non-susceptible isolates (37.7%; 2957/7839) compared to the susceptible ones (62.3%; 4882/7839). This comparison is interesting because it shows a change in the general panorama of antimicrobial resistance to this drug, tending toward a drop in resistance levels. However, this phenomenon is not uniformly observed in other countries; for example, a recent study carried out in Malawi (southeast Africa) with colonization and invasive isolates verified a high frequency of resistance to SXT (96%; 137/143), with similar resistance profiles worldwide [66].
For penicillin, there was a statistically significant difference in the percentage of non-susceptibility between the pre- and post-PCV10 introduction periods among invasive isolates, with lower frequencies for both meningitis (31.9% to 28.7%; p = 0.01) and non-meningitis (17.7% to 6.3%; p < 0.01) after PCV10 use. This finding is very important since, in Latin America, most countries usually report a prevalence of penicillin resistance among meningitis isolates over 30% [62]. On the other hand, regarding the general parameter, there was an increase in non-susceptibility between the same periods from 25.9% (387/1495) to 44.1% (461/1000) for colonization isolates (p < 0.01). This may be explained mainly by the impact of childhood vaccination with PCV10 in Brazil, since before PCV10 introduction, resistance to beta-lactams was mostly associated with serotypes included in the vaccine formulation, especially 6B, 14, 19F, and 23F [38,40,67]. After PCV10 introduction, these serotypes were nearly eliminated from both colonization and diseases [21,22,67]. Due to the serotype replacement phenomenon, some of the main serotypes currently circulating in Brazil are 19A in invasive diseases, with high resistance to different classes of antimicrobial agents, and 6C in colonization isolates [21,36,40,44]. In this context, a replacement by PCV13, PCV15, PCV20, or even Pneumosil®, which also protects against 10 vaccine serotypes, would be appropriate to replace PCV10 in the Brazilian National Immunization Program [6,[9][10][11]19]. However, this phenomenon may continue due to the varied range of capsular serotypes and their distribution among populations.
For ceftriaxone, the general parameter shows a higher frequency of non-susceptible isolates (4.6%; 25/542) among isolates associated with colonization in the post-PCV10 introduction period. Although not statistically significant (p = 0.43), this is of paramount importance since third-generation cephalosporins are frequently used to treat pneumococcal meningitis [26], and isolates with this resistance profile circulating within a population represent a high risk of transmission and development of severe diseases. In turn, susceptibility to ceftriaxone was significantly higher (p ≤ 0.02) among invasive isolates recovered in the post-PCV10 period. Frequencies of susceptibility to macrolides, namely erythromycin and clarithromycin, exceeded 70% across all periods evaluated. A similar profile between invasive and colonization isolates was observed, with an important decline in susceptibility in the post-PCV10 period. Resistance in the pre-PCV10 period was around 5% for both colonization and invasive isolates. However, the proportion of macrolide-resistant isolates almost reached 20% in the post-PCV10 period. Macrolide resistance has been increasing worldwide. A nationwide surveillance in the USA between 2018 and 2019, with isolates recovered from blood and respiratory specimens from adults, revealed a high burden of macrolide resistance among S. pneumoniae, reaching almost 40% [68]. Levofloxacin is the fluoroquinolone with the greatest amount of data available for analysis, with a high proportion of susceptibility among colonization isolates in both the pre-PCV10 (98.8%; 557/564) and the post-PCV10 (100%; 565/565) periods. A similar scenario was observed for invasive isolates, in which all isolates (24 from the pre-PCV10 period and 50 from the post-PCV10 period) were susceptible to fluoroquinolones.
Despite the increasing and worrying resistance to beta-lactams and macrolides, all isolates were susceptible to vancomycin, linezolid, telithromycin, and quinupristin-dalfopristin in both the pre- and post-PCV10 introduction periods. The main limitation of this work was the high variation in data presentation in the articles included in this review, making it difficult to group them. Also, 22 articles with important data were not made available in time by the authors, despite attempts to contact them. Still, we retrieved data on an extensive collection of isolates recovered from various clinical sources, mainly associated with IPD, and from different geographical regions of Brazil, providing a comprehensive scenario of antimicrobial resistance in pneumococci before and after PCV introduction for routine use in Brazil. Search Strategy This systematic review was structured between May 2022 and July 2023, with the search date on 23 May 2023. The following databases were consulted: Lilacs (Latin American & Caribbean Health Sciences Literature), Embase, Pubmed, Scopus, and Web of Science. In addition to these, a manual search was carried out in the bibliographic references of the selected articles, and data extraction was performed on the documents produced by the System of Surveillance Networks of Responsible Agents for Bacterial Pneumonia and Meningitis (SIREVA II; electronic page: https://www3.paho.org/hq/index.php?option=com_docman&view=list&slug=sireva-ii-8059&Itemid=270&lang=pt#gsc.tab=0; accessed on 23 May 2023). The files referring to the search strategies according to database can be found in the Supplementary Material as Table S1 and the manual search as Table S2.
This review was based on the question: "How is the resistance profile to antimicrobial agents of Streptococcus pneumoniae isolates before and after the introduction of pneumococcal conjugate vaccines in Brazil?". It is noteworthy that this research was submitted to the Prospero platform [PROSPERO acknowledgment of receipt (364743)]. Article Selection and Data Extraction All articles found were initially evaluated based on titles and abstracts. After this step, articles were selected for full reading based on the inclusion and exclusion criteria listed in Table 7. Two authors performed these steps, and a third author was consulted in case of doubt. Then, data were extracted using the Microsoft Excel® program. Quality Assessment Individual quality control of each academic work was evaluated according to the Newcastle-Ottawa Quality Assessment Scale, with modifications according to the models by Sugianli et al., 2021 and Mancini et al., 2017 for cross-sectional studies and according to data demand [63][64][65]. Data Compilation The data obtained were compiled and analyzed using Excel®, allowing the division of data according to the vaccination period: pre- or post-PCV introduction in the Brazilian immunization program.
In the case of penicillin and ceftriaxone (beta-lactams), from 2007 onwards, the evaluation parameters were divided into two groups: meningitis and non-meningitis [69]. Data from articles with these definitions were added to their respective classifications (meningitis and non-meningitis). Articles that did not use the parameters listed above for beta-lactams were assigned to the general parameter column for better organization and analysis of the data. Furthermore, when the data were provided by the authors (raw data), in the case of penicillin specifically, the oral penicillin parameters were used for isolates from colonization (non-invasive) sources and the data were added to the general column; for invasive isolates, the meningitis and non-meningitis parameters were used. In the case of ceftriaxone, meningitis criteria were applied for invasive isolates and non-meningitis criteria for colonization isolates. Articles presenting data from a long period covering both the pre- and post-PCV10 introduction periods had their data organized separately. It is also noteworthy that, among the works that required data supplementation, we received only the raw data from the article by Pinto et al. 2019 [33] in time for this study. In this context, the data were separated into three distinct groups according to the scope of this systematic literature review. Statistical Analysis We used a two-proportion Z-test to compare independent samplings, at a confidence level of 95%, to verify whether the proportion of pneumococci non-susceptible to antimicrobial agents changed significantly in the post-PCV10 period. Ethical Aspects All included studies were approved by their respective Ethics Committees. Other data were retrieved from a public database.
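The two-proportion Z-test used for these comparisons can be sketched with the Python standard library (pooled standard error, two-sided p-value from the normal CDF). Here it is applied to the SXT non-susceptibility counts among invasive isolates quoted in the Discussion, as an illustration:

```python
from math import erf, sqrt

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion Z-test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# SXT non-susceptibility among invasive isolates, as reported in the review:
# pre-PCV10 59.7% (4311/7218) vs post-PCV10 37.7% (2957/7839).
z, p = two_proportion_z_test(4311, 7218, 2957, 7839)
```

With counts this large, even the modest drop in proportion yields a very large Z statistic and a p-value far below the 0.05 threshold implied by the 95% confidence level.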
Conclusions There is evidence that the proportion of isolates susceptible to chloramphenicol and sulfamethoxazole-trimethoprim is higher after PCV10 implementation for routine use in Brazil. More importantly, the same scenario was observed for penicillin and ceftriaxone among isolates associated with IPD. However, it is important to highlight the higher frequency of penicillin non-susceptible pneumococci associated with colonization in the post-PCV10 introduction period, given the emphasis on its use in the treatment of pneumococcal diseases. The emergence of macrolide-resistant isolates, associated with both colonization and diseases, is also a concern. Similarly, resistance to clindamycin and tetracycline is significantly higher among carriage isolates of the post-PCV10 period. On the other hand, susceptibility to other antimicrobial agents, such as ansamycins and fluoroquinolones, remains high. Figure 1. Detailed flowchart for obtaining and selecting eligible articles for this systematic review. Figure 2. The proportion of isolates susceptible and non-susceptible to antimicrobial agents according to isolation source ((a) carriage isolates; (b) invasive isolates) before and after the introduction of the 10-valent pneumococcal conjugate vaccine (PCV10) for universal use in Brazil (p-value was calculated using a two-proportion Z-test to compare independent samplings). Table 2.
(a) Main results retrieved from articles with data from the pre-PCV10 period. (b) Main results retrieved from articles with data from the post-PCV10 period. (c) Main results retrieved from the article with data from the extended period (covering the pre- and post-PCV10 periods) with raw data provision. Table 3. Data related to antimicrobial resistance evaluated in the pre-PCV10 period, divided into colonizing, non-invasive, and invasive isolates. Table 4. Data related to antimicrobial resistance evaluated in the post-PCV10 introduction period, divided into colonizing, non-invasive, and invasive isolates. Table 5. (a) Data related to penicillin resistance in the pre-PCV10 period, divided into meningitis, non-meningitis, and general parameters. (b) Data related to penicillin resistance in the post-PCV10 period, divided into meningitis, non-meningitis, and general parameters. Table 6. (a) Data related to ceftriaxone resistance in the pre-PCV10 period, divided into meningitis, non-meningitis, and general parameters. (b) Data related to ceftriaxone resistance in the post-PCV10 introduction period, divided into meningitis, non-meningitis, and general parameters. N = number of isolates; I = intermediate; R = resistant; S = susceptible. The "general parameter" column refers to data prior to the change in interpretation criteria for beta-lactams, or not specified in the original publications. Table 7. Criteria used for article selection. SIREVA II (Regional Surveillance System) is a compilation of data on Haemophilus influenzae, Neisseria meningitidis, and Streptococcus pneumoniae from Latin American countries since 2000.
Towards a Semantic Web of Things: A Hybrid Semantic Annotation, Extraction, and Reasoning Framework for Cyber-Physical System Web of Things (WoT) facilitates the discovery and interoperability of Internet of Things (IoT) devices in a cyber-physical system (CPS). Moreover, a uniform knowledge representation of physical resources is quite necessary for further composition, collaboration, and decision-making processes in CPS. Though several efforts have integrated semantics with WoT, such as knowledge engineering methods based on semantic sensor networks (SSN), such approaches still cannot represent the complex relationships between devices when dynamic composition and collaboration occur, and they depend entirely on the manual construction of a knowledge base, with low scalability. In this paper, to address these limitations, we propose the Semantic Web of Things (SWoT) framework for CPS (SWoT4CPS). SWoT4CPS provides a hybrid solution with both ontological engineering methods, by extending SSN, and machine learning methods based on an entity linking (EL) model. To testify to the feasibility and performance, we demonstrate the framework by implementing a temperature anomaly diagnosis and automatic control use case in a building automation system. Evaluation results on the EL method show that linking domain knowledge to DBpedia achieves relatively high accuracy and that the time complexity is at a tolerable level. Advantages and disadvantages of SWoT4CPS, along with future work, are also discussed. Introduction Web of Things (WoT) [1] aims at reusing the REpresentational State Transfer (REST) architectural style [2] and Web protocols to make networked physical objects first-class citizens of the World Wide Web. In the context of WoT, physical things are abstracted as building blocks of Web applications with uniform resource identifiers (URI), standard HyperText Transfer Protocol (HTTP) interfaces, and hypermedia-based representations.
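As an illustration of this resource abstraction, the following Python sketch models a physical thing as a Web resource with a URI, a type, and a hypermedia-style representation. This is a toy example, not the paper's implementation; all names (`ThingResource`, `/things/temp-1`) are assumptions made for illustration.

```python
# Toy sketch (illustrative, not the paper's implementation): a physical
# thing exposed as a Web resource with a URI and hypermedia-style links.

class ThingResource:
    def __init__(self, uri, device_type, properties):
        self.uri = uri                  # uniform resource identifier
        self.device_type = device_type  # e.g., "TemperatureSensor"
        self.properties = properties    # current observable state

    def representation(self):
        """Hypermedia-based representation, as a WoT client might GET it."""
        return {
            "@id": self.uri,
            "type": self.device_type,
            "properties": self.properties,
            "links": [{"rel": "self", "href": self.uri}],
        }

sensor = ThingResource("/things/temp-1", "TemperatureSensor",
                       {"value": 21.5, "unit": "Cel"})
rep = sensor.representation()
print(rep["@id"], rep["type"])
```

In a real WoT deployment this representation would be served over standard HTTP; here it is just a plain dictionary.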
To enable discovery and interoperability of Internet of Things (IoT) services on a worldwide basis, the W3C WoT Interest Group has proposed a distributed "Web Servient" [3] as a soft-defined virtualization middleware for physical things. In cyber-physical system (CPS) applications, the virtualization of devices with RESTful Web APIs lowers the barriers of developing IoT applications across domains, especially in complex building automation, industry 4.0, or smart city applications. However, designing more uniform knowledge representations of devices is quite necessary to improve system interoperability and facilitate resource discovery, composition, and collaboration. Related Work In this section we provide an overview of related work on leveraging semantic technology to annotate sensor data. We summarize the methods into two main categories: (1) top-down methodology based on linked data and ontological knowledge engineering; and (2) bottom-up methodology based on machine learning models that extract knowledge from existing data sources. Semantic Sensor Web and Linked Sensor Data There are some notable works that attempt to build semantic models and link semantic annotations for IoT. The semantic sensor web (SSW) proposes annotating sensor data with spatial, temporal, and thematic semantic metadata [6]. This approach uses the current Open Geospatial Consortium (OGC) and Sensor Web Enablement (SWE) specifications and attempts to extend them with semantic web technologies to provide enhanced descriptions to facilitate access to sensor data. The W3C Semantic Sensor Networks Incubator Group (SSN-XG) [7] is also working on developing an ontology for describing sensors. Effective description of sensors, observations, and measurement data, and utilizing semantic web technologies for this purpose, are fundamental steps to construct semantic sensor networks.
However, associating these data with existing concepts on the web and reasoning over them is also an important task to make this information widely available for different applications, front-end services, and data consumers. Linked sensor middleware (LSM) [9] provides many functionalities, such as (i) wrappers for real-time data collection and publishing; (ii) a web interface for data annotation and visualization; and (iii) a Simple Protocol and RDF Query Language (SPARQL) endpoint for querying unified linked stream data and linked data. It facilitates the integration of sensed data with data from other sources; both sensor stream sources and data are enriched with semantic descriptions, creating linked stream data. Sense2Web [18,19] supports flexible and interoperable descriptions and provides associations of different sensor data ontologies to resources described on the semantic web and the web of data; it focuses on publishing linked data to describe sensors and link them to other existing resources on the web. SPITFIRE [8] is a Semantic Web of Things framework which provides abstractions for things, fundamental services for search and annotation, as well as integrating sensors and things into the LOD cloud using the linked data principles. Moreover, SPITFIRE also provides a semi-automatic creation of semantic sensor descriptions by calculating the similarities and correlations of the sensing patterns between sensors. Gyrard et al. [20] proposes a cross-domain IoT application development platform, the M3 framework. M3 [21] is based on semantic web technology to explicitly describe the meaning of sensor measurements in a unified way by reusing and combining domain ontologies. The M3 taxonomy describes sensor names, measurement names, units, and IoT applicative domains. This is a kind of dictionary for IoT to easily deal with synonyms, etc.
This M3 taxonomy is implemented as an ontology extending the W3C SSN ontology, which is a cornerstone component to semantically annotate data and extract meaningful information from IoT/sensor data. LOV4IoT [22] designs the Linked Open Vocabularies for Internet of Things (LOV4IoT) dataset. Moreover, it also proposes a dataset of interoperable domain rules to deduce high-level abstractions from sensor data by designing the Sensor-based Linked Open Rules (S-LOR). The two components are integrated within the M3 framework. Ploennigs et al. [23] presents an architecture and approach that illustrates how semantic sensor networks, semantic web technologies, and reasoning can help in real-world applications to automatically derive complex models for analytics tasks, such as event processing and diagnostics. The work extends the SSN ontology to enable detection and diagnosis of abnormal building behaviors. Knowledge Extraction and KB Construction on Sensory Data Some studies have proposed machine learning models and frameworks to extract and annotate knowledge from streaming sensor data. Wang et al. [24] proposes to model time series data using a novel pattern-based hidden Markov model (pHMM), which aims at revealing a global picture of the system dynamics that generates the time series data. Ganz et al. [25] proposes an automated semantic knowledge acquisition from sensor data. The framework uses an extended k-means clustering method and applies a statistical model to extract and link relevant concepts from the raw sensor data and represent them in the form of a topical ontology. Then it uses a rule-based system to label the concepts and make them understandable for the human user or semantic analysis and reasoning tools and software. The CityPulse framework [14] deals with real-time pattern discovery and prediction analysis. It addresses the specific requirements of business services and user applications in smart city environments.
It uses cyber-physical and social data and employs big data analytics and intelligent methods to aggregate, interpret, and extract meaningful knowledge and perceptions from large sets of heterogeneous data streams, as well as providing real-time adaptive urban reasoning services. Längkvist et al. [26] has surveyed unsupervised feature learning and deep learning models for time series modeling, which is feasible for high-level pattern recognition and finding states from streaming sensor data. Xu et al. [27] propose an upper-ontology-based approach for automatically generating an IoT ontology. The method proposes an end-to-end framework for ontology construction based on calculating semantic similarity between input schemas and existing IoT ontologies, e.g., SSN. Some other studies are not directly used for IoT domain knowledge base construction; however, their methods could still be transferred for semantic annotation and extraction on structural metadata of devices. For instance, several EL studies on web tables, web lists, and other relational data are highly related to the automatic annotation task on WoT metadata. Limaye et al. [28] proposed to simultaneously annotate table cells with entities, table columns with types, and pairs of table columns with relations in a knowledge base. They modeled the table annotation problem using a number of interrelated random variables following a suitable joint distribution, and represented them using a probabilistic graphical model. The inference of this task is to search for an assignment of values to variables that maximizes the joint probability, which is NP-hard. They resorted to an approximate algorithm called message-passing [29] to solve this problem. Mulwad et al. [30] also jointly model entity linking, column type identification, and relation extraction using a graphical model, and a semantic message passing algorithm is proposed.
TabEL [31] uses a collective classification technique to collectively disambiguate all mentions in a given table. Instead of using a strict mapping of types and relations into a reference knowledge base, TabEL uses soft constraints in its graphical model to sidestep errors introduced by an incomplete or noisy KB. In conclusion, the state-of-the-art works have established a concrete foundation for our research. The top-down knowledge engineering methods adopt domain ontological knowledge to annotate the representation of WoT resources, and existing ontologies can be partly reused as a reference for constructing a domain knowledge base for most CPS applications; however, extended concepts and relations are still needed for resource composition and causal deduction scenarios, such as anomaly diagnosis and automatic control. The bottom-up machine learning methods focus on extracting knowledge from existing sensory data or linking the descriptive metadata of devices to background KBs; these ideas can serve as references for enriching and further unifying domain knowledge from existing data with prior knowledge. Thus, this paper proposes the SWoT4CPS framework with knowledge extraction and KB construction modules. The modules design an ontology by reusing and extending existing ontologies and use the EL-based method to enrich and align domain knowledge to common sense KBs. Moreover, a rule-based reasoning model is proposed to support WoT resource composition and causal deduction applications in CPS scenarios based on the constructed KB. Problem Statement and Requirements CPS has been widely utilized in building automation, industry 4.0, and smart city application domains, where IoT devices are interconnected and need to interact with each other to provide automatic and intelligent services.
To achieve better interoperability and a smarter decision-making process, semantic IoT domain knowledge is modeled and utilized to represent the attributes and raw data of IoT devices, as well as the relationships between devices under certain circumstances. Thus, a Semantic Web of Things framework needs to be designed to provide knowledge base construction and reasoning tools for these automatic and intelligent CPS applications. Specifically, to achieve the ultimate goal, there are some challenges to be dealt with: • SSN is a well-known uniform upper ontology to model a semantic sensor network, and it mainly describes the physical attributes, deployment, and sensing-observing process of sensors. However, CPS scenarios are usually complex and dynamic. For example, in building automation or industry 4.0 applications, it is common that sensors and actuators collaborate with each other in the loop to provide anomaly diagnosis and automatic controlling services, while SSN and the other extended solutions studied in Section 2 cannot cover such use cases well because monitor-control relationships between devices and cause-effect relationships between anomaly events and observed causes have not been fully considered. • Though the semantic representation of devices could be formatted with domain ontologies, it only specifies the conceptual type that the facts belong to. Since people who model the services come from different backgrounds and may understand the concepts differently, the meanings of the facts filled with textual contents are sometimes similar or ambiguous, which calls for alignment or disambiguation. For instance, precipitation and rainfall sensors have the same meaning for sensor types, and Peking is another representation of Beijing for a city name.
Common sense KBs, such as DBpedia [32], Yago [33], and Freebase [34], have defined such common sense concepts, entities, and relationships; therefore, linking domain facts and concepts with common sense KBs could facilitate the enlargement and alignment of a domain knowledge base. Moreover, it is more interoperable and unified for human-machine interaction and semantic reasoning with linked common knowledge in domain CPS applications. Accordingly, our research goals and requirements can be summarized as follows. Firstly, we need to model a uniform semantic WoT ontology for CPS applications by extending SSN and reusing some existing ontologies for interoperability. The ontology needs to describe sensing-observing processes, monitoring-controlling and cause-effect relationships among devices, as well as the physical attributes of sensors and actuators. This knowledge could facilitate reasoning and decision-making in intelligent services, such as automatic controlling and anomaly diagnosis. Secondly, we need to provide a (semi-)automatic semantic extraction, linking, and alignment model to interlink domain knowledge with common sense KBs. Specifically, it should annotate the facts of ontological concepts/types to the entities in the common sense KB and associate/align ontological concepts/types to the concepts/types in the common sense KB. Moreover, it should also annotate pairs of ontological concepts/types with a binary relation in the common sense KB if the relation exists. If two keys are not involved in any binary relation in our KB, it should determine that as well.
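The alignment requirement can be illustrated with a toy example. Here a small alias table stands in for the common sense KB (in the spirit of DBpedia redirect knowledge); the table contents are illustrative assumptions, not real KB lookups.

```python
# Toy stand-in for common sense KB alignment knowledge; entries are
# illustrative assumptions, not actual DBpedia data.
KB_ALIASES = {
    "peking": "Beijing",
    "beijing": "Beijing",
    "precipitation sensor": "Rainfall_sensor",
    "rainfall sensor": "Rainfall_sensor",
}

def canonicalize(surface_form):
    """Map a textual domain fact to a canonical KB entity, if known."""
    return KB_ALIASES.get(surface_form.strip().lower())

# Two differently worded facts resolve to the same canonical entity:
print(canonicalize("Peking"), canonicalize("Beijing"))  # Beijing Beijing
```

With such links in place, devices annotated with "Peking" and "Beijing" become discoverable under a single entity, which is exactly the disambiguation the requirement calls for.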
Thirdly, we need to design and implement a demonstration system to assist in building an IoT knowledge base with both domain and linked common knowledge and, to testify to the feasibility of the platform, it is necessary to implement a semantic rule-based reasoning engine based on the constructed knowledge base to perform anomaly diagnosis and automatic controlling services for a specific CPS application. Semantic Web of Things Framework This section mainly introduces the overall Semantic Web of Things for CPS (SWoT4CPS) framework with the proposed hybrid KB construction modules: (1) the SWoT-O ontology model and (2) the entity linking model with iterative message passing algorithms used for disambiguating entities and aligning concepts. The semantic search and reasoning models are also proposed based on the constructed hybrid KB. SWoT4CPS Architecture An overview of the SWoT4CPS framework is depicted in Figure 1. It describes a sensor-observation-event-rule-actuator-action model that is useful for further reasoning tasks in most intelligent and automatic CPS applications, and it also describes some common necessary attributes of physical devices (including sensors and actuators), such as Location, Ownership, Unit, and DeviceType. The annotator provides a graphical user interface (UI) for the service modeler to create domain services. • EL Annotator: This building block is designed for extracting semantic entities from the metadata of WoT resources and aligning SWoT-O ontologies with the concepts in the common sense KB. In our testbed, DBpedia is used as the referent common sense KB. The input of this module is the annotated WoT instances according to the SWoT-O ontology, and the output is the annotated WoT metadata with linked entities and aligned ontological concepts to DBpedia. The subtasks are divided into schema type extraction and identification, candidate entity generation and ranking, and relation extraction.
The extraction model is extended from the EL [35] framework that is usually used for semantic extraction and alignment on relational Web data. • Knowledge Storage: This building block provides common APIs for storing the knowledge graph into a persistent database. For storage efficiency and scalability, the graph database is proposed to be used. Concepts, properties, entities and relationships in resource description framework (RDF) formats are transferred to graphical data structures. For compatibility with the existing semantic reasoner, RDF-based knowledge storage is also used. The query of the KB is based on a SPARQL-compatible language which could also be used for the graph database. • Semantic Reasoner: This building block is aimed at providing a semantic reasoning capability based on the linked hybrid knowledge. The reasoning process could be based both on logical rules or statistical methods or hybrid methods. The query is based on a SPARQL language, and a list of ranked entities that matches the query are returned. The rule is modeled based on Jena's rule language, and the reasoning process is based on Jena API and Jena Reasoner [36].
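To give a flavor of the rule-based reasoning this block performs, the sketch below forward-chains a single event-to-action rule over a set of triples. It only mimics the spirit of a Jena-style rule (the rule shown in the comment is illustrative, not taken from the paper) and does not use the Jena API.

```python
# Minimal forward-chaining sketch. A Jena-style rule might read roughly:
#   [cool: (?x :hasState :Overheated) -> (:AirConditioner :triggers :CoolDown)]
# Here triples are plain tuples and "?x" marks a wildcard; all names are
# illustrative assumptions.

facts = {("Room1", "hasState", "Overheated")}

# Each rule: (condition pattern, conclusion triple)
rules = [
    (("?x", "hasState", "Overheated"),
     ("AirConditioner", "triggers", "CoolDown")),
]

def forward_chain(facts, rules):
    """Derive the one-step closure of facts under the given rules."""
    derived = set(facts)
    for (s, p, o), conclusion in rules:
        for fs, fp, fo in facts:
            s_ok = s.startswith("?") or s == fs
            o_ok = o.startswith("?") or o == fo
            if s_ok and p == fp and o_ok:
                derived.add(conclusion)
    return derived

closure = forward_chain(facts, rules)
print(sorted(closure))
```

A production reasoner would of course iterate to a fixed point and bind shared variables across condition and conclusion; this sketch shows only the core matching step.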
SWoT-O Model for CPS According to the requirements mentioned in Section 2, some key relational triples should be annotated to describe the meta-information of WoT resources.
To provide a uniform ontology of CPS applications, the SWoT-based ontology (SWoT-O) [37] (seen in Figure 2) is mainly derived from and extends the SSN ontology, as well as reusing other IoT ontologies, such as the semantic actuator network (SAN) [38] for actuators, the stream annotation ontology (SAO) [39] for streaming data, and the QUDT ontology [40] for units of measure. The main structure can be categorized as: • Sensor-Observation-State-Event: Sensors observe some objects with SensorProperty, which has low-level raw data, and high-level states could be extracted from these observed raw data. The observed system runs and switches among certain states, and when the internal state of the observed system changes, an event will be generated and published. The high-level state could be extracted by pattern discovery and prediction methods based on streaming mining algorithms, such as unsupervised cluster models [41], hidden Markov models (HMM) [24], or deep learning models [42] with temporal feature representations. In Figure 2, the processing of raw streaming data into high-level states and events is represented with dotted lines; a dotted line does not represent an exact predicate/relation between the entities but only describes reference methods of how data streams could be transformed into states or events. The SAO ontology is reused to annotate streaming sensory data for further high-level state mining and reasoning with other prior knowledge, as [25] proposed. • Event-Rule-Actuator-Action: Since events are generated from sensory observations, rules are defined by service providers/developers to describe which event to subscribe to and what action should be triggered by actuators under given conditions. The rule is defined for further semantic reasoning by combining forward knowledge with events (ssn:Sensors :hasState :State and :generates ssn:Stimulus) and backward knowledge with actions (san:Actuator :triggers :Action).
In an automatic controlling CPS application, the action could change the current state of the system to another one. To better describe the actions performed by actuators, the SAN ontology is reused to annotate actuators or controllers. IoT-O [43] could be a reusable reference ontology as well. Since there is no significant difference between using SAN or IoT-O for the actuator concept in our use case, SAN is mainly considered as a reference to reveal the concepts and relations. • WoTProperty: WoTProperty describes some basic properties of WoT resources, including Location, EntityType, Owner, and Unit, and SensorProperty is inherited from WoTProperty. WoTProperty contains more common sense knowledge and facts, which could be linked to entities and concepts in existing global KBs. In this paper, DBpedia is used as the background KB. • FeatureofInterests: Feature of Interest (FoI) defines the CPS scenario, which is composed of related sensors or actuators. It includes the relations between devices, which are interlinked with predefined sets of rules. The rule defines which WoTProperty of devices is considered in the scenario and which SensorProperty of devices should be activated as the filtering condition of the scenario. In the SWoT4CPS framework, FoI will be used as a set of rules to automatically compose related devices into certain CPS scenarios. • PhysicalProcess: The properties of many real-world features observed by sensors are related by physical processes. The PhysicalProcess concept models this as a directed relationship in which a source sensorProperty (ssn:hasInput) influences a target sensorProperty (ssn:hasOutput) via an intermediateProperty. It represents the causal relationship between the source property of one device and the target property of another device. For example, in building automation systems, the state of the cooling machine or the number of people in the room both influence the indoor energy (intermediateProperty), while energy influences the indoor temperature.
Hence, by modeling the process chain between devices with their properties and generated events, it could be used for causal reasoning tasks, such as anomaly diagnosis in building automation systems or predictive maintenance for industrial machines. Semantic Extraction and Alignment for WoT Metadata via Entity Linking At the SWoT-O Annotator stage, metadata representations of WoT resources are annotated with the SWoT-O vocabulary. Since metadata describes the meta-information about the generated datasets and how they could be accessed and exploited, it is essential to allow discoverability and self-description of sensor datasets. To facilitate WoT resource discovery and composition based on prior knowledge, it is necessary to extend the domain KB of WoT entities with common sense knowledge, such as DBpedia, to improve the results of the semantic-based fuzzy search. For instance, three devices are annotated by SWoT-O with location information "Beijing", "Peking", and "Shanghai", respectively. Though this explicitly describes where the sensors are located, the relation between "Beijing" and "Shanghai" is missing, as well as the exact type of the location, in the scope of domain knowledge. Thus, it is not possible to query "all sensors located in China" or "all sensors deployed in the city" if the background knowledge base is not sufficient.
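The fuzzy-discovery example above can be made concrete with a toy containment hierarchy: once sensor locations are linked to KB entities that carry part-of relations, a query such as "all sensors located in China" becomes answerable. All data below are illustrative assumptions, not extracted from DBpedia.

```python
# Toy linked-knowledge discovery sketch; the part-of hierarchy stands in
# for common sense KB relations (illustrative data only).

part_of = {"Beijing": "China", "Shanghai": "China", "Tokyo": "Japan"}
sensors = {"s1": "Beijing", "s2": "Shanghai", "s3": "Tokyo"}

def located_in(sensor_id, region):
    """Walk the part-of chain from the sensor's location up to `region`."""
    place = sensors[sensor_id]
    while place is not None:
        if place == region:
            return True
        place = part_of.get(place)
    return False

hits = sorted(s for s in sensors if located_in(s, "China"))
print(hits)  # ['s1', 's2']
```

Without the linked part-of knowledge, the same query over the bare domain annotations ("Beijing", "Shanghai") would return nothing, which is precisely the insufficiency described above.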
This knowledge does exist in DBpedia, however, and if "Beijing" and "Shanghai" could be linked to the corresponding DBpedia entities, the fuzzy discovery of sensors becomes feasible. Consequently, in the SWoT4CPS framework, the challenges are how to link domain facts to semantically similar entities in common KBs, as well as how to align the domain concepts with the entity types in common KBs. Approach and Model Some previous works have proposed methods to annotate entities, types, and relations from web tables or relational data [28,31]. Similar to these studies, the ontological WoT metadata with SWoT-O is structured hierarchical data, which can also be modeled as domain semantic web tables with headers and cell values (as shown in Figure 3). WoT metadata is semi-structural relational data with key-value pairs.
The EL Annotator can transfer the WoT metadata into tabular data and perform entity linking tasks. The EL tasks use a probabilistic graphical model to jointly infer the linking and mapping. The EL Annotator queries the background KB sources to generate initial ranked lists of candidate assignments for schemas, content values, and relations between schemas. Once candidate assignments are generated, the joint inference component uses a probabilistic graphical model to capture the correlation between schemas, content values, and schema relations to make class, entity, and relation assignments. Details of the model are described as follows: • Candidate entity generation and ranking The candidate generation contains a Query module and a Rank module, which generate a set of candidate assignments for column types, cell texts, and relations between columns from a given KB. The query module generates an initial ranked list of candidate assignments for the cell values using data from DBpedia. DBpedia can be accessed via its open endpoint through its URL [44]. The SPARQL query is formulated as in Figure 4. The cell text is used as the SPARQL query input as defined by DBpedia [45], and the outputs are limited with predefined conditions. The candidate entity should be in the "resource" category, and the query string should fuzzily match the content of rdfs:label. According to this rule, an initial ranked list of entities for each SPARQL query statement is generated, along with the entity popularity score. The ground-truth entity is expected to rank as high as possible, while the initial ranked list of entities is a disordered one which does not fully meet our requirements. For instance, when we input the cell text "USA" into the query module as a query string, we expect the entity "United_States" to rank first in the returned list. However, what we actually get at the top of the returned list are "Democratic_Party_(United_States)", "Republican_Party_(United_States)", etc.
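Since Figure 4 is not reproduced here, the following sketch shows one plausible form of the candidate-generation query just described: fuzzy matching of the cell text against rdfs:label, restricted to DBpedia resources. The exact query shape is an assumption, and the sketch only builds the query string rather than contacting the endpoint.

```python
# Plausible reconstruction of the Figure 4 candidate-generation query
# (the actual query in the paper may differ). Only string construction
# is shown; executing it would require a SPARQL client and the endpoint.

def candidate_query(cell_text, limit=50):
    """Build a SPARQL query that fuzzily matches `cell_text` against
    rdfs:label and keeps only DBpedia resource URIs."""
    return f"""SELECT DISTINCT ?entity ?label WHERE {{
  ?entity rdfs:label ?label .
  FILTER (regex(str(?label), "{cell_text}", "i"))
  FILTER (strstarts(str(?entity), "http://dbpedia.org/resource/"))
}} LIMIT {limit}"""

q = candidate_query("USA")
print(q)
```

The case-insensitive `regex` filter captures the "fuzzy match on rdfs:label" condition, while the `strstarts` filter captures the "resource category" restriction mentioned in the text.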
The target entity "United_States" ranks outside the top 50 of the initial ranked list. To raise the target entity's ranking, we train an open-source support vector machine (SVM) ranking classifier [46] that scores how likely a candidate entity is to be a correct assignment for a given query string, and we use this pre-trained model to re-rank the initial candidate entity list. The SVM ranking classifier is a pairwise classifier, and it is trained on a set of string similarity and entity popularity metrics as its feature vectors, which we present as follows:

• Entity Popularity: Entity popularity P_pro is a kind of prior probability feature. It helps in cases where disambiguation is difficult due to the existence of entities having similar names. It has already been integrated into DBpedia, so we can access it from the query module directly. The popularity score of an entity is based on how frequently it is referenced by other entities. The entity popularity score increases as a function of the scores of the referencing entities; that is, the higher the popularity score an entity obtains, the more it is referenced by other entities.

• String Similarity: In contrast to popularity, string similarity Sim_s(k) provides a syntactic comparison between the cell text and candidate entities. Many candidates do not fully match the cell text, so we select five common features, Keyword Coefficence (KC), Levenshtein Distance (LD), Dice Score (DS), String Length (SL), and Word Contain (WC), to measure string similarities between these pairs. Several 0/1 metric checks are developed to measure whether entities fully contain the words in the cell text, or whether entities are equal to the cell text. We also check whether all words in the cell text are found in the same order in the candidate entity.

A rankingScore function is defined based on the feature vectors to represent the degree of relevance between the target entity and candidate entities. The rankingScore function is a linear function with weight α for the entity popularity feature P_pro and weights β_k for the five string similarity features Sim_s(k). The weights are pre-trained in a supervised manner with labeled datasets.

• Candidate type and relation generation
For each cell in the input table, we select the candidate entity which ranks at the top of the re-ranked list as the current candidate. Then we issue another query to the DBpedia endpoint to retrieve all types that the current candidate belongs to. The SPARQL query for generating candidate types is shown in Figure 5.
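As a hedged illustration, the linear rankingScore described above might look as follows in Python. The Levenshtein distance and Dice score are the standard definitions, but only two of the five similarity features are sketched, and the weights and the inversion of the edit distance into a similarity are placeholders, not the trained values:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (LD feature)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def dice(a: str, b: str) -> float:
    """Dice coefficient over character bigrams (DS feature)."""
    bg = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
    x, y = bg(a.lower()), bg(b.lower())
    return 2 * len(x & y) / (len(x) + len(y)) if x and y else 0.0

def ranking_score(query: str, entity: str, popularity: float,
                  alpha: float = 1.0, betas=(1.0, 1.0)) -> float:
    """Linear combination alpha * P_pro + sum_k beta_k * Sim_s(k),
    with placeholder weights and only two similarity features."""
    sims = (1.0 / (1.0 + levenshtein(query.lower(), entity.lower())),  # LD, inverted
            dice(query, entity))                                       # DS
    return alpha * popularity + sum(b * s for b, s in zip(betas, sims))
```

In the actual system the SVM ranking classifier learns the weights from labeled pairs rather than using fixed values.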
We pairwise combine the input columns and select all combinations without repetition. For each column pair combination, we use the links between pairs of current entities to generate candidate relations. For each row in a combination, we obtain each possible relation between the current entities by querying DBpedia in either direction, for example, <candidateRow1 property1 candidateRow2> or <candidateRow2 property2 candidateRow1>. The candidate relation set for the entire column pair combination is generated by taking the union of the candidate relations between individual pairs of row current candidates. The SPARQL query for generating candidate relations is shown in Figure 6.

Joint Inference Model
Once the initial sets of candidate assignments are generated, the joint inference module assigns values to schemas and content values and identifies relations between the schemas. The result is a representation of the meaning of the WoT metadata as a whole. Probabilistic graphical models provide a powerful and convenient framework for expressing a joint probability over a set of variables and performing inference or joint assignment of values to the variables. We represent a set of WoT metadata with the same domain-specific structures as an undirected Markov network graph in which the schemas and content values represent the variable nodes and the edges between them represent their interactions. We propose iterative message passing (IMP) to collectively disambiguate all assignments in a given WoT metadata template. To clearly describe IMP, we first represent the given table as a factor graph, a kind of probabilistic graphical model (PGM), as shown in Figure 7. In this factor graph, each solid circle is called a cell node and each hollow circle is called a column node. Both of them are 'variable nodes', and each square node is a 'factor node'. All variable nodes in one column can be linked to factor node f1, and all cell nodes in two different columns can be linked to factor node f2. Factor nodes provide agreement functions for selecting the best assignment for each variable node according to the joint constraints.
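A minimal sketch of this factor-graph layout in Python: the f1/f2 factor names follow Figure 7, the table content is an invented two-by-two example, and the representation of nodes as plain tuples is an implementation choice for illustration only:

```python
from itertools import combinations

def build_factor_graph(table):
    """table: list of rows, each row a list of cell texts.
    Returns variable nodes (cells + columns) and factor links:
    one f1 per column (type agreement, linking the column node and its
    cells) and one f2 per column pair (relation agreement, linking all
    cells of the two columns), following the layout of Figure 7."""
    n_cols = len(table[0])
    cells = [(r, c) for r in range(len(table)) for c in range(n_cols)]
    columns = list(range(n_cols))
    f1 = {c: [("col", c)] + [cell for cell in cells if cell[1] == c]
          for c in columns}                        # column factor links
    f2 = {(c1, c2): [(r, c1) for r in range(len(table))] +
                    [(r, c2) for r in range(len(table))]
          for c1, c2 in combinations(columns, 2)}  # pairwise relation factor links
    return cells, columns, f1, f2

cells, cols, f1, f2 = build_factor_graph([["USA", "Washington"],
                                          ["China", "Beijing"]])
```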
IMP is an iterative inference method which reassigns each candidate of a variable node to its maximum-likelihood value. IMP evaluates the likelihood according to the constraints of column type and relation which are relevant to the corresponding variable node. In each iteration, the maximum-likelihood value for each variable node is computed using its linked factor nodes. Algorithm 1 shows how IMP performs iterative inference over the factor graph model to find the maximum-likelihood set of referent entities for all assignments. The method initializes each current assigned candidate with the entity ranking at the top of the re-ranked list derived from the re-rank module (lines 2-4), and then iteratively recomputes the constraints between column type, relation, and entity (lines 5-12). Finally, the candidate entity is reassigned until there is no change in assignment to any variable node or the maximum iteration limit is reached (lines 13-15).
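Algorithm 1 itself is not reproduced here; the following Python sketch shows the control flow it describes: initialize with the top re-ranked candidate, then iteratively reassign until a fixed point or the iteration cap. The `score` callback standing in for the column-type and relation factor constraints is a placeholder:

```python
def imp(candidates, score, max_iter=10):
    """candidates: dict node -> ordered candidate list (best first).
    score(node, cand, assignment): likelihood of `cand` for `node`
    given the other current assignments (stands in for the factor-node
    constraints of Algorithm 1).  Returns the final assignment."""
    # Lines 2-4: start from the top of each re-ranked list.
    assignment = {node: cands[0] for node, cands in candidates.items()}
    for _ in range(max_iter):                      # lines 5-12
        changed = False
        for node, cands in candidates.items():
            best = max(cands, key=lambda c: score(node, c, assignment))
            if best != assignment[node]:
                assignment[node] = best
                changed = True
        if not changed:                            # lines 13-15: fixed point
            break
    return assignment

# Toy usage: a score that simply prefers longer candidate strings.
result = imp({"x": ["ab", "abc"], "y": ["q", "qq"]},
             lambda node, cand, assignment: len(cand))
```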
Since all constraint factors work by iterative reassignment, we only present Algorithm 2, which describes the column constraint factor, as an example. Algorithm 2 shows how the column constraint factor works: the factor first elects the most common type as the column type annotation through a majority voting process (lines 2-13). The query function in line 4 is described in Section 4.3.1. If more than one type obtains the same vote score, the function will compare their granularity, which is computed by counting the number of instances that belong to each type; the more specific a type is, the fewer instances it has. IMP selects the type with the maximum vote score and minimum granularity as the column annotation, which means all assignments in this column should have this type. Then, the column constraint is used as feedback: the factor node sends a change message along with the target type to those variable nodes whose current assigned candidate's type does not comply with the column annotation. The relation constraint process is similar to the column constraint, so we only list the differences here. Compared to the column constraint, the relation constraint first generates candidate relations from the current assignments of the cell nodes in the same row of two columns, and then sends a message to both cell nodes. The query is described in Section 4.3.1. The relation can be established in both directions, so when a relation annotation between two columns is elected, IMP takes both directions into account.

Semantic WoT Search and Reasoning
To perform semantic sensor search, anomaly diagnosis, and automatic control tasks, the Apache Jena reasoner is used for SPARQL queries and rule-based semantic reasoning over the prior knowledge.
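A minimal Python sketch of the election step of Algorithm 2, majority voting with a granularity tie-break; the type names and instance counts below are invented for the example:

```python
from collections import Counter

def elect_column_type(candidate_types, granularity):
    """candidate_types: one list of candidate types per cell in the column.
    granularity: type -> number of KB instances of that type
    (fewer instances = more specific).  Elects the type with the highest
    vote count, breaking ties by minimum granularity, as in Algorithm 2."""
    votes = Counter(t for types in candidate_types for t in types)
    # Sort key (-votes, granularity): maximum votes first, then most specific.
    return min(votes, key=lambda t: (-votes[t], granularity.get(t, float("inf"))))

# Invented example: "Place" wins on votes here.
col_type = elect_column_type(
    [["Country", "Place"], ["Country", "Place"], ["Place"]],
    {"Country": 200, "Place": 735000})
```

On a tie in votes, the more specific type (smaller instance count) wins, which is the feedback behavior described above.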
The details of the process are described as follows:

Semantic Search for WoT Resource Discovery
Since the domain knowledge is linked to common sense knowledge via the EL model (for example, the instances of Region, SensorType, Owner, and Unit are linked and aligned to entities of Location, Organization, and UnitofMeasurement in DBpedia), the common relationships are inherited by the domain knowledge as well. The enriched knowledge can be used to facilitate searching for semantic entities which share common relationships. To annotate the linking relationship between domain and common sense knowledge, the linkTo property is used to represent the linkage. Searching for WoTEntity instances which have similar properties requires fuzzy inference over their potential relationships, and these relationships rely on reasoning over the common background knowledge. For instance, Beijing and Shanghai are both cities of China, and this knowledge is already stored in DBpedia. Given a query such as "search for the sensors located in China", the common sense knowledge can be used to infer the correlations among sensors having similar properties. The inference process can be divided into two steps: the first is to search for the domain Region instances that have linkTo properties with entities in DBpedia, and the second is to search for the linked entities that satisfy the queried relation (located in China). The SPARQL query is illustrated in Figure 8.

Semantic Reasoning for Anomaly Diagnosis and Automatic Control
After the search and composition of WoT resources, reasoning is triggered for anomaly diagnosis and automatic control. The reasoning strategies are based on predefined Jena rules and modeled as SPARUL (SPARQL/Update) [47] statements for execution.
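The query in Figure 8 is not reproduced here; the following Python string sketches the two-step pattern described above. The :linkTo property follows the text, but the :hasRegion property, the namespace URI, and the use of dbo:country as the DBpedia predicate expressing "located in China" are all assumptions for illustration:

```python
# Hedged sketch of the two-step search in Figure 8: step 1 follows the
# :linkTo links from domain Region instances into DBpedia, step 2 filters
# the linked entities by the queried relation.  `dbo:country`, `:hasRegion`,
# and the example.org namespace are assumptions, not the system's actual terms.
SENSOR_SEARCH_QUERY = """
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX dbr:  <http://dbpedia.org/resource/>
PREFIX :     <http://example.org/swot#>

SELECT ?sensor WHERE {
  ?sensor    :hasRegion  ?region .    # domain knowledge
  ?region    :linkTo     ?dbpRegion . # step 1: EL link into DBpedia
  ?dbpRegion dbo:country dbr:China .  # step 2: queried relation
}
"""
```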
The process of the reasoning model can be divided into three parts:

(1) Setup the FoI and PhysicalProcess models: inferring relationships among WoTEntity instances with sensorProperty according to configurable rules of FOI and PhysicalProcess instances. Once the instances of WoTEntity are initialized, the FOI instances will be created, as well as the relations to the WoTEntity's common and intermediate sensorProperty instances, according to the rule of the FOI model (#1, #2, and #3 SPARUL codes in Figure 9). To initialize the PhysicalProcess instances, the common sensorProperty instances are linked to PhysicalProcess instances as input and output parameters, while the intermediate sensorProperty instances are linked as intermediate parameters (#4 SPARUL code in Figure 9). Finally, the State instances are initialized and linked to Actuator instances, and the inferred relations reveal which states the actuator can change (#5 SPARUL code in Figure 9).

(2) Setup the anomaly diagnosis model: first, inferring causal relationships among input, output, and intermediate sensorProperty instances of WoTEntity instances. The causal relationships can be modeled as positive or negative correlations, or more complicated correlations (#6 SPARUL code in Figure 10). Second, inferring the causes of anomaly effects according to the PositiveCorrelation, NegativeCorrelation, or other correlations among sensorProperty instances of the corresponding WoTEntity instances (#7 SPARUL code in Figure 10).

(3) Setup the automatic control model: according to the anomaly diagnosis model, it infers the subscription relationship between Actuator instances and anomaly Event instances. Once the Actuator instance has been initialized, it will subscribe to an anomaly Event instance generated by a Sensor instance (#8 SPARUL code in Figure 11). Then, according to the FOI and PhysicalProcess model, it infers the relationships between the Action instances of Actuator instances and the State instances observed by Sensor instances with sensorProperty instances (#9 SPARUL code in Figure 11).

Figure 11. Creating the automatic control model.

Use Case and Proof-of-Concept Implementation
To verify the feasibility and performance, we demonstrate the framework by designing an anomaly diagnosis and automatic temperature control use case for a building automation system. The framework has been open-sourced on GitHub (https://github.com/minelabwot/SWoT) and more details can be found there as complementary materials. The scenarios are composed of a temperature sensor, a camera sensor, and a cooling air conditioner (CAC) deployed in each room of buildings at different locations. The temperature sensor can directly detect the indoor temperature, while the CAC can tune the indoor temperature by turning the machine on/off, or the temperature up/down.
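As an illustration of the diagnosis step in part (2), the following Python sketch walks the positive/negative correlations between sensor properties backwards from an anomaly to collect candidate causes; the property names and the correlation table are invented for this example:

```python
def diagnose(anomalous_property, correlations):
    """correlations: list of (cause, effect, sign) triples, where sign is
    'positive' or 'negative', mirroring the PositiveCorrelation /
    NegativeCorrelation relations set up via the SPARUL rules in Figure 10.
    Walks causal links backwards from the anomalous property and returns
    candidate (cause, sign) pairs."""
    causes, frontier = [], [anomalous_property]
    seen = set(frontier)
    while frontier:
        effect = frontier.pop()
        for cause, eff, sign in correlations:
            if eff == effect and cause not in seen:
                causes.append((cause, sign))
                seen.add(cause)
                frontier.append(cause)
    return causes

# Invented example: room temperature is driven up by occupation
# and down by the CAC state.
causes = diagnose("IndoorTemperature", [
    ("Occupation", "IndoorTemperature", "positive"),
    ("CAC_State", "IndoorTemperature", "negative"),
])
```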
The camera is used to detect the occupation of the room and the exact number of persons in the room. Our goal is to provide anomaly diagnosis and automatic temperature adjustment services according to indoor temperature anomaly events.

(1) According to the SWoT-O vocabulary, we first set up a basic domain knowledge base of how these sensors and actuators collaborate with each other via the SWoT-O Annotator (seen in Figure 12). The temperature sensor and camera are annotated as ssn:Sensor with :WoTProperty, such as qu:Unit, :Location (:Region and :Spot), :Owner, and :EntityType, while the CAC is annotated as san:Actuator with :Action. The ssn:FeatureofInterest is modeled as the target scenario composed of the :SensorProperty of the temperature sensor, camera, and CAC. The :PhysicalProcess is modeled as the causal relation among these devices with their :SensorProperty as input and output parameters. In this use case, the causal relations are categorized into two types (:PositiveCorrelationProcess and :NegativeCorrelationProcess) as reference knowledge for diagnosing the cause of the anomaly. As a proof-of-concept implementation, Protégé is used as the modeling tool (seen in Figure 12) for the SWoT-O ontology, and we implement the SWoT-O Annotator based on J2EE. Neo4j [48] and Jena TDB [49] are used for knowledge graph and RDF triple storage. To validate the SWoT-O, the ontology has been submitted to Linked Open Vocabularies (LOV) [50] and visualized with WebVOWL [51] for linking vocabularies and following the best ontological practices.

(2) To link the knowledge base with common knowledge in DBpedia, we run the EL Annotator to execute the EL task. In this use case, the instances of :Region, qu:Unit, :Owner, and :EntityType are linked to DBpedia, and the linking relations are stored in both Neo4j and Jena TDB. Figure 13 presents part of the EL results stored in the Neo4j graph database. The data sources of the constructed hybrid KB based on the use case have been published at https://github.com/minelabwot/SWoT.

Figure 13. Parts of the EL results stored in the Neo4j graph database. The blue circles represent the linked entities and types in DBpedia, annotated with the "linkTo" property.

(3) Based on the hybrid knowledge base, the system will perform semantic search and reasoning tasks.
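A minimal sketch of what the stored linkTo relations of step (2) look like, using plain Python tuples as triples; the instance names come from the use case, but the prefixed identifiers are illustrative, not the URIs actually stored in Neo4j and Jena TDB:

```python
# Hedged sketch: domain instances linked to DBpedia entities via :linkTo,
# represented as simple (subject, predicate, object) triples.
triples = [
    (":Region/Beijing",  ":linkTo", "dbr:Beijing"),
    (":Region/Shanghai", ":linkTo", "dbr:Shanghai"),
    ("qu:Unit/Celsius",  ":linkTo", "dbr:Celsius"),
]

def linked_entities(triples, predicate=":linkTo"):
    """Collect the DBpedia side of every linkTo triple."""
    return [o for s, p, o in triples if p == predicate]

targets = linked_entities(triples)
```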
Since devices are created and deployed in a distributed manner, the first step is to query the target devices which can be composed into the temperature adjusting and anomaly diagnosis scenario. The second step is to infer the cause of the anomaly once an anomaly event occurs, after which the system will automatically adjust the indoor temperature by controlling the CAC. In our case, it is assumed that the anomaly events have been detected via algorithms [24,42,52] and annotated with the SWoT-O ontology. According to the model in Section 4.4, an Apache Jena reasoner is used in the implementation. By triggering a "High_Temperature_Anomaly_Event", the reasoner will infer that the Occupation of the room and the State of the CAC have either positive or negative correlations with the high temperature Event; thus, to adjust the temperature to a normal state, the "turn-down" operation of the CAC will be actioned automatically. The demo results can be found at https://github.com/minelabwot/SWoT/.

Entity Linking Evaluation
We use fixedWebTable from Bhagavatula et al. [31] and WoT metadata generated from our use case as the experimental dataset. The fixedWebTable dataset contains 9177 original texts with their annotations from 428 tables extracted from the web. The text data (7500 samples) are used for pre-training the re-ranking model, and the WoT metadata are generated by using the SWoT-O ontology in our application, denoted as the Application Generated (AG) table (Figure 14). To pre-train the candidate re-ranking module, we select 7500 original texts from fixedWebTable as the training set, and another 1000 texts of fixedWebTable together with 100 texts from the AG table as validation sets for tuning the model. The remaining 677 fixedWebTable texts and 100 AG table texts are used as test sets. To label the training and validation datasets, the cell texts of the tables are used as input queries to the DBpedia endpoint; the returned results which fully match the entities are labeled with a high ranking score (five points), and the other results are labeled with a low score (one point). To preprocess the test datasets, we manually annotate each cell text with its ground truth in DBpedia.
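The labeling rule above (five points for a full match, one point otherwise) can be sketched as follows; treating underscores in entity names as spaces for the comparison is an assumption about what "fully match" means:

```python
def label_candidates(cell_text, candidates):
    """Label returned candidates for SVM-rank training: 5 points when the
    candidate fully matches the cell text, 1 point otherwise, following
    the scheme described in the text.  Underscore/space normalization is
    an assumption about the matching rule."""
    norm = lambda s: s.replace("_", " ").strip().lower()
    return [(c, 5 if norm(c) == norm(cell_text) else 1) for c in candidates]

labels = label_candidates("United States",
                          ["United_States", "Republican_Party_(United_States)"])
```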
Then we drop the texts which have no ground truth in DBpedia from our dataset and denote this new test dataset as D. Accuracy is chosen as the evaluation metric to score the annotation results, which is standard in information retrieval. Re-rank is an important step in EL Annotator. It is known that only one assignment matches the ground truth in DBpedia. For each text in D, we find its re-ranked candidate list, and evaluate re-rank accuracy by judging whether the top N re-ranked entities contain the ground truth. Figure 15 presents the top-N ranking lists of parts of the candidate entities before and after re-ranking. Table 1 shows the accuracy comparison between two test dataset's parts in the re-rank module. standard in information retrieval. Re-rank is an important step in EL Annotator. It is known that only one assignment matches the ground truth in DBpedia. For each text in D, we find its re-ranked candidate list, and evaluate rerank accuracy by judging whether the top N re-ranked entities contain the ground truth. Figure 15 presents the top-N ranking lists of parts of the candidate entities before and after re-ranking. Table 1 shows the accuracy comparison between two test dataset's parts in the re-rank module. Figure 15. The top-N ranking lists of parts of the candidate entities before and after re-ranking. The left figure is the initial ranking list before re-ranking, while the right figure is the re-ranked top-N lists with ranking scores. To compare with the accuracy that a text's ground truth ranks in the top N after the re-ranking model, we use the origin rank list which is obtained from querying the DBpedia SPARQL endpoint. The judgment is the same as what we do to the re-ranked candidate list. It can be seen in Table 1, without a candidate entities re-ranking, that the accuracy of "ground truth rank in top 1" has a significant decrease, which means the iterative inference model will take more iterations to converge. 
For each cell text, we only set the top 10 entities in the re-ranked list as the input candidate entities for the iterative inference module. The decreasing accuracy of "ground truth rank in top 10" will directly result in annotation errors due to the absence of the target assignment. After the convergence iterations, every cell text in an input table is assigned either to an entity in the KB or to the string "no-annotation". The IMP algorithm cannot annotate a text with a nonexistent entity; thus, we have dropped the entities that are missing ground truth in DBpedia. We then compare the entity links generated by our system to the ground truth annotated manually beforehand. If the assignment generated by the iterative inference model complies with the ground truth, we consider it a correct prediction; otherwise, it is wrong. We present the accuracy results of our algorithm in Table 2, and also present the accuracy without the re-rank module for contrast, to show how important re-ranking is. As the input table has several columns, we evaluate entity linking accuracy for each column and find that entity annotations under the column which represents the unit have an extraordinarily low accuracy, only 26.1%, far below the other columns, whose mean accuracy is 88%. This lower accuracy is likely due to the lack of relevant data in DBpedia: although our system might have discovered the correct assignments for column type and relation, if an entity does not have the same type and relation information in DBpedia, the system will miss this correct assignment. Performance is also evaluated to test how quickly the IMP converges. The number of variable nodes that need to be reassigned decreases to zero after the first two iterations. Two reasons can explain this phenomenon: first, the re-rank model has such a high accuracy that it precisely places the target assignment at the top of the re-rank list; second, few relations exist between the columns in our test dataset. The number of relations is positively related to the number of messages a variable node receives, so a variable node with several relations may result in an increase in iterations.
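The convergence behavior described above can be illustrated with a toy loop that counts how many variable nodes change their assignment per iteration and stops at zero. The update function here is a hypothetical stand-in; the real system runs the IMP message-passing algorithm over the probabilistic graph:

```python
# Toy sketch of the convergence criterion: the iterative inference stops
# once no variable node changes its assignment between rounds.
def iterate_until_stable(assignments, update_fn, max_iters=10):
    history = []
    for _ in range(max_iters):
        new = update_fn(assignments)
        changed = sum(1 for k in assignments if new[k] != assignments[k])
        history.append(changed)
        assignments = new
        if changed == 0:
            break
    return assignments, history

# A dummy update that fixes node "b" on the first pass, then stabilizes.
def dummy_update(a):
    out = dict(a)
    if out["b"] != "e2":
        out["b"] = "e2"
    return out

final, history = iterate_until_stable({"a": "e1", "b": "e9"}, dummy_update)
print(history)  # -> [1, 0]  (reassignment counts per iteration)
```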
The average time consumed per annotation on the WoT table is 4.1 s, with considerable variation depending on the number of rows and columns. Discussion We have demonstrated an anomaly diagnosis and automatic temperature control application to verify the feasibility and performance of the SWoT4CPS framework. Its advantages and disadvantages can be summarized as follows: (1) Compared with research in the state of the art, the SWoT-O ontology describes not only the physical attributes, deployment, and sensing/observing process of sensors, but also defines actuators with their actions, as well as the monitor-control or cause-effect relationships between them. Based on this prior knowledge, assisted decision-making services such as anomaly detection and diagnosis can be processed via semantic reasoning, acting on the sensing results and automatically controlling actuators upon anomaly discovery. These intelligent services are necessary in CPS applications for building automation, Industry 4.0, and smart city scenarios. However, the current SWoT-O still needs to be improved by reusing external ontologies. Regarding actuators with their actions for control loops, the SAN and IoT-O ontologies could be referred to, although we have not considered very detailed properties for actuators or how to align these concepts and relationships (i.e., san:actsOn, san:actuatorInput, san:actuatorOutput, and iot-o:Operation) with the current SWoT-O ontology. Complementing SWoT-O in this respect is future work, and its usage in automatic control/service composition applications should be modeled and designed when the details of actions and services need to be planned and invoked. (2) The EL method can semi-automatically link domain knowledge to common-sense knowledge, which enlarges the domain KB and improves its interoperability.
Based on the hybrid knowledge base, CPS applications can perform a two-stage reasoning task, with common-sense knowledge reasoning for semantic relatedness and domain knowledge reasoning for causal relations. According to the experiments, the EL method achieves more than 90% accuracy, and its time complexity is at a tolerable level for offline annotation tasks. The results show that the framework can improve the effectiveness and efficiency of semantic annotation and KB construction for CPS applications. (3) Though the EL method provides a semi-automatic and scalable model, the existing prior knowledge for sensor networks and CPS applications is not currently sufficient to adopt machine learning methods for training models that extract semantic entities from WoT representations, or that link entities and align concepts with the global KB. Consequently, the current EL model can only extract some of the entities with types from the WoT metadata, while relation extraction among these entities or types does not work well. Thus, a hybrid framework combining manual knowledge engineering with semi-automatic learning methods is more practical for bootstrapping a uniform KB for CPS applications. (4) In the current SWoT4CPS framework, which works with static annotated knowledge, semantics have not yet been fully exploited from streaming data. Section 2 listed the state-of-the-art work on this issue; these studies mainly use ontologies to annotate the raw sensory data directly and use extended SPARQL (e.g., CQELS [53], C-SPARQL [54], EP-SPARQL [55], etc.) to efficiently query the RDF stream for semantic event processing. This line of work focuses on mining and reasoning over online streaming data in a relatively short time window for real-time event-driven applications, while historical time series data reflect the dynamic states and state transfers of the observed system.
Thus, mining semantics from multivariate time series could enrich the CPS KB with more dynamic knowledge, revealing how the system works. Such knowledge and predictive models will be very useful for predicting future states of the system, finding correlations between observed properties, and making decisions for further automatic operations. Integrating event processing models with time series mining models is therefore future work for online semantics extraction, annotation, and prediction of sensory data. Conclusions This paper proposes the SWoT4CPS framework, which provides a hybrid solution combining ontological engineering methods (by extending SSN) and machine learning methods based on an entity linking (EL) model. To provide a uniform ontological representation of WoT resources in CPS applications, the SWoT-O ontology is modeled for further semantic logical reasoning. To link and align the domain knowledge with a global KB, an EL framework for WoT metadata is proposed based on a probabilistic graphical model. The inference in this model is based on a message-passing algorithm that calculates the joint probability of contexts inside the representation of WoT metadata. To verify the feasibility and performance, we demonstrate the framework by implementing an automatic temperature control and anomaly diagnosis use case in a building automation system based on the SWoT-O ontology. Evaluation results on the EL method show that linking domain knowledge to DBpedia achieves relatively high accuracy and that the time complexity is at a tolerable level. Future work on our framework will focus on following existing best practices and methodologies from state-of-the-art work to improve the SWoT-O ontology by better reusing external ontologies, especially for actuators and their abilities to handle automatic control loops. Furthermore, designing and integrating semantic stream-mining models for multivariate time series is also a future challenge.
In particular, pattern recognition and semantic reasoning methods based on machine learning and deep learning models will be further investigated in future work.
Navigating the Complex Solid Form Landscape of the Quercetin Flavonoid Molecule Quercetin, a naturally occurring bioflavonoid substance widely used in the nutraceutical and food industries, exists in various solid forms that can have different physicochemical properties, thus impacting this compound’s performance in various applications. In this work, we will clarify the complex solid-form landscape of this molecule. Two elusive isostructural solvates of quercetin were obtained from ethanol and methanol. The obtained crystals were characterized experimentally, but the crystallographic structure could not be solved due to their high instability. Nevertheless, the desolvated structure resulting from a high-temperature treatment (or prolonged storage at ambient conditions) of both these two labile crystals was characterized and solved via powder X-ray diffraction and solid-state nuclear magnetic resonance (SSNMR). This anhydrous crystal structure was compared with another anhydrous quercetin form obtained in our previous work, indicating that at least two different anhydrous polymorphs of quercetin exist. Navigating the solid-form landscape of quercetin is essential to ensure accurate control of the functional properties of food, nutraceutical, or pharmaceutical products containing crystal forms of this substance. ■ INTRODUCTION Quercetin, 2-(3,4-dihydroxyphenyl)-3,5,7-trihydroxy-4H-chromen-4-one (Scheme 1), is a major dietary flavonol found in many fruits and vegetables, including onions, tomatoes, apples, and berries. 1,2 It belongs to a group of plant metabolites, named flavonoids, which are thought to provide health benefits through cell signaling pathways and antioxidant effects. 3 The quercetin molecule consists of a pyrone ring and a phenyl ring, which constitute the hydrophobic part of the molecule and can form hydrophobic interactions such as van der Waals attractions.
2,4 The hydrophilic part of the molecule consists of five hydroxyl groups that determine the molecule's biological activity and can act as hydrogen-bond acceptors and/or donors, as well as an ether and carbonyl group acting as acceptors for both intramolecular and intermolecular hydrogen bonding. 4−7 Quercetin has stimulated considerable interest in recent years, and it is the most extensively studied flavonoid, due to the significant association between its dietary consumption and various health benefits, including antioxidant, anti-inflammatory, and antitumoral activities. 1,2,4,8,9 Due to this wide range of health benefits and biological effects, quercetin finds a multitude of applications in the food and nutraceutical industries. 2 Quercetin dihydrate is marketed as a dietary supplement in capsule form, to help improve anti-inflammatory and immune responses. 10 As several solvents and processing conditions can be used in the manufacturing of crystalline quercetin as well as for its application, it is important to have a clear understanding of the solid-form landscape of the compound. 11−13 A thorough knowledge of the crystal forms of quercetin and the transformation conditions between them is essential to design storage conditions, avoid any unexpected transformations during manufacturing, and ensure accurate control of the functional properties of products containing quercetin. A Cambridge Crystallographic Data Centre (CCDC) search on quercetin crystal forms yields a vast range of structures. These include solvates and hydrates, cocrystals, and cocrystal solvates, such as a quercetin-DMSO solvate, quercetin-isonicotinamide, quercetin-praziquantel, and quercetin-theophylline cocrystals, just to name a few.
14−17 Figure 1 summarizes briefly the most common solid forms of quercetin reported in the literature, showing the powder X-ray diffraction (PXRD) patterns for the two deposited anhydrous quercetin forms, the two hydrates, and the DMSO solvate (QDMSO) recently solved by our group. The more stable and commercially available solid form of quercetin is the dihydrate (space group P1̅, triclinic). This was solved in 1985 by Rossi. 1,18 Quercetin monohydrate (space group P21/c) was first reported in 2011 by Domagała et al., who determined its PXRD pattern and applied the multipolar atom model to analyze the structure in terms of its geometry, molecular packing, and intra- and intermolecular interactions (CCDC identifier: AKIJEK). 19 The monohydrate structure was nucleated from an acetonitrile solution; however, the exact experimental procedure remains unclear. Information on the anhydrous polymorphs of quercetin is confusing, due to the difficulty in obtaining crystals of sufficient size and quality for single-crystal X-ray diffraction. This issue is well highlighted in the literature, and it is the reason why the structure of anhydrous quercetin and its more or less stable polymorphs remains poorly understood. 1,18−20 In 2004, Olejniczak et al. confirmed the existence of an anhydrous form by several experimental techniques (PXRD, DSC, TGA, and SSNMR) and DFT calculations. 20 However, the authors did not deposit any structure in the CCDC. Filip et al. in 2013 followed a multitechnique approach, combining PXRD data with information from SSNMR and molecular modeling to elucidate the conformation of quercetin in the anhydrous structure and gain insight into the relationship between the hydrogen-bond network and crystal packing pattern.
21 In 2016, the first structure of anhydrous quercetin (CCDC identifier: NAFZEC) was deposited by Vasisht et al., who solved it using computational analysis of PXRD data, indicating that quercetin crystallizes in an orthorhombic anhydrous form with four molecules per unit cell and space group Pna21. 6 However, the data and lattice parameters for the anhydrous quercetin suggested by Vasisht et al. are reported several times in the literature to be problematic and to present large deviations from further lattice and geometry optimizations of the structure. 21,22 In 2016, Miclaus et al. reported the presence of two weak and highly unstable methanol and ethanol solvates, which result from quercetin recrystallization from the respective solvents and occur in mixtures with the anhydrous form. 23 It is reported that, in the two solvates, the solvent molecules are weakly hydrogen-bonded to the quercetin molecules and serve as intermediates in the transformation to an anhydrous quercetin form. The transformation was studied using SSNMR, and the PXRD patterns of the weak solvates and their resulting anhydrous forms were obtained. 23 However, this anhydrous quercetin PXRD pattern does not match that of the anhydrous quercetin previously solved by Vasisht et al. Finally, in 2020, another anhydrous quercetin structure was determined (CCDC identifier: NAFZEC01) from PXRD data by Maciołek et al. 24 This anhydrous quercetin form was an intermediate product of the thermal degradation of sodium 3,3′,4′,5,7-pentahydroxyflavon-5′-sulfonate tetrahydrate at 285 °C and was recrystallized from its molten phase. The structure was solved in the C2/c space group, with four symmetrically independent quercetin molecules in the unit cell. The PXRD pattern of the C2/c anhydrous form is different from the structure previously reported by Vasisht et al. In this work, we explored and clarified the structure of anhydrous quercetin and its polymorphs.
Furthermore, we studied the relative stability of such crystal forms and the possible crystallization pathways. As part of this investigation, we have found and experimentally characterized two elusive solvates of methanol and ethanol, which were obtained as intermediates during the crystallization of anhydrous quercetin. In summary, a wide range of solid-state characterization techniques were used to determine the solid-form landscape of quercetin, understanding the relative stability and kinetics of polymorphic transformation in the solid state. A good knowledge of this information can guide the choice of crystallization parameters to target a particular form of quercetin and ultimately lead to faster product and process development. Slurrying of QDH in Ethanol-Water Solvent Mixtures (QE). Slurries of quercetin in ethanol-water solvent mixtures were prepared by adding 4.0 g of QDH to 100 g of 100, 90, 85, 75, 60, and 15% (w/w) ethanol-water solvent mixtures. The temperature of the slurry was kept constant at 20 °C using a Tamson TLC2 recirculating chiller. The slurry was stirred using magnetic stirring at approximately 300 rpm for 48 h. The solid samples removed from the slurry were filtered using a Buchner flask, funnel, and filter paper to remove the solvent. The samples were allowed approximately 24 h to dry completely. Approximately 10 mL of supernatant solution from the slurry experiments carried out in 100% ethanol was transferred to several Petri dishes together with seed crystals from the same slurry. This was done to promote growth of the seeds by evaporation and to study in more detail the morphology of the obtained crystals, which we named QE. The Petri dishes were covered with Parafilm with holes to allow slow, controlled evaporation of the ethanol. Slurrying of QDH in Methanol (QME). QDH was slurried in 100% methanol following the same methodology as for 100% ethanol. The solid samples were filtered and allowed approximately 24 h to dry.
Scanning Electron Microscopy (SEM). The crystal morphology of the QE crystals was determined using SEM. The dry samples were imaged using a Carl Zeiss EVO MA15 scanning electron microscope. Samples were arranged on Leit tabs attached to SEM specimen stubs, and an iridium coating was applied before measurement. Samples from the 100% ethanol slurry and from the growth experiments on the Petri dishes were imaged. Thermogravimetric Analysis Coupled with Differential Scanning Calorimetry (DSC/TGA). TGA and DSC experiments were performed on a Mettler Toledo TGA/DSC 3+ STARe System. The samples (around 10−15 mg) were placed in 70 μL aluminum pans, covered with a lid, and heated from 20 to 500 °C at a heating rate of 10 °C min−1. Nitrogen was used as purge gas at 50 mL min−1. Measurements were repeated three times. The samples were filtered the day before the analysis and left to dry overnight. X-Ray Diffraction (SAXS/WAXS, PXRD, VT-PXRD). The small- and wide-angle X-ray scattering (SAXS/WAXS) data were collected on a SAXSpace instrument (Anton Paar GmbH, Graz, Austria) equipped with a Cu anode that operates at 40 kV and 50 mA (λ = 0.154 nm). The PXRD data were collected on: (1) a Panalytical X'Pert PRO, which was set up in Bragg-Brentano mode, using Cu Kα radiation (λ = 1.54184 Å), in a scan between 5° and 50° 2θ with a step size of 0.032° (2θ) and time per step of 25 s; (2) a Rigaku Rint2500 rotating Cu anode source, working at 50 kV and 200 mA in Debye−Scherrer geometry. The latter diffractometer is equipped with an asymmetric Johansson Ge (111) crystal to select the monochromatic Cu Kα1 radiation (λ = 1.54056 Å) and the silicon strip Rigaku D/teX Ultra detector. Data were collected from 5° to 80° (2θ) with a 0.02° (2θ) step size and counting time of 6 s/step. The powder was introduced in a glass capillary of 0.5 mm in diameter and mounted on the axis of the goniometer.
The capillary was rotated during the measurement to improve the randomization of the orientation of the individual crystallites to reduce the effect of possible preferred orientation. The Variable Temperature PXRD (VT-PXRD) data were collected on the Panalytical X'Pert PRO, and the temperature was increased from 20 to 90 °C at a rate of 10 °C min−1. The analysis of the crystal packing was performed using the program Mercury, version 2022.3.0. 25 QE Mass Loss over Time Experiments. Dynamic Vapor Sorption (DVS) Experiment. A 50 mg sample of QE in 100% ethanol slurry was placed on a DVS pan, and the mass change over a period of 20 h was monitored, at a constant temperature of 20 °C and a relative humidity (RH) of 20%. The DVS experiments were performed on a Surface Measurement Systems DVS Resolution instrument. Monitoring Sample Mass of QE over Time. A sample of QE was filtered, and the solid was placed on a plastic Petri dish and left uncovered at ambient conditions. The mass of the sample was measured once a day for 6 days with a lab balance to observe any mass changes. Stability Studies for the QE Crystals. The thermodynamic stability of QE was determined by measuring the SAXS/WAXS patterns of QE samples treated under different conditions. The samples tested include: 4-week-old and 16-month-old samples of QE left at room-temperature conditions in the laboratory, a sample of QE slurried in pure water for 24 h and magnetically stirred at 300 rpm, and a sample of QE that was treated in a vacuum oven at 0 mbar for 24 h. Solid-State NMR Spectroscopy. Solid-state 13C CPMAS NMR spectra were acquired with a Bruker Avance II 400 Ultra Shield instrument, operating at 400.23 and 100.63 MHz for 1H and 13C nuclei, respectively. The powder sample was packed into cylindrical zirconia rotors with a 4 mm o.d. and 80 μL volume. A certain amount of sample was collected from each batch and used without further preparation to fill the rotor.
The 13C CPMAS spectra were acquired at a spinning speed of 12 kHz, using a ramp cross-polarization pulse sequence with a 90° 1H pulse of 3.60 μs, a contact time of 3 ms, a recycle delay ranging from 1 to 10 s, and a number of scans between 100 and 820, depending on the sample. A two-pulse phase modulation (TPPM) decoupling scheme was used, with a radiofrequency field of 69.4 kHz. The 13C chemical shift scale was calibrated through the methylenic signal of external standard α-glycine (at 43.7 ppm). Calculations. Crystal structures were optimized using the Quantum Espresso suite (v. 6.4.1), 26 employing the projector-augmented wave (PAW) approach, with the nonlocal vdW-df2 method 27 and B86r functional 28 with the SSSP set of pseudopotentials. 29 An energy cut-off of 60 Ry was used. The experimental and optimized crystal structures were visualized and compared using the CSD program Mercury. The RMSD20 was calculated with the "crystal packing similarity" utility, considering a cluster of 20 molecules and a tolerance value of 20% on angles and distances. Crystal Structure Solution via Powder X-Ray Diffraction (PXRD). The ab initio solution and structure refinement process were automatically performed by the EXPO software, 30 a package capable of carrying out the following steps: (a) determination of unit cell parameters and identification of space group, (b) structure solution by direct methods and/or real-space approach, and (c) structure model refinement by the Rietveld method. 31 The first low-angle well-defined peaks in the experimental diffraction pattern were selected using a graphical peak selection tool and actively used for indexing via the N-TREOR09 32 and DICVOL04 33 programs embedded in EXPO. The space group was determined based on the evaluation of the systematic absences. Each structure was solved with a real-space method based on the simulated annealing algorithm implemented in EXPO.
The starting model was derived from the crystal structure of the anhydrous quercetin polymorph, refcode NAFZEC, 6 obtained from the CSD, 34 and the geometry optimization was achieved by the program MOPAC. 35 The simulated annealing algorithm was run 20 times on a Linux workstation in default mode, with parallel calculations over 20 CPUs. The best solution with the lowest cost function value was selected. The criterion to accept the solution was also based on the soundness of the crystal packing. The solution obtained by the direct-space method was also confirmed by direct methods. Density-functional theory (DFT) geometry optimization with Quantum ESPRESSO 36 was only performed on hydrogen atoms to improve their positions. The derived structure was refined by the Rietveld method. Restraints were applied to bond distances to stabilize the refinement. All H atoms bonded to C atoms were treated as riding under the constraint on atomic displacement parameters Uiso(H) = 1.2 · Uiso(C). Peak shape was modeled using the Pearson VII function. The atomic displacement parameters were refined isotropically and constrained to have the same value for atoms of the same chemical species. Solubility Measurements of Quercetin Anhydrous (QA), Quercetin-Methanol (QME), Quercetin-Ethanol (QE), Quercetin Dihydrate (QDH), and Quercetin Anhydrous from Desolvation of QDMSO (QA2). The Crystal16 equipment (Technobis) was used to determine the solubility of quercetin solid forms. Isopropanol was used as a reference solvent. In the Crystal16, the clear points of eight 1 mL stirred vials can be measured in parallel and automatically, based on the value of the turbidity. The bottom stirring speed was set at 800 rpm. The temperature at which the suspensions become clear solutions by heating (rate of 0.3 °C min−1) was taken as the saturation temperature of the measured samples. Slurrying of QDH in Ethanol-Water Solvent Mixtures and Methanol.
The solid crystals from the various ethanol-water solvent mixtures after slurrying were tested using SAXS/WAXS to identify the solid form (Supporting Information, Figure S1). The SAXS/WAXS patterns for the solid samples from 15 to 90% (w/w) ethanol slurries were identical to that of quercetin dihydrate. This means that the stable solid form of quercetin for those ethanol-water solvent mixtures is the dihydrate. QDH as purchased was also tested using SAXS/WAXS and is shown in Figure S1 for comparison. The solid taken from the 100% ethanol slurry exhibits a different PXRD pattern, which does not correspond to either QDH or its desolvated form or any other deposited quercetin structure. 1,6,14,19,24 However, the pattern looks identical to the pattern previously reported by Miclaus et al., who described it as a weak quercetin-ethanol solvate. 23 This could be a case of a solvent-mediated polymorphic transformation, in which QDH dissolves in pure ethanol and recrystallizes as a weak solvate crystal structure that seems to form interactions with ethanol itself. This ethanol-slurried sample will be referred to as QE in the manuscript. The quercetin sample obtained from slurrying QDH in pure methanol (QME) was also studied and its PXRD pattern collected. In Figure 2, it can be observed that the QE and QME patterns are almost identical. For better resolution, PXRD data of QE and QME were collected on the Rigaku Rint2500 instrument. The QE pattern exhibits main peaks at 2θ angles of 4.54°, 8.92°, 9.80°, 13.08°, 24.84°, 26.06°, and 28.00°. The fact that QE and QME display the same diffraction peaks is particularly interesting, as it could indicate that these two quercetin forms are isostructural. Isostructural crystal structures have been previously shown in the literature to share very similar XRD patterns resulting from similar crystal structures and packing patterns, but different cell dimensions and chemical composition.
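For reference, the QE peak positions listed above can be converted to d-spacings via Bragg's law, using the Cu Kα1 wavelength quoted in the experimental section (λ = 1.54056 Å). This is an illustrative back-of-the-envelope calculation, not part of the original analysis:

```python
import math

# Bragg's law: d = lambda / (2 sin(theta)), with theta = (2-theta)/2.
LAMBDA = 1.54056  # Å, Cu K-alpha1

def d_spacing(two_theta_deg, wavelength=LAMBDA):
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# Reported QE main peaks (2-theta, degrees):
peaks = [4.54, 8.92, 9.80, 13.08, 24.84, 26.06, 28.00]
for tt in peaks:
    print(f"2θ = {tt:5.2f}°  ->  d = {d_spacing(tt):5.2f} Å")
```

The low-angle peak at 4.54° corresponds to a d-spacing of roughly 19.4 Å, consistent with a large lattice repeat for a solvate structure.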
37 This type of behavior would not be a surprise, as the methanol and ethanol molecules are very similar, each containing a hydroxyl group of very similar electronegativity, with ethanol only slightly bigger in size by a methyl group. It is, therefore, expected that the type and strength of intermolecular interactions that they would form with the quercetin molecules would not differ greatly, and this should result in similar packing arrangements in the lattice. It is worth noting that slurrying QDH in isopropanol did not result in the formation of a hypothesized solvate structure, perhaps due to the larger size of this molecule. It is interesting to note that QE is only obtained from slurrying QDH in pure ethanol, and above 10% (w/w) of water in the solvent resulted in QDH being the most stable form. It seems that the interaction with the ethanol molecules in solution is weaker than with the water molecules, possibly due to the bulkier size of the ethanol molecule compared to water, which impacts the strength of the hydrogen-bond interactions with the quercetin molecules and, thus, makes it unable to offer the same degree of stabilization of the lattice as water molecules. 14 In our previous publication, it was shown how the water molecules in the QDH lattice satisfy Kitajgorodskij's rule for the hydrogen-bond interactions, leading to a close-packed structure of higher relative stability compared to the previously known monohydrate and anhydrous quercetin forms. 38 Therefore, it is not a surprise that, even at a low ratio of water in the solvent mixture, QDH is the stable form. This phenomenon has been observed and reported in the literature before. 39,40 When the water activity of the solvent mixture exceeds a critical water activity value, the hydrate form of the crystal is the thermodynamically stable form.
However, when the water activity of the solvent mixture is below the critical value (in our system, the critical water activity corresponds to a water concentration of less than 10% (w/w)), the solute molecules interact primarily with the other solvent molecules (i.e., ethanol in our system), excluding the water molecules from the lattice. 39,40 Similar studies have been conducted in the past for carbamazepine and theophylline, to understand the relation between solvent water activity and the hydration state of the solid phase that crystallizes, generating three-component phase diagrams for those systems. 41,42 It is also reported that the critical water activity depends on factors such as the temperature, pressure, and the nature of the solute and solvent. Overall, the slurrying experiments demonstrate that, for applications where QDH is desired, the use of mixtures of ethanol and water to increase the solubility of quercetin in solution is safe, as long as the ethanol ratio in solution is 90% (w/w) or lower. Pure ethanol will result in the formation of a different quercetin structure. Scanning Electron Microscopy (SEM). Images of QE crystals are shown in Figure 3. The SEM images show a needle morphology for the QE crystals, although the crystals from the 100% ethanol slurry are flakier and smaller in size compared to those grown on the Petri dishes. The latter appear bigger, between 20 and 40 μm, and have a higher aspect ratio compared to the ones obtained directly from the slurry. It should be noted that this morphology is very similar to that of the QDH crystals, which also exhibit a needle-like shape. For comparison, SEM images of the morphology of the QDH crystals are shown in the Supporting Information, Figure S2. Thermal Stability of QE and QME. Thermogravimetric Analysis Coupled with Differential Scanning Calorimetry (TGA/DSC).
The thermal stability of the QE structure was studied to assess at what temperatures the sample undergoes changes in mass or heat flow. The results for the TGA coupled with DSC are shown in Figure 4. The TGA curve shows a loss in mass of about 6.2%, starting at an onset temperature of 28.5°C and finishing at approximately 70°C. This loss in mass is accompanied by an endotherm on the DSC curve. The loss could be attributed either to free ethanol evaporating from the wet solid (e.g., if the sample was not completely dry after being left to dry overnight) or to a desolvation process, in which the ethanol molecules leave the crystal lattice. To establish which of the two was the reason for the weight loss, the mass was monitored in two different experiments: a DVS experiment, where the mass was monitored for 24 h under controlled relative humidity and temperature conditions, and a mass-loss over time experiment, where the mass of a QE sample left at ambient conditions (approximately 20°C) was monitored for several days. The data for these experiments are shown in the Supporting Information, Figures S3 and S4. Both experiments confirmed that the mass of QE does not change considerably after the first day of drying. More specifically, from the first to the sixth day of drying, the mass of a QE sample decreased by just 0.8%. This confirms that, during the TGA/DSC experiment, it is very unlikely that the sample lost 6.2% of its mass due to an incomplete drying process. Hence, the thermal event observed in the DSC should be associated with a desolvation event. The theoretical mass loss for a stoichiometry of one molecule of ethanol to one molecule of quercetin is calculated to be 13.2%. The observed loss was much less than that, almost half, and there was also significant variability in the mass loss between different measurements of crystals from the same batch.
This further suggests that the measured QE sample was highly unstable and that what was actually measured by TGA/DSC was a mixture of QE and an anhydrous form of quercetin. Miclaus et al. also emphasized in their paper the difficulty of obtaining a pure form of QE due to its low stability. 23 There is no further loss in mass after the endset temperature of 70°C and before the chemical decomposition of quercetin at 335.3°C. The melting of QE occurs at a sharp temperature of 317°C, which agrees with the melting point of quercetin obtained starting from either the QDH or the QDMSO form. 14 However, it is interesting to note that a small endotherm occurs just before melting, at an onset temperature of 247.3°C. This endotherm is obtained neither for QDH nor for QDMSO, and it is probably due to a structural rearrangement that occurs in the crystal lattice before melting. If the ethanol molecules are weakly hydrogen-bonded to the quercetin molecules, they might have escaped the lattice during the thermal desolvation event, with a conformational rearrangement of the whole structure. It is therefore possible that, during this small endothermic event, the quercetin molecules rearrange to attain a more stable conformation with a melting point equivalent to that of the structure obtained from heating both QDH and QDMSO. The quantitative data from the TGA/DSC measurements are summarized in Table 1. The thermal behavior of QME was also studied by TGA/DSC, and the results are illustrated in Figure 5. The thermal events observed for QME are very similar to those of QE. The structure exhibits an endothermic event at an onset temperature of 31°C, accompanied by a loss of 2.3% of its mass, which could be linked to a desolvation step. Furthermore, a similar small endotherm, possibly due to a structural rearrangement just before melting, is observed at approximately 248°C, as for QE.
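The stoichiometric mass losses discussed above can be checked directly from molar masses. The following sketch (using standard atomic weights) reproduces the 13.2% figure quoted for a 1:1 quercetin/ethanol solvate and computes the analogous value for a 1:1 methanol solvate; the methanol value is our own illustrative calculation, not a number reported in the text.

```python
# Theoretical desolvation mass loss for hypothetical 1:1 quercetin solvates.
# Quercetin is C15H10O7; molar masses from standard atomic weights (g/mol).
M_C, M_H, M_O = 12.011, 1.008, 15.999

quercetin = 15 * M_C + 10 * M_H + 7 * M_O   # ~302.24 g/mol
ethanol   = 2 * M_C + 6 * M_H + 1 * M_O     # ~46.07 g/mol
methanol  = 1 * M_C + 4 * M_H + 1 * M_O     # ~32.04 g/mol

def loss_percent(solvent_mass, host_mass=quercetin, n_solvent=1):
    """Mass fraction (in %) lost when n_solvent solvent molecules
    leave a host:solvent = 1:n_solvent solvate."""
    total = host_mass + n_solvent * solvent_mass
    return 100 * n_solvent * solvent_mass / total

print(f"QE  (1:1 ethanol):  {loss_percent(ethanol):.1f}%")   # ~13.2%
print(f"QME (1:1 methanol): {loss_percent(methanol):.1f}%")  # ~9.6%
```

The observed losses (6.2% for QE and 2.3% for QME) fall well below these stoichiometric values, consistent with the nonstoichiometric, partially desolvated character of the samples discussed above.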
Finally, the structure melts at a temperature of 317°C, in agreement with the melting temperature of quercetin. For both QE and QME, the thermal analysis thus points to the formation of a weak solvate, with the loss of a nonstoichiometric amount of ethanol or methanol molecules. Variable Temperature Powder X-Ray Diffraction (VT-PXRD). To verify the presence of structural changes in QE between 28 and 70°C, the PXRD pattern of this sample was measured at 90°C. The results are shown in Figure 6. From the PXRD data, it is evident that there is a change in the structure, as the main peaks are different before and after heating. The two main peaks of QE (20°C) at 8.9° and 9.8° disappear, and two new peaks appear for QE (90°C) at 10.2° and 10.9°. Furthermore, the main peak of QE (20°C) at 13.0° disappears and another one at 13.6° appears for QE (90°C). Combining the data from the TGA curve and the QE (90°C) pattern, it can be confirmed that this pattern belongs to an anhydrous form of quercetin, as no further loss in mass appears to occur at any higher temperature before decomposition. From now on, we will refer to this anhydrous form as QA. Moreover, these data suggest that the initial sample of QE at 20°C could already contain a small amount of the desolvated form, as the QE (20°C) pattern contains small peaks at 2θ angles of 10.2° and 13.6°, which increase in intensity in the QE (90°C) pattern. This further highlights the difficulty of obtaining a pure sample of QE, due to the very weak stability of the form when heated, and explains why the mass loss in the desolvation step from the TGA data does not meet the theoretical loss of a stoichiometric solvate. Crystal Structure Solution of the Quercetin Anhydrous Form (QA). The anhydrous quercetin structure (QA) that resulted from the thermal desolvation of QE was solved from PXRD data collected in transmission mode, Figure 7, on the Rigaku Rint2500 diffractometer, using the EXPO software outlined in the methodology section.
The crystallographic information and final Rietveld plot are reported in the Supporting Information, Table S1 and Figure S5, respectively. It has been frequently reported in the literature that the desolvation of a solvated crystal form can provide an alternative pathway to polymorphic forms that would otherwise be difficult or impossible to crystallize by conventional crystallization techniques. 37 It should be noted that the diffraction profile of QA matches neither the dehydrated QDH nor the desolvated QDMSO (QA2) patterns previously obtained, nor the PXRD patterns of the two anhydrous quercetin structures deposited in the literature. 6 A comparison of the anhydrous quercetin patterns reported in the literature and the one obtained within this work is shown in Figure 8. However, the QA2 crystal structure remains unsolved. The QA structure solved in this work is instead illustrated in the Supporting Information, Figure S6, and its structure packing is represented in Figure 9. The unit cell contains four quercetin molecules, each molecule forming one asymmetric unit (Figure 9a). Along the b-axis, the quercetin molecules are π-π stacked (Figure 9b), and the molecules are arranged in a zigzag motif along the a + c direction, held together by strong hydrogen-bond interactions between O-H donor groups and oxygen acceptor atoms of hydroxyl and carbonyl groups (Figure 9c). The dihedral angle between the phenyl and pyrone rings was found to be τ = 25.44(2)°, which makes the quercetin molecule more planar compared to the quercetin conformation in the previously reported anhydrous quercetin structure (τ NAFZEC = 28.82°), and this perhaps facilitates the π-π stacking interactions along the b-direction. Solid-State NMR Spectroscopy Studies. The use of SSNMR plays an important role in unraveling structural features of solid crystalline materials, especially when such information cannot be obtained through diffraction analyses.
Indeed, it allows assessing the purity of the samples, while the number of resonances in the 13 C and 15 N CPMAS SSNMR spectra provides insights into the number of independent molecules in the unit cell (i.e., Z′). Additionally, the average full width at half maximum of the 13 C signals is indicative of the degree of crystallinity of the material, and the chemical shifts of the most significant resonances can suggest the protonation state of ionizable moieties and their involvement in hydrogen bonds. 43,44 In this paper, we used 13 C CPMAS SSNMR to characterize all the obtained quercetin samples, i.e., QE, QME, QA, and QA2. Figure 10 shows the 13 C CPMAS spectra of QE and QME, with the assignment of the resonances, adopting the atom numbering presented in Scheme 1. An overlay of the QE and QME spectra is reported in the Supporting Information, Figure S7. All 13 C chemical shifts are listed in Table S2 in the Supporting Information. The two spectra are fully superposable, suggesting that the two samples contain the very same phase, or that they represent isomorphous phases. Moreover, they agree very well with those collected by Miclaus' group for the two unstable solvates that they studied. 22 The only difference in our spectra is the absence of any resonance ascribable to the presence of ethanol or methanol in QE or QME, respectively. In this sense, Miclaus' spectra are characterized by two peculiarities: (a) in their QE spectrum, only the methyl signal appears but not the CH 2 one; (b) the intensity of the methyl signal in QME does not fit that of the other signals, suggesting the presence of a nonstoichiometric solvate or a very inefficient polarization transfer. We made several attempts, even with freshly prepared samples, to detect the 13 C peak of methanol in QME, also by means of a 13 C direct excitation experiment ( 13 C MAS) (not shown), all of which were unsuccessful.
This led us to hypothesize that the two solvates are possibly nonstoichiometric and so unstable that they lose any trace of solvent during sample handling, leading to the same phase upon desolvation. This agrees with the nonstoichiometric relative intensity of the CH 3 signal in the QME spectrum obtained by Miclaus and with the lower-than-stoichiometric weight loss observed by TGA (see above). Nonetheless, the spectra clearly indicate that, in both cases, one independent molecule of quercetin is present in the unit cell. QE (taken as representative of both samples) was then compared to QA and to commercial QDH. The corresponding spectra are displayed in Figure 11 (the stacked spectra are reported in the Supporting Information, Figure S8). The overlay of Figure 11 clarifies how the QE phase is different from both QDH and QA. Additionally, the spectra of QDH and QA agree well with those previously reported for the same commercial crystal forms. Earlier studies performed by Olejniczak and Potrzebowski 20 and Filip et al. 21 suggest that, while in QDH quercetin displays an anti conformation, in QE, QME, and QA it adopts the syn one. This information can mainly be inferred from the chemical shifts of C2′ and C6′, which, in the syn conformer, tend to converge to about 120 ppm, while, in the anti one, they are well separated, falling at about 127 (C6′) and 113 (C2′) ppm. Regarding QA, we compared our results with those obtained by Vasisht's group; 6 despite many crystallization attempts, we were never able to obtain the same PXRD pattern as theirs, while consistently achieving the one shown in this paper, which perfectly reproduces that of QA, previously studied by Filip's group. 21 Moreover, the 13 C CPMAS spectrum that Vasisht proposes does not coincide with that of QA.
This leads us to hypothesize that the crystal structure deposited in the CSD (refcode: NAFZEC) by Vasisht either represents a "disappearing polymorph" of anhydrous quercetin, or that it was solved starting from a physical mixture of several phases, as the corresponding 13 C CPMAS SSNMR spectrum seems to suggest. On the contrary, the structure presented in this work is representative of the anhydrous quercetin reported by Filip, which was never deposited in the CSD. This is further endorsed by the DFT optimization of the QA and NAFZEC crystal structures, which confirmed the higher stability of our structure with respect to Vasisht's. The maximum energy difference between polymorphs is known to be usually around 10 kJ/mol, while, unexpectedly, the energy of NAFZEC turned out to be 46.65 kJ/mol higher than that of QA. We then proceeded to the comparison of the experimental and optimized structures of QA and NAFZEC (Table 2). As can be seen from Table 2, for the QA structure the experimental and computed unit cell parameters and volume are in very good agreement. The %Error on the cell volume of −5.2% is consistent with the average %Error typically found between experimental and computed cell volumes, i.e., the computed structure is subject to cell shrinkage because the optimization is performed at 0 K. The overlay of the experimental and optimized crystal structures of QA is reported in the Supporting Information, Figure S9. On the other hand, the optimization of the NAFZEC crystal structure shows a dramatic reduction of the cell volume (−26.96%) with retention of the space group. To check the soundness of the optimization result, the experimental structure was also optimized using the VASP package with PBE pseudopotentials. Two attempts were made, using the TS and MBD methods for dispersion correction. Both optimizations converged to the same result as the first one. The predicted molecular volume for quercetin, using the formula by Hofmann, 45 is about 339 Å³.
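The Hofmann estimate quoted above can be reproduced with a short calculation. The per-atom volumes below are taken from Hofmann's averaged atomic volume tabulation (values assumed from that reference, in Å³ at room temperature); summing them over the C15H10O7 formula recovers the ~339 Å³ molecular volume:

```python
# Estimate of the quercetin (C15H10O7) molecular volume via Hofmann's
# average atomic volume scheme. The per-atom volumes are assumed values
# from Hofmann's tabulation (cubic angstroms, room temperature).
V_ATOM = {"C": 13.87, "H": 5.08, "O": 11.39}
QUERCETIN = {"C": 15, "H": 10, "O": 7}

v_mol = sum(n * V_ATOM[el] for el, n in QUERCETIN.items())
print(f"Molecular volume: {v_mol:.0f} A^3")            # ~339 A^3

# Expected cell volume for Z = 4 (four molecules per unit cell):
Z = 4
print(f"Estimated cell volume (Z = 4): {Z * v_mol:.0f} A^3")  # ~1354 A^3
```

Multiplying by Z = 4 reproduces the ~1354 Å³ cell volume expected for a close-packed anhydrous quercetin structure, which makes the much larger optimized NAFZEC shrinkage discussed above easy to rationalize.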
Thus, the estimated cell volume for quercetin, in a Pn21a space group (Z, Z′ = 4, 1), should be about 1354 Å³. Indeed, the experimental structure shows some voids in the unit cell (Figure S10), which are not consistent with the close packing usually found in organic molecular crystal structures. Finally, a 13 C CPMAS measurement was also performed on the QA2 solid form, obtained by desolvating QDMSO crystals. 13 Figure 12 shows a comparison between its 13 C CPMAS SSNMR spectrum and that of the solved quercetin anhydrous phase (QA). The peak assignments are included in the Supporting Information, Table S2. The QA2 phase appears different from the QA phase. Unfortunately, due to strong peak overlapping, no definitive insight into the number of independent molecules in the unit cell can be reached, but by a process of peak fitting and integration of the carboxylic region (180−170 ppm), we hypothesize that the new phase contains four quercetin molecules in the asymmetric unit. Stability Studies. Samples of QE of different age and processing history were compared to study the stability of the crystal structure under atmospheric conditions. The SAXS/WAXS data of the analyzed samples are shown in Figure 13. The QE samples that were 1 day, 4 weeks, and 16 months old were left in open vials in the laboratory at room temperature (20°C) and pressure. The results show that the patterns for the 1-day and 4-weeks old samples were identical; therefore, QE is unlikely to transform over such a period of time. However, the 16-months old sample exhibited some extra peaks at 10.1° and 10.7° (2θ), and the peak around 13.0° appears slightly shifted to the right compared to the other patterns. These extra peaks match those of the desolvated form of QE shown earlier. As the pattern appears to confirm a mixture of QE and of its desolvated form, it shows that, over a period of 16 months, QE is likely to slowly desolvate when the samples are stored at room temperature (20°C).
The pattern of a QE sample that was slurried in pure water for 24 h contained peaks that are characteristic of QE, but also some extra peaks at 10.8°, 12.9°, 13.9°, and 14.2° (2θ), which are characteristic peaks of QDH. This suggests that, when the QE form is slurried in water, it can transform back into the QDH form, which aligns with the observation that, at such a high water activity, QDH is the thermodynamically stable form. However, the pattern suggests that the transformation is incomplete in 24 h and that the sample is a mixture of both QE and QDH, as it contains characteristic peaks of both forms. This shows that, for applications of quercetin in water, QE would not be a stable solid form, as it would transform to QDH. The pattern of the QE sample treated in vacuum for 24 h exhibits peaks at 5.5°, 10.2°, 10.9°, and 13.5° (2θ) that completely match the peaks of the desolvated QE obtained by heating the sample to 90°C. The pattern does not contain any peaks from the original QE form; therefore, the desolvation in vacuum appears to be complete and gives a pure desolvated QE form (QA). It can therefore be concluded that the desolvation of QE can be accelerated either by heating the sample at a temperature above 28°C, as this was the onset of desolvation in the TGA/DSC experiments, or by treating the solid in vacuum for 24 h. Solubility Data of Quercetin Solid Forms. Figure 14 shows a plot of the solubility of the different quercetin solid forms in isopropanol. The solubility of the pure compounds was determined for temperatures between 20 and 70°C. The results could be well correlated with the van't Hoff equation (eq 1), ln x = −ΔH/(RT) + ΔS/R, as shown by the linear fit in Figure 14. From the obtained solubility data, we can observe that all the quercetin solid forms studied in this work have different solubility values, indicating that they all represent different crystal structures. Interestingly, QE and QME have different solubilities despite presenting similar PXRD patterns.
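The van't Hoff correlation used above amounts to a linear fit of ln x against 1/T. A minimal sketch of such a fit is given below; the solubility values are illustrative placeholders, not the measured quercetin data from Figure 14.

```python
import numpy as np

# van't Hoff fit: ln x = -dH/(R*T) + dS/R, i.e. linear in 1/T.
R = 8.314  # gas constant, J/(mol K)

# Illustrative mole-fraction solubilities at 20-70 C (placeholder values,
# NOT the measured quercetin data from Figure 14).
T = np.array([293.15, 303.15, 313.15, 323.15, 333.15, 343.15])  # K
x = np.array([1.2e-4, 1.9e-4, 3.0e-4, 4.6e-4, 6.9e-4, 1.0e-3])

slope, intercept = np.polyfit(1.0 / T, np.log(x), 1)
dH = -slope * R          # apparent enthalpy of dissolution, J/mol
dS = intercept * R       # apparent entropy of dissolution, J/(mol K)
print(f"dH = {dH/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```

With real data for each solid form, the fitted lines give one apparent dissolution enthalpy per form, and the relative positions of the lines reflect the solubility ordering discussed below.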
The two anhydrous forms, QA and QA2, show very different solubility values, confirming that they are two different polymorphs of pure quercetin. As QA has a lower solubility than QA2, it can be assumed that this form is the more stable anhydrous polymorph. It is worth noticing that, in isopropanol, the least soluble form is QME, while QA2 is the most soluble one. When using solid forms of quercetin for various applications in the nutraceutical or food industry, it is of critical importance to have knowledge of the solid-form landscape of the substance. Scheme 2 summarizes the solid-form landscape of quercetin, showing the different structures of quercetin and the transformations between them, based on the work done in this paper and in our previous publications. 14,26 ■ CONCLUSIONS Recrystallization of QDH by slurrying in pure ethanol and pure methanol resulted in the formation of the unstable intermediates QE and QME, which are probably crystal structures with ethanol and methanol molecules, respectively, bound to the quercetin molecules. The two forms are characterized by weak solvate stability and slow desolvation at room temperature (20°C). The thermal analyses showed that the QE form loses all the ethanol from an onset temperature of 28.5°C to form its desolvated structure, which is a stable structure of anhydrous quercetin (QA). The QA crystal structure was solved from PXRD data using the EXPO software in the P2 1 /c space group and was found to contain four quercetin molecules in the unit cell. This quercetin anhydrous form (QA) is different from the desolvated quercetin structure (QA2) that can be obtained by desolvation of the QDMSO solvate, and from any anhydrous quercetin structure previously reported. However, the crystal structure of QA2 is still unknown. The SSNMR and computational studies confirm the soundness of the structure of the anhydrous polymorph.
Stability studies on QE revealed that other pathways to the formation of QA include treating the QE form in vacuum for 24 h, or a slow solid-state transformation over a period of more than 16 months at ambient temperature and pressure. These experimental findings enhance the knowledge of the different solid forms of this important bioflavonoid substance. A comprehensive understanding of the physicochemical properties, crystallization conditions, and transformations between the various forms is essential when designing processes and optimal solid forms for specific applications using quercetin. ■ ASSOCIATED CONTENT SAXS/WAXS patterns of QDH ethanol slurry experiments; SEM image of QDH crystals from the 70/30% (w/w) ethanol/water solvent ratio; QE mass-loss over time experiments; crystallographic data collection and structure refinement parameters of the QA crystal structure; Rietveld plot of the QA crystal form; QA CIF and checkCIF files; SSNMR data of QME, QE, QDH, QA, and QA2 crystal forms; DFT crystal structure optimization of the QA crystal structure; and void calculation figure of the NAFZEC crystal structure (PDF)
Algebraic curves as a source of separable multi-Hamiltonian systems

M. Błaszak and K. Marciniak

In this paper we systematically consider various ways of generating integrable and separable Hamiltonian systems in canonical and in non-canonical representations from algebraic curves on the plane. In particular, we consider the Stäckel transform between two pairs of Stäckel systems, generated by 2n-parameter algebraic curves on the plane, as well as Miura maps between Stäckel systems generated by (n + N)-parameter algebraic curves, leading to multi-Hamiltonian representations of these systems.

Introduction

This paper is devoted to a systematic (in the sense explained below) construction of various types of Liouville integrable and separable Hamiltonian systems from algebraic curves. In [16] Sklyanin noted that any Liouville integrable system (that is, a set of n Hamiltonians in involution on a 2n-dimensional manifold M) separates in a given canonical coordinate system (λ, µ) ≡ (λ_1, ..., λ_n, µ_1, ..., µ_n) if and only if there exist n separation relations of the form

ϕ_i(λ_i, µ_i, h_1, ..., h_n) = 0, i = 1, ..., n, (1)

(see also [11]). Alternatively, one can treat the relations (1) as an algebraic definition of n commuting (by construction) Hamiltonians h_i on M. The canonical variables (λ, µ) are then by construction separation variables for all the Hamilton-Jacobi equations associated with the Hamiltonians h_i. This shift of view yields a powerful way of generating many (in fact, all known in the literature) separable Hamiltonian systems from scratch. This approach was initiated in [3] and then fruitfully developed in many papers, see for example [4,7,8] as well as the review of the subject in [9].
In this paper we restrict ourselves to the important subclass of separation relations (1) where all ϕ_i are the same, ϕ_i = ϕ for all i. In such a case the relations (1) can be interpreted as n copies of the algebraic curve on the λ-µ plane

ϕ(λ, µ, h_1, ..., h_n) = 0, (2)

when (λ, µ) is consecutively substituted by the pairs of variables (λ_i, µ_i). One reason for restricting our attention to separation curves (2) rather than the general separation relations (1) is that in the general setting there arise problems with finding the multi-Hamiltonian formulation of the generated separable systems. Another reason for this restriction is that it allows us to skip the assumption that the corresponding coordinates (λ, µ) on M are canonical. Using this approach, in this article we systematize and develop the idea of constructing various types of finite-dimensional integrable and separable Hamiltonian systems from parameter-dependent planar algebraic curves. To our knowledge this is the first systematic (albeit certainly not complete) investigation of separation curves depending on more than n parameters. Relations between integrable systems and n-parameter hyperelliptic curves were extensively investigated for example in [17] (see also the references therein).

Below we present the structure of the article and highlight all the new results.

In Section 2 we establish a number of facts about Poisson structures in two dimensions of monomial type. We first establish Lemma 2, where we find all Darboux coordinates associated with the Poisson tensor (4) on the plane with c of the monomial form c = λ^α µ^β, and then we prove Lemma 3, where we establish canonical maps between an arbitrary pair of Darboux coordinates for π. These results will be necessary for establishing the results of Section 3.
In the first subsection of Section 3 we prove (Theorem 4) that the Hamiltonians h_i obtained by algebraically solving n copies of (2) constitute a Liouville integrable system not only if the corresponding coordinates (λ, µ) on M are canonical, but in the more general case when the Poisson operator π has in the variables (λ, µ) the form (21). This is a simple generalization of a previously known result (see for example [9]). The second subsection of Section 3 contains basic information on separable Hamiltonian systems, in particular of Stäckel type. This subsection is necessary for the self-consistency of the article.

In Section 4 we use the results of Section 2 to show that each Liouville integrable Hamiltonian system generated by an algebraic curve (2) and by the non-canonical Poisson tensor (4) can also be generated by a one-parameter family of algebraic curves and the corresponding Poisson tensors in canonical form. Each class thus represents the same dynamical system written in different Darboux coordinates, related to each other by appropriate canonical transformations. We further specify these results for the case of monomial Poisson structures (with c(λ_i, µ_i) = λ_i^α µ_i^β), see formulas (37), yielding the explicit transformation to Darboux coordinates in this case.

In Section 5 we consider separable systems generated by algebraic curves depending on a set of n + n rather than n parameters. Each such curve then leads to two distinct integrable Hamiltonian systems. Using the known theory ([15,8]) we prove that these systems are related by a Stäckel transform, and we also show how solutions of these two systems are related by reciprocal (multi-time) transformations. We also specify these results to the case of Stäckel systems. These results are new.
In the final Section 6 we investigate yet another possibility of generating integrable and separable Hamiltonian systems from algebraic curves. We consider algebraic curves (70) with the block-type structure given by (71) and (72), leading to families of integrable and separable Hamiltonian systems that can be related (due to Theorem 11) to each other by a finite-dimensional analogue of Miura maps. These finite-dimensional Miura maps yield in turn a multi-Hamiltonian formulation of the obtained integrable systems, as presented in Theorem 12. These are new results that generalize the particular results obtained earlier in [6] and in [14]. The section is concluded by some examples. Theorem 11 is proven in the Appendix, due to the rather technical nature of the proof.

Poisson tensors on 2-dimensional phase space

In this section we consider Poisson structures in two dimensions (on a λ-µ plane) of monomial type and find all their Darboux coordinates that can be obtained from the coordinates (λ, µ) by monomial transformations. We also find a general map between arbitrary Darboux coordinate systems of monomial type.

Let us thus consider a (λ, µ) plane P. A Poisson tensor π on P is a bi-vector with vanishing Schouten-Nijenhuis bracket. The Poisson tensor π must be of co-rank zero since dim P = 2. It defines a Poisson bracket on the plane:

{f, g} := π(df, dg). (3)

Lemma 1. The most general Poisson tensor π on P is of the form

π = c(λ, µ) ∂_λ ∧ ∂_µ. (4)

Proof. The necessary and sufficient condition for π being a Poisson tensor is the Jacobi identity

{{f, g}, h} + {{g, h}, f} + {{h, f}, g} = 0. (5)

By a direct computation one can verify the identity (5) for the tensor (4), where

{f, g} = c(λ, µ) (∂f/∂λ ∂g/∂µ − ∂f/∂µ ∂g/∂λ). (6)

Now, let us change the parametrization of the plane, (λ, µ) → (λ̄, μ̄), given by

λ̄ = a(λ, µ), μ̄ = b(λ, µ). (7)

In order for (λ̄, μ̄) to be Darboux (canonical) coordinates for π, the condition

{a, b} = c(λ, µ) (∂a/∂λ ∂b/∂µ − ∂a/∂µ ∂b/∂λ) = 1 (8)

has to be fulfilled for a pair of unknown functions a and b. Consider a particular but relevant subclass of Poisson tensors (6) defined by functions c in the monomial form c = λ^α µ^β for fixed α, β ∈ R.
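The claim of Lemma 1, that in two dimensions any smooth function c yields a genuine Poisson bracket, can be checked symbolically. The sketch below (using sympy) verifies that the Jacobi identity holds identically for arbitrary functions f, g, h and an arbitrary c:

```python
import sympy as sp

lam, mu = sp.symbols('lambda mu')
c = sp.Function('c')(lam, mu)                     # arbitrary smooth c(lambda, mu)
f, g, h = [sp.Function(n)(lam, mu) for n in ('f', 'g', 'h')]

def pb(F, G):
    """Poisson bracket {F, G} = c (F_lam G_mu - F_mu G_lam) of Lemma 1."""
    return c * (sp.diff(F, lam) * sp.diff(G, mu)
                - sp.diff(F, mu) * sp.diff(G, lam))

# Cyclic sum of the Jacobi identity; it vanishes identically for any c.
jacobi = pb(pb(f, g), h) + pb(pb(g, h), f) + pb(pb(h, f), g)
print(sp.simplify(jacobi))  # 0
```

The first-derivative terms coming from differentiating c cancel in the cyclic sum, just as the second-derivative terms do, which is why no condition on c arises in two dimensions.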
Let us now search for transformations to Darboux coordinates of (4) within the following ansatz,

λ̄ = λ^ᾱ1 µ^ᾱ2, μ̄ = λ^β̄1 µ^β̄2, (9)

where ᾱ1, ᾱ2, β̄1, β̄2 ∈ R. This ansatz has the following inverse

λ = λ̄^β̄2 μ̄^(−ᾱ2), µ = λ̄^(−β̄1) μ̄^ᾱ1, (10)

valid when ᾱ1 β̄2 − ᾱ2 β̄1 = 1. A direct calculation using the condition (8) shows that the transformation (9) turns π into canonical form if and only if

ᾱ1 + β̄1 = 1 − α, ᾱ2 + β̄2 = 1 − β, ᾱ1 β̄2 − ᾱ2 β̄1 = 1, (11)

which leads to the following lemma.

Lemma 2. For α ≠ 1, all solutions of (11) are of the form

β̄1 = 1 − α − ᾱ1, ᾱ2 = (ᾱ1(1 − β) − 1)/(1 − α), β̄2 = 1 − β − ᾱ2, (12)

while for β ≠ 1 they can be written as

ᾱ2 = 1 − β − β̄2, β̄1 = (β̄2(1 − α) − 1)/(1 − β), ᾱ1 = 1 − α − β̄1. (13)

Notice that the solutions (12) are one-parameter, parametrized by ᾱ1; similarly, the solutions (13) are one-parameter, parametrized by β̄2. Note also that the last equation in (11) means that not only the map (9) but also its inverse (10) are in this case polynomial maps. In the special case α = β = 0 (that is, if the original variables (λ, µ) are already canonical for π), the transformation (9) given by (12) represents a one-parameter family of canonical transformations (parametrized by ᾱ1),

λ̄ = λ^ᾱ1 µ^(ᾱ1−1), μ̄ = λ^(1−ᾱ1) µ^(2−ᾱ1), (14)

with the inverse

λ = λ̄^(2−ᾱ1) μ̄^(1−ᾱ1), µ = λ̄^(ᾱ1−1) μ̄^ᾱ1. (15)

Applying Lemma 2 to two different sets of Darboux coordinates (λ̄, μ̄) and (λ̃, μ̃), given by (12) with ᾱ1 and α̃1 respectively, we arrive at the following lemma.

Lemma 3.
Assume that (λ̄, μ̄) and (λ̃, μ̃) are two different sets of Darboux coordinates for the same Poisson tensor (4) with c = λ^α µ^β, related to (λ, µ) by two different solutions (12), given by ᾱ1 and by α̃1 respectively. Then the map (λ̄, μ̄) → (λ̃, μ̃) is canonical and takes the monomial form (16)-(17). The proof is again obtained by direct calculations; the inverses of (16) and (17) are monomial maps of the same type (18).

3 From algebraic curves to Liouville integrable and separable Hamiltonian systems

Liouville integrability

Here we show how to construct n-dimensional Liouville integrable Hamiltonian systems starting from a single n-parameter algebraic curve on a (λ, µ)-plane of the form

ϕ(λ, µ, a_1, ..., a_n) = 0. (19)

Taking n copies of (19), with (λ, µ) consecutively substituted by the pairs of variables (λ_i, µ_i), we obtain the system of n equations

ϕ(λ_i, µ_i, a_1, ..., a_n) = 0, i = 1, ..., n, (20)

that is assumed to be solvable with respect to the parameters a_k (at least in some open domain). As a result, we obtain n functions (Hamiltonians) a_r = h_r(λ, µ), r = 1, ..., n. In order to turn the manifold M into a Poisson manifold we take n copies of the Poisson operator (4) on the plane and construct the Poisson tensor π on M as

π = Σ_{i=1}^n c(λ_i, µ_i) ∂_{λ_i} ∧ ∂_{µ_i}, (21)

so that its matrix representation is

π = [[0, C], [−C, 0]], C = diag(c(λ_1, µ_1), ..., c(λ_n, µ_n)). (22)

Below we prove that the h_i generated by (20) commute with respect to π given by (21), which is a natural generalization of the result for c = 1 that can be found for example in [9].

Theorem 4. The Hamiltonian functions h_i(λ, µ) defined by (20) pairwise Poisson commute with respect to the Poisson tensor (21).

Proof. The Hamiltonian functions h_i(λ, µ) Poisson commute as a consequence of the relations (20). Indeed, differentiating the equations (20) with respect to the (λ, µ) coordinates we get

∂h_r/∂λ_i = −A_r^i ∂ϕ/∂λ_i, ∂h_r/∂µ_i = −A_r^i ∂ϕ/∂µ_i,

where (A_r^s) is the matrix inverse to the matrix (∂ϕ_s/∂a_r). In consequence,

{h_r, h_s} = Σ_i c(λ_i, µ_i) (∂h_r/∂λ_i ∂h_s/∂µ_i − ∂h_r/∂µ_i ∂h_s/∂λ_i) = Σ_i c(λ_i, µ_i) A_r^i A_s^i (∂ϕ/∂λ_i ∂ϕ/∂µ_i − ∂ϕ/∂µ_i ∂ϕ/∂λ_i) = 0.

As a result, the system of n evolution equations on M

dξ/dt_r = π dh_r, r = 1, ..., n, (24)

where ξ = (λ, µ)^T, is Liouville integrable.
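The construction can be illustrated with a small symbolic computation. The curve below, µ² = h1 λ + h2 with n = 2, is a hypothetical example chosen for simplicity (not one taken from the text); solving its two copies for the Hamiltonians and computing the canonical (c = 1) Poisson bracket confirms that they commute:

```python
import sympy as sp

l1, l2, m1, m2 = sp.symbols('lambda1 lambda2 mu1 mu2')
h1, h2 = sp.symbols('h1 h2')

# Two copies of the curve mu^2 = h1*lambda + h2 (the separation relations):
sol = sp.solve([m1**2 - h1*l1 - h2, m2**2 - h1*l2 - h2], [h1, h2])
H1, H2 = sol[h1], sol[h2]   # H1 = (mu1^2 - mu2^2)/(lambda1 - lambda2), etc.

def pb(F, G):
    """Canonical (c = 1) Poisson bracket on the 4-dimensional phase space."""
    return sum(sp.diff(F, l)*sp.diff(G, m) - sp.diff(F, m)*sp.diff(G, l)
               for l, m in [(l1, m1), (l2, m2)])

print(sp.simplify(pb(H1, H2)))  # 0 -- the Hamiltonians are in involution
```

Here the cancellation happens term by term in each (λ_i, µ_i) pair, mirroring the proof above, where both derivatives of h_r at the i-th point are proportional to the same factor A_r^i.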
Separability

Liouville integrable systems generated by algebraic curves (19) and the Poisson tensor (4) with c = 1 are separable in the sense of Hamilton-Jacobi theory; (λ, µ) are then their separation coordinates and the equations (20) are called separation relations. Indeed, the Hamiltonian system (24) is in this case linearized through a canonical transformation (λ, µ) → (β, α) (25), generated by a generating function W(λ, α) that satisfies all the Hamilton-Jacobi equations h_i = α_i of the system. The transformation (25) is given implicitly by

µ_i = ∂W(λ, α)/∂λ_i, β_i = ∂W(λ, α)/∂α_i, i = 1, ..., n. (26)

In the variables (β, α) the n evolution equations (24) linearize, so that

dα_i/dt_k = 0, dβ_i/dt_k = δ_ik. (27)

The existence of separation relations (20) means that there always exists an additively separable solution for the generating function W(λ, α),

W(λ, α) = Σ_{i=1}^n W_i(λ_i, α), (29)

where the functions W_i(λ_i, α) are solutions of the system of n decoupled ordinary differential equations

ϕ(λ_i, dW_i/dλ_i, α_1, ..., α_n) = 0, i = 1, ..., n. (30)

In the literature, Liouville integrable systems linearizable according to Hamilton-Jacobi theory by an additively separable generating function (29) are known as (generalized) Stäckel systems. A particularly interesting case of such systems (a proper Stäckel system) is generated by ϕ of hyperelliptic type, where σ(λ) and f(λ) are Laurent polynomials in λ, γ_k ∈ N, and the exponents are such that γ_1 > ...
> γ_n = 0. Then some additional geometric structure can be related to the dynamical systems (24). The Hamiltonians h_k are considered as functions on the phase space M = T*Q, where the λ are local coordinates on an n-dimensional configuration space Q and the µ are the (fibre) momentum coordinates; G_f is treated as a contravariant metric on Q, defined by the first Hamiltonian h_1; the A_k (with A_1 = I) are (1,1)-Killing tensors for the metric G_f (for any f); and the V_k^(σ) are potentials. If we further assume that γ_k = n − k, then the Hamiltonians h_k in (32) become the so-called Stäckel Hamiltonians of Benenti type [1,2,5].

]ocnmp[ M. Błaszak and K. Marciniak

In this case the A_k are given explicitly; they are all (1,1)-Killing tensors for all the metrics G_f. The function s_k is the elementary symmetric polynomial in λ_i of degree k. The potential functions V_k^(σ) are given accordingly, and the so-called elementary separable potentials can be explicitly constructed from the recursion formula of [7].

Equivalence classes of algebraic curves

From the results of Section 2 it follows that, without loss of generality, we can restrict our construction to Poisson tensors for which the coordinates (λ, µ) are canonical, i.e. those with c(λ, µ) = I, where I is the n-dimensional identity matrix. This follows from the fact that for fixed c(λ, µ) we have a whole family of transformations (λ, µ) → (λ̄, µ̄), where (λ̄, µ̄) are canonical coordinates for the Poisson tensor (21), i.e.
they fulfil the condition (8) for each pair (λ_i, µ_i) of coordinates. Thus each Liouville integrable Hamiltonian system (24), generated by an algebraic curve (19) and by the Poisson tensor (21) with a given c(λ, µ), can in fact be generated by a whole family (equivalence class) of algebraic curves ϕ(λ̄, µ̄, h̄_1, ..., h̄_n) = 0 and the corresponding Poisson tensors with c(λ̄, µ̄) = 1. Each class thus represents the same dynamical system written in different Darboux coordinates, related by appropriate canonical transformations. Let us specialize these considerations to the monomial case considered in Section 2. Consider thus the 2n-dimensional Hamiltonian system (24) generated by the algebraic curve (19) and by the Poisson tensor (21) with c(λ_i, µ_i) = λ_i^α µ_i^β, with fixed real α and β. Then the transformation to its canonical (Darboux) coordinates on M, and its inverse, are of the form (cf. (14) and (15)) determined by ᾱ_1, ᾱ_2, β̄_1, β̄_2, which are given either by (12) (for α = 1, parametrized by ᾱ_1) or by (13) (for β = 1, parametrized by β̄_2). As a result, our system can be equivalently obtained from any one of the equivalent curves

ϕ(λ̄, µ̄, h̄_1, ..., h̄_n) = 0    (38)

expressed in coordinates (λ̄, µ̄) parametrized by ᾱ_1 (in the case α = 1) or by β̄_2 (in the case β = 1). A canonical transformation, and its inverse, between the coordinates (λ̄, µ̄) and (λ̃, µ̃) associated with two curves from the class (38), parametrized by ᾱ_1 and by α̃_1 respectively, is given by the corresponding composition (cf. (16) and (18)).

Example 5.
Let us consider the dynamical system (24) generated by the algebraic curve (31), but in the more general case when the Poisson tensor (4) is given by the monomial c(λ, µ) = λ^m, m ∈ Z (so that α = m and β = 0), on the (λ, µ)-plane. Such a system has a one-parameter family of canonical representations in new coordinates, induced by the corresponding transformation and its inverse (where we now denote ᾱ_1 by a). Notice that for the distinguished choice a = 1 we have λ̄ = λ, µ̄ = λ^(−m) µ (so that λ = λ̄, µ = λ̄^m µ̄), and the algebraic curve (31) in the new variables (λ̄, µ̄) is still of hyperelliptic type, with f̄(λ̄) = f(λ̄) λ̄^(2m); the Poisson tensor is canonical, as c(λ̄, µ̄) = 1, and the curve thus generates a Stäckel system with all Hamiltonians again quadratic in momenta. For the choice a = 0 we have λ̄ = µ, so for the particular case m = 2 we again obtain a curve of hyperelliptic type, this time with the roles of the position and momentum variables interchanged, and with the normalization 0 = γ̄_1 < ... < γ̄_n.
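The canonicality of the distinguished choice a = 1 can be checked directly for one degree of freedom. Reading (4) as the bracket {f, g} = λ^m (f_λ g_µ − f_µ g_λ) (this reading of the bracket convention is an assumption), the new pair (λ̄, µ̄) = (λ, λ^(−m) µ) has bracket equal to 1:

```python
import sympy as sp

lam, mu, m = sp.symbols('lambda mu m', positive=True)

# Poisson bracket with c(lambda, mu) = lambda**m (one degree of freedom).
def pb(f, g):
    return lam**m*(sp.diff(f, lam)*sp.diff(g, mu)
                   - sp.diff(f, mu)*sp.diff(g, lam))

lbar = lam            # the a = 1 choice of Example 5
mbar = lam**(-m)*mu

print(sp.simplify(pb(lbar, mbar)))  # -> 1, so (lbar, mbar) is canonical
```

The factor λ^m from the bracket is cancelled exactly by the λ^(−m) in µ̄, which is why this choice yields Darboux coordinates.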
Stäckel transform and reciprocal link

In this section we assume that the algebraic curve defining a Hamiltonian system depends on a set of n + n parameters, instead of just n. We show that solving this curve with respect to either the first or the second set of n parameters leads to two integrable systems that can be related by a Stäckel transform. We further show that the solutions of these two systems are related by a reciprocal (multi-time) transformation, and we then specialize our results to Stäckel systems. Consider thus a 2n-parameter algebraic curve (46) and the corresponding separation relations. Solving these relations with respect to the a_k (we assume this is possible, at least in some open domain), we obtain n functions (Hamiltonians) considered on a 2n-dimensional manifold M (parametrized by the coordinates ξ = (λ, µ)) and depending on the n parameters b_1, ..., b_n. These Hamiltonians define n Hamiltonian systems on M, where π is the canonical Poisson tensor of co-rank zero and where t_1, ..., t_n are the respective evolution parameters. The system (47) (or, equivalently, (48)) is assumed to be also solvable (at least in some open domain) with respect to the parameters b_k, yielding new Hamiltonian functions h̄_k depending on the n parameters a_1, ..., a_n. The related Hamiltonian systems take a corresponding form, and the following relations hold, where dh = (dh_1, ..., dh_n)^T, dh̄ = (dh̄_1, ..., dh̄_n)^T, X = (X_1, ..., X_n)^T, X̄ = (X̄_1, ..., X̄_n)^T.
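A minimal one-degree-of-freedom sketch of this construction (the curve a + b V(λ) = µ²/2, the potential V, and the sign conventions are all illustrative assumptions, not the paper's formulas): solving for a gives h, solving for b gives h̄, and on the common level set M_{a,b} the two Hamiltonian vector fields coincide up to a function factor, which is exactly the reciprocal reparametrization of time:

```python
import sympy as sp

lam, mu, a, b = sp.symbols('lambda mu a b')
V = sp.Function('V')(lam)

# Hypothetical 1-dof curve affine in both parameters: a + b*V(lambda) = mu**2/2.
h    = sp.Rational(1, 2)*mu**2 - b*V        # solved for a: h = a on M_{a,b}
hbar = (sp.Rational(1, 2)*mu**2 - a)/V      # solved for b: hbar = b on M_{a,b}

def ham_field(H):
    # Canonical Hamiltonian vector field X_H = (dH/dmu, -dH/dlambda)
    return sp.Matrix([sp.diff(H, mu), -sp.diff(H, lam)])

X, Xbar = ham_field(h), ham_field(hbar)

# Restrict to M_{a,b} by substituting a = mu**2/2 - b*V (i.e. h = a):
on_M = {a: sp.Rational(1, 2)*mu**2 - b*V}
diff = sp.simplify((Xbar - X/V).subs(on_M))
print(diff)   # zero vector: Xbar = (1/V) X on M_{a,b}
```

Off the level set the two vector fields are unrelated, mirroring the remark in the text that no relation exists on the whole of M.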
Note that M_{a,b} can equivalently be defined by either set of level conditions, and through each point of M there pass infinitely many manifolds M_{a,b}. In fact, fixing all the parameters b_k in M_{a,b} and letting the a_k vary, we obtain one particular foliation of M; likewise, fixing all the parameters a_k in M_{a,b} and letting the b_k vary, we obtain another particular foliation of M. Note also that each of the manifolds M_{a,b} is invariant with respect to all n systems (49) and all n systems (51), which also means that all the vector fields X_i and all the vector fields X̄_i are tangent to each manifold M_{a,b}. No relation, however, exists between the vector fields X and X̄ on the whole manifold M. Let us also remark that the transformations (55) on M_{a,b} can be inverted, yielding (57).

Definition 8. The transformation (57) between the dynamical systems (49) and (51) on M_{a,b} is called an n-parameter reciprocal transform.

In the remaining part of this section we restrict ourselves to the case of curves (46) that are affine in all the parameters a_i and b_i. In this case the relations (48) and (50) attain corresponding explicit forms, and the Stäckel transform between the Hamiltonians h_k and h̄_k takes an explicit matrix form. Note that after setting all the a_i and b_i equal to zero we obtain the matrix formula (61) relating the Hamiltonians H_k and H̄_k, where H̄ = (H̄_1, ..., H̄_n)^T, valid on the whole of M; formula (61) is the parameter-independent part of the Stäckel transform between the h_k and the h̄_k. Consider now a specification of the above affine case in which the separation curve (46) attains the hyperelliptic-type form (62), where σ(λ) and f(λ) are Laurent polynomials in λ and γ_1 > ...
> γ_n are natural numbers. Solving the corresponding separation relations with respect to the a_k yields separable systems belonging to the Benenti subclass of Stäckel systems. Explicitly, we obtain the n Stäckel Hamiltonians (63), quadratic in momenta. The structure and geometric meaning of the Hamiltonians h_k are as described in Subsection 3.2. Performing the Stäckel transform on the set of n Hamiltonians (63), we obtain a set of n Hamiltonians h̄_k of the corresponding form (where Ḡ_f is defined by H̄_1 with Ā_1 = I), generated by the separation curve (65). The Hamiltonians h̄_k define the Hamiltonian evolution equations (51). Then, on the n-dimensional submanifold (54), the relations (55) hold with the matrix A of (66), and the relations (56) hold with its inverse. From the above considerations it follows that systems generated by algebraic curves (65) can always be transformed, by an appropriate reciprocal transformation, to systems from the Benenti class, generated by algebraic curves (62). The Stäckel systems generated by curves of the type (65) have been thoroughly studied in [4].

Example 10. Consider the algebraic curve (62) of the particular form (67). Solving the corresponding separation relations with respect to the a_k, we obtain the Hamiltonians h_1 and h_2 as in (58). In Viète coordinates (q, p), associated with the separation coordinates (λ, µ) through a point transformation, the Hamiltonians h_k attain polynomial form. Passing to the flat coordinates (x, y) [5], defined through the point transformation (68), we obtain the h_k explicitly. Note that for b_1 = b_2 = 0 the Hamiltonians h_1 and h_2 represent one of the integrable cases of the Hénon–Heiles system [12]. The matrix A of (66) and its inverse attain a corresponding form in the (x, y)-variables.
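The passage to Viète-type coordinates can be made concrete on the b = 0 curve h1·λ + h2 = ½λµ² + λ⁴ (cf. Example 14 later on). Taking the symmetric functions q1 = λ1 + λ2, q2 = λ1λ2 with the induced momenta µ1 = p1 + λ2 p2, µ2 = p1 + λ1 p2 (the sign convention for Viète coordinates is an assumption here; conventions differ), the first Hamiltonian becomes polynomial:

```python
import sympy as sp

l1, l2, p1, p2 = sp.symbols('lambda1 lambda2 p1 p2')

# Momenta induced by the point transformation q1 = l1 + l2, q2 = l1*l2:
mu1 = p1 + l2*p2
mu2 = p1 + l1*p2

# h1 from solving  h1*lam + h2 = lam*mu**2/2 + lam**4  at (l1,mu1), (l2,mu2):
H1 = ((sp.Rational(1, 2)*l1*mu1**2 + l1**4)
      - (sp.Rational(1, 2)*l2*mu2**2 + l2**4))/(l1 - l2)

q1, q2 = l1 + l2, l1*l2
claim = sp.Rational(1, 2)*p1**2 - sp.Rational(1, 2)*q2*p2**2 + q1**3 - 2*q1*q2
print(sp.simplify(H1 - claim))   # -> 0, so h1 is polynomial in (q, p)
```

The apparent pole at λ1 = λ2 cancels, which is the mechanism behind the polynomial (and, after a further point transformation, flat-coordinate) form of the Benenti Hamiltonians.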
Solving the separation relations corresponding to (67) with respect to the b_k yields the Hamiltonians h̄_1, h̄_2, which in the flat coordinates (68) attain a corresponding form. The Hamiltonians h̄_1, h̄_2 and the Hamiltonians h_1, h_2 are Stäckel conjugate. Note that the variables (68) are only conformally flat for the Hamiltonians h̄_1, h̄_2. The manifolds M_{a,b} are given by either of two equivalent sets of level conditions, and one can verify by a direct computation that on M_{a,b} we have X = A X̄ as well as X̄ = A^(−1) X. The corresponding reciprocal transformation (57) between the evolution parameters takes an analogous explicit form. The Stäckel transform was first described by J. Hietarinta et al. in [13] (where it was called the coupling-constant metamorphosis) and in [10]. In this early approach the transform was only one-parameter. In its most general form the Stäckel transform was introduced in [15] and then intensively studied in [8,9].

Miura maps

In this section we investigate yet another possibility of generating integrable and separable Hamiltonian systems from algebraic curves. We consider algebraic curves depending on n + N parameters and having a certain block-type structure. These curves generate integrable and separable Hamiltonian systems that can be connected by a finite-dimensional analogue of the Miura maps known from soliton theory (Theorem 11; see also its proof in the Appendix). These finite-dimensional Miura maps in turn yield a multi-Hamiltonian formulation of the obtained integrable systems (Theorem 12). The results of this section generalize the results for the one-block case, obtained earlier in [14], as well as the results obtained in [6].
Consider thus the (n + N)-parameter algebraic curve (71), with 1 ≤ N ≤ n, where 1 ≤ α ≤ min(n_k) and with the normalization ϕ_m(λ, µ) = 1. The curve (71) thus consists of m blocks of Benenti type. Solving the related separation relations with respect to the a_i^(k), we obtain n Hamiltonian functions, where ξ ∈ M and π_0 is the canonical Poisson tensor. The goal of this section is to construct a Miura map between the Stäckel system (74), generated by the curve (71)–(72), and the Stäckel system (75), generated by the curve (77), where s is an integer such that 1 ≤ s ≤ α and where the coordinates (λ̄, µ̄, c̄) = (λ̄_i, µ̄_i, c̄_i), i = 1, ..., n, on M̄ are some functions of the coordinates (λ, µ, c). Consider the map (79) in R²: this map transforms (algebraically) the curve (71) into the curve (77), provided that the relations (80) hold for all k = 1, ..., m. The relations (80) can be inverted to (81). The maps (79) and (80) induce the Miura map M: M → M̄ in (82), with the inverse M^(−1): M̄ → M in (83). Let us now present the main theorem of this section.

Theorem 11. For any s ∈ {1, ..., α}, the n Hamiltonian vector fields X_i^(k) in (74) and the n Hamiltonian vector fields X̄_i^(k) in (75) pairwise coincide, provided that the coordinates (λ̄, µ̄, c̄) and (λ, µ, c) are connected by the Miura map (82).

The proof of this theorem can be found in the Appendix. The theorem means that all the Stäckel systems (75) generated by the curves (77) (one for each value of s between 1 and α) represent, on the extended phase space M, the same Stäckel system as the Stäckel system (74) (the one generated by the curve (71)), written in different coordinates connected by the corresponding invertible Miura maps (82). We can thus call the Stäckel system (75) an s-representation of the Stäckel system (74), with s = 0, ...
..., α (where the s = 0 representation simply means the original Stäckel system (74)). Since all the Miura maps (82) are invertible, this also means that there exist direct Miura maps (appropriate compositions of (82) and (83)) between different s-representations of our Stäckel system; see [14].

An important consequence of the above construction is the following theorem, which generalizes the corresponding one-block theorem from [14]. Using the index notation 1 − j for j = 0, −1, ..., −α + 1, we can write the formulas (84) in a more compact form, and with this notation we can formulate the following corollary.

There are two limit cases of the above construction. The first one is the case m = n (so that all the blocks have length one: n_k = 1 for all k = 1, ..., m). Then the vector fields X_i^(k) in (74) on M are only bi-Hamiltonian, forming n one-field chains. This particular situation was considered in [6]. The opposite limit case takes place when m = 1 (i.e. when there is only one block in the curve (71)). Then the considered Stäckel system is (n + 1)-Hamiltonian, with s = 0, ..., n (and with the notation X_i^(1) ≡ X_i, i = 1, ..., n). In this case there are in total (N + 1)/2 bi-Hamiltonian chains of the form (88), where 1 ≤ N ≤ n. This situation was considered in [14].
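For orientation, a generic one-field bi-Hamiltonian chain has the following schematic shape (a hedged sketch of the standard form found in the bi-Hamiltonian literature; the stripped display formulas (84) and (88) may differ in detail):

```latex
\pi_0\,dh_1 = X_1,\qquad
\pi_1\,dh_i = \pi_0\,dh_{i+1} = X_{i+1},\quad i=1,\dots,n-1,\qquad
\pi_1\,dh_n = 0 .
```

Each vector field in the chain is thus Hamiltonian with respect to two compatible Poisson structures at once, which is the content of the multi-Hamiltonian formulation produced by the Miura maps.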
Example 14. Consider the special case of the curve (67) from Example 10 with b_1 = b_2 = 0, but in the space extended by the coordinates c_1 and c_2: the curve (89). This curve has the form (71)–(72) with n = 2 and m = 1 (so it is a one-block case), α = 2 (so that N = 2), ϕ_0 = −(1/2)λµ² − λ⁴, and ϕ_1 = 1. The Miura map (82) for s = 1 attains an explicit form and transforms the Stäckel system generated by the curve (89) into the Stäckel system generated by a new curve. Further, for s = 2 the Miura map (82) attains another form (in this example we have to distinguish between the two sets of "bar" variables, one for s = 1 and one for s = 2, so in the latter case we use the variables λ̃, µ̃, c̃) and transforms the Stäckel system generated by the curve (89) into the Stäckel system generated by the corresponding curve. All three Stäckel systems are three different s-representations (with s = 0, 1 and 2) of the same three-Hamiltonian Stäckel system, with its three Poisson tensors (86) having, in the (λ, µ, c)-variables, an explicit form in which I = diag(1, 1) and Λ = diag(λ_1, λ_2). Let us write these objects explicitly in the flat coordinates (68). The two Stäckel Hamiltonians h_1 and h_2 attain a corresponding form in these coordinates, the matrix representations of the Poisson operators π_0, π_1 and π_2 attain explicit form, the components of the vector fields X_1 and X_2 are given accordingly, and the corresponding bi-Hamiltonian chains follow.

Example 15. Consider now another specialization of the curve (67) from Example 10, this time with a_1 = a_2 = 0 and in the space extended by the coordinates c^(1) and c^(2). More specifically, we consider a curve of the form (71)–(72) with n = 2 and m = 2.
Thus it is a two-block case. The Miura map (82) for s = 1 attains an explicit form and transforms the Stäckel system generated by the curve (92) into the Stäckel system generated by the corresponding curve. Both Stäckel systems are two different s-representations (with s = 0, 1) of the same bi-Hamiltonian Stäckel system. The two Poisson tensors (86) have, in the (λ, µ, c)-variables, an explicit form, where X_i here denotes the components of the vector fields π_0 dh_i. Let us write these objects explicitly in the conformally flat coordinates (68). The Stäckel Hamiltonians h_1 and h_2, as well as the matrix representations of the Poisson operators π_0 and π_1, attain explicit forms, where * represents some N × N matrix.

Thus (A.1) attains a block form, and therefore, for X_i, i = 1, ..., n, ψ_k(...) is the short-hand notation for (A.9). A simple identification through (81) shows that ψ_k(...) in (A.5) and ψ_k(...) in (A.9) coincide as functions on M. Thus the Miura map (83) maps the identities (A.10) exactly onto the corresponding identities.

The quadratic-in-momenta Hamiltonians (32) are in involution with respect to the Poisson operator π = Σ_{i=1}^{n} ∂/∂λ_i ∧ ∂/∂µ_i, in accordance with the general Theorem 4 of the subsection above. By construction, the variables (λ, µ) are separation variables for all the Stäckel Hamiltonians h_k in (32). X_i^(k) in the coordinates (λ, µ, c), and where * denotes transpositions of the
A Pilot Study of the Scanning Beam Quality Assurance Using Machine Log Files in Proton Beam Therapy

Corresponding author: Kwangzoo Chung (kchung@skku.edu), Tel: 82-2-3410-1353, Fax: 82-2-3410-2619

The machine log files recorded by the scanning control unit of a proton beam therapy system have been studied for use as a quality assurance method for scanning beam deliveries. The accuracy of the data in the log files has been evaluated with a standard calibration beam scan pattern. The proton beam scan pattern was delivered on a gafchromic film located at the isocenter plane of the proton beam treatment nozzle and was found to agree within ±1.0 mm. The machine data accumulated during the scanning beam proton therapy of five different cases have been analyzed using a statistical method to estimate any systematic error in the data. The high-precision scanning beam log files of the line scanning proton therapy system have been validated for use in off-line scanning beam monitoring and thus as a patient-specific quality assurance method. The use of the machine log files for patient-specific quality assurance would simplify the quality assurance procedure while providing accurate scanning beam data.

Introduction

Particle beam therapy, including proton beam therapy, is a promising treatment modality in radiation oncology. Compared to conventional photon-based radiation therapy, particle beam therapy can in general achieve a relatively low entrance dose and a negligible exit dose distribution.
1) Based on these superior dosimetric properties of the particle beam, an escalated or focused dose can be delivered to cancer cells while minimizing unwanted damage to nearby healthy tissues. The evolution of particle beam therapy can be seen not only in the rapidly increasing number of particle therapy facilities worldwide, 2,3) but also in the wide adoption of the new scanning beam technology in replacement of the scattering beam technology. In scanning beam particle therapy, a highly conformal dose distribution can be accomplished by virtue of its capability for beam intensity modulation. With a highly modulated particle therapy treatment plan, the monitoring and control of the position of the scanning particle beam in the treatment nozzle become an indispensable part of therapeutic particle beam delivery. 4,5) In this study, I report on the accuracy of the beam position monitor installed on a scanning beam nozzle and the feasibility of using its machine log files for patient-specific quality assurance.

Materials and Methods

I have used the dedicated scanning nozzle at Samsung Proton Therapy Center 6) and the beam position monitor device in the treatment nozzle. The proton beam therapy system at Samsung Proton Therapy Center is provided by Sumitomo Heavy Industries Ltd., and the technical details of the dedicated scanning nozzle can be found elsewhere. 7,8) The scanning control unit (SCU) controls and monitors the proton beam scan speed and beam position. The SCU records magnet currents, beam positions, dose counters, and beam-on status every 60 microseconds and transfers the records to the Treatment Control System Console (TCSC) at the end of the beam delivery of a given treatment field. At the same time, the plan information (in machine parameter format) used by the SCU is also transferred to the TCSC.
In this study, I have used this plan information and the recorded log files stored in the TCSC. As the raw data saved in the log files are machine parameters, the data had to be processed with nozzle-specific calibration specifications.

Results

In the verification of BPM accuracy, the resulting beam positions were compared with the planned values. The positions were found to agree within ±1.0 mm, which includes additional contributions from setup uncertainties. In addition, I checked the machine log file recorded in the control unit of the BPM, and the deviations of the beam position from the plan were found to be less than 0.2 mm. Based on the validation of the BPM calibration, the recorded machine log files of pre-treatment QA were analyzed, as summarized in Table 1. The BPM log files of the five selected cases, with various numbers of energy layers, were analyzed statistically. I did not find any significant systematic uncertainty in the data, and none of the beam positions deviated more than 0.2 mm from the planned value. This is expected, as the tolerance of the BPM for beam position is 0.2 mm.

Discussion

In normal operation of the proton beam therapy system, the BPM monitors the beam position in the scanning nozzle and, at the same time, uses the position information as input for the position feedback system and the safety interlock system. The position feedback system controls the scanning magnets to keep the beam on the planned beam path, and if the beam position is off by a distance greater than the tolerance, the safety interlock system will stop the beam. 5) As the BPM has such an important role in proton beam therapy, it has high precision and accuracy by design. Furthermore, the stability of the system must be guaranteed.

Table 1. The summary of the pre-treatment QA log file analysis. The "ds" stands for the displacement of the recorded position from the planned position. For a given field (case), all the energy layers (number of layers) have been analyzed.
www.ksmp.or.kr
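An off-line check of this kind can be sketched as follows (the log format, field layout, and helper names are hypothetical; only the 0.2 mm tolerance comes from the text): scan planned-versus-recorded position pairs for one energy layer and summarize the displacement ds.

```python
import statistics

TOLERANCE_MM = 0.2  # BPM beam-position tolerance stated in the text

def analyze_layer(samples):
    """Summarize ds = recorded - planned (mm) for one energy layer.

    `samples` is a hypothetical list of (planned_mm, recorded_mm) pairs
    taken from the 60-microsecond SCU log records.
    """
    ds = [recorded - planned for planned, recorded in samples]
    return {
        "max_abs_ds_mm": max(abs(d) for d in ds),
        "mean_ds_mm": statistics.mean(ds),
        "violations": sum(abs(d) > TOLERANCE_MM for d in ds),
    }

layer = [(10.0, 10.05), (12.0, 11.98), (14.0, 14.10)]
print(analyze_layer(layer))
```

Running such a summary per energy layer over all fields of a case yields exactly the kind of per-field ds statistics reported in Table 1.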
The log files of a given treatment field include all the energy layers of the field. As the energy modulates the depth of the proton beam, the number of energy layers in a given field implies measurements at the same number of depths. One of the distinguishing characteristics of the line scanning beam compared with the spot scanning beam is that the monitor unit (MU) of the beam is given for a certain length of line segment, not for a certain spot of the beam. Furthermore, the MU of a line segment must be determined by the combination of scan speed and dose rate at the moment of delivery. In this study, the absolute dose of each line segment cannot be determined from the BPM log file information alone. However, as the scan speed is implicitly included in the position information, the accuracy of the absolute dose can be inferred from the result of the positional accuracy. In addition, if additional information from the Irradiation Control Unit (ICU) were correlated with the BPM data, the absolute dose could be reconstructed. This possibility will be investigated in a future study.

I have cross-checked the accuracy of the BPM in scanning beam delivery and checked for any possible systematic uncertainty of the system. Overall, I have verified that the use of machine log files for patient-specific quality assurance is feasible and robust. Nevertheless, as this study was performed at gantry angle zero, there is still a possibility of propagation of a gantry-angle dependency of the nozzle properties. In general, any gantry-angle dependency must be corrected at the early stage of commissioning and validation of the nozzles. Although no significant gantry-angle dependency was found at Samsung PTC, the log file analysis can be a useful measure for a future investigation of gantry-angle dependency.

Conclusion

In conclusion, the high-precision scanning beam log files of the line scanning proton therapy system have been validated for use as a patient-specific quality assurance method.
The use of the machine log files in scanning-beam-related quality assurance will simplify the quality assurance procedure while keeping high precision.

Fig. 2. An example of scanning beam information recorded in the scanning control unit (top left: x-position in time; top right: y-position in time; bottom left: monitor count in time; bottom right: x-position vs. y-position; raw data without pedestal/noise reduction).

Fig. 3. Scanning beam information recorded in the scanning control unit (one sample layer of a patient beam delivery plan).
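The line-segment MU relation described in the Discussion can be sketched as dose rate times dwell time (function and parameter names are illustrative, not the vendor's formula):

```python
def segment_mu(length_mm, scan_speed_mm_per_s, dose_rate_mu_per_s):
    """MU delivered along one line segment of a line-scanned layer.

    Schematic: the dwell time over the segment is length/speed, and the
    MU is the dose rate integrated over that time (assumed constant here).
    """
    dwell_time_s = length_mm / scan_speed_mm_per_s
    return dose_rate_mu_per_s * dwell_time_s

print(segment_mu(50.0, 100.0, 40.0))  # 20.0 MU over a 0.5 s segment
```

Because the scan speed enters this relation and is recoverable from the time-stamped positions, a positional check indirectly constrains the delivered MU, as the Discussion notes.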
Launching a comparative effectiveness adaptive platform trial of monoclonal antibodies for COVID-19 in 21 days

Outpatient treatments that limit progression to severe coronavirus disease 2019 (COVID-19) are of vital importance to optimise patient outcomes and public health. Monoclonal antibodies (mAb) demonstrated the ability to decrease hospitalizations in randomized clinical trials. However, there are many barriers to mAb treatment, such as patient access and clinician education, and there are no data comparing the efficacy or safety of the available mAbs. We sought to rapidly launch an adaptive platform trial with the goals of enhancing access to treatment, regardless of geography and socioeconomic status, and evaluating the comparative efficacy and safety of available mAbs. Within 21 days of idea genesis, we allocated mAb treatment to all patients within the context of this clinical trial. Within 2 months, we closed the gap in the likelihood of receiving mAb, conditional on background positivity rate, between Black and White patients (Black patients 0.238; White patients 0.241). We describe the trial infrastructure, lessons learned, and future directions for a culture of learning while doing.

Monoclonal antibody treatment for COVID-19

Millions worldwide have died from severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. The pandemic required health systems to rapidly adopt new therapies and promote social and health equity [1]. Monoclonal antibody (mAb) treatment is associated with decreased hospitalization and death in outpatients with mild to moderate coronavirus disease 2019 (COVID-19) [2][3][4][5]. Outpatient treatments that limit progression to severe disease are of vital importance to optimise patient outcomes and public health. Monoclonal antibodies bind to and neutralize SARS-CoV-2, blocking entry of the virus into human cells. A form of passive immunity, mAbs are most effective if given early after SARS-CoV-2 infection [2,3]. Randomized clinical trials in patients with mild to moderate COVID-19 demonstrated reductions in hospitalizations and deaths with mAb treatment compared to placebo [2,3]. Subsequently, the United States Food and Drug Administration (US FDA) issued Emergency Use Authorizations (EUA) for bamlanivimab, bamlanivimab and etesevimab, casirivimab and imdevimab, and sotrovimab for use within 10 days of symptom onset in outpatients with risk factor(s) for progression to severe disease.

Do we need an adaptive platform trial to evaluate monoclonal antibody treatment?

There are many unanswered questions about mAb treatment. First, the use of mAb therapy remains low. Is this due to patient access barriers, operational challenges with outpatient infusions, limited awareness of efficacy data among referring clinicians, supply chain issues, or a combination of reasons?
Within 2 months, we closed the gap of the likelihood of receiving mAb, conditional on background positivity rate, between Black and White patients (Black patients 0.238; White patients 0.241). We describe trial infrastructure, lessons learned, and future directions for a culture of learning while doing. Monoclonal antibody treatment for COVID-19 Millions worldwide have died from severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. The pandemic required health systems to rapidly adopt new therapies and promote social and health equity [1]. Monoclonal antibody (mAb) treatment is associated with decreased hospitalization and death in outpatients with mild to moderate coronavirus disease 2019 (COVID-19) [2][3][4][5]. Outpatient treatments that limit progression to severe disease are of vital importance to optimise patient outcomes and public health. Monoclonal antibodies bind to and neutralize SARS-CoV-2, blocking entry of the virus into human cells. A form of passive immunity, mAb are most effective if given early after SARS-CoV-2 infection [2,3]. Randomized, clinical trials in patients with mild to moderate COVID-19 demonstrated reductions in hospitalizations and deaths with mAb treatment compared to placebo [2,3]. Subsequently, the United States Food and Drug Administration (US FDA) issued Emergency Use Authorizations (EUA) for bamlanivimab, bamlanivimab and etesevimab, casirivimab and imdevimab, and sotrovimab for use within 10 days of symptom onset in outpatients with risk factor(s) for progression to severe disease. Do we need an adaptive platform trial to evaluate monoclonal antibody treatment? There are many unanswered questions about mAb treatment. First, the use of mAb therapy remains low. Is this due to patient access barriers, operational challenges with outpatient infusions, limited awareness of efficacy data among referring clinicians, supply chain issues, or a combination of reasons? 
[6][7][8] Second, the published clinical trial data generated hypotheses regarding the optimal patient population for treatment, but many seek added evidence to inform mAb deployment, especially when resources are scarce. Third, while other platform trials are evaluating multiple mAb therapies in various settings (e.g., RE-COVERY, ACTIV-2, ACTIV-3), there are no outpatient trials directly comparing all 3 EUA-available mAbs. Fourth, the spread of SARS-CoV-2 variants may impact antibody and vaccine effectiveness, and the emergence of mAb-resistant SARS-CoV-2 variants is a major concern [9]. The EUA for bamlanivimab monotherapy was revoked due to increased frequency of resistant variants and concern of decreased bamlanivimab monotherapy efficacy in this setting, and bamlanivimab and etesevimab distribution was temporarily paused and then resumed based on changing prevalence of variants of concern [10]. An adaptive platform trial could evaluate all available mAbs across subgroups of patients and generate answers to pivotal and evolving clinical questions in a rapid fashion to address these knowledge gaps. Collecting data quickly, and with rigor, would enable clinicians to rapidly adapt to the changing therapeutics landscape and pathogen evolution. A culture of learning while doing When confronted with complex patients, clinicians often perceive a conflict between the need to "learn" (i.e., randomize patients into clinical trials) versus "do something" (i.e., provide a therapeutic agent that may or may not be helpful or harmful) [11]. Traditionally, research efforts seek to create insights using highly structured settings with careful conditions to limit threats to causal inference. This approach is often costly, slow, and it may not resemble how the intervention will be used in practice. 
Such studies are often performed in larger hospitals with existing research infrastructure, limiting access of most patients to new treatment options and hindering the external validity of the results. We propose shifting from the traditional research model into one of care with ongoing discoveryproviding new therapies to each patient while simultaneously advancing standard practicethat is "learning while doing" [11]. In this model, we optimise the trade-off between learning and doing where little to no sacrifice is made to the conditions of highquality research yet priority care is ensured to all patients within the system. Indeed, this approach expands the reach of robust learning while doing to many hospitals and healthcare settings often excluded from randomized trials. During the pandemic, our large, integrated healthcare system in the US approached treatment of patients with COVID-19 with two goals: i.) enhancing access to treatment, regardless of geography and socioeconomic status, and ii.) coordinating treatment through an integrated, adaptive platform trial. To accomplish these goals, clinician engagement was paramount. In addition, success required leadership investment, a robust data and analytics infrastructure, and therapeutics oversight via system-level treatment guidelines with local collaboration. Preliminary experience with monoclonal antibody treatment and expanding patient access Prior to the launch of the adaptive platform trial, we developed a robust outpatient infusion infrastructure across a large geographical region in western Pennsylvania and New York. The supply of mAb changed over time. Initially, treatment was only bamlanivimab monotherapy and mAb was only available at outpatient infusion centers [5,12]. After evaluating the evidence, the System COVID-19 Therapeutics Committee determined equivalence existed among the available mAbs. 
The Committee adopted a therapeutic interchange policy for mAb distribution in December 2020 and updated it with evolving data and federal guidance, first including bamlanivimab monotherapy, bamlanivimab and etesevimab, and casirivimab and imdevimab. Bamlanivimab monotherapy was removed from the policy on March 31, 2021. Bamlanivimab and etesevimab enrollment was paused from June 25 through September 16, 2021 due to federal decisions to temporarily halt distribution based on the prevalence of variants of concern (i.e., Beta and Gamma) during that time period. Sotrovimab was added to the platform on July 13, 2021 after initial supply was donated to the system by the manufacturer specifically for use in the trial; this was later transitioned to government-purchased supply allocated via federal Health and Human Services distribution channels. All pharmacies supplying all infusion sites had equal opportunity to order any EUA-available mAb from a central supply facility.

Launching a platform trial of monoclonal antibodies for COVID-19 in 21 days

We held a collaborative discussion with the US government on February 17, 2021 about increasing mAb access ("Do") while simultaneously addressing the knowledge gaps surrounding mAb treatment ("Learn"). Subsequently, we designed and implemented a pragmatic, open-label, adaptive platform trial integrated with our ongoing, system-wide mAb efforts. This comparative effectiveness evaluation was possible due to preceding placebo-controlled trials confirming the benefits and safety of mAb therapy. The goal was to design an entire program, including outreach, learning, and rapid implementation. The first patient was allocated mAb within the trial on March 10, 2021, just 21 days later. Monoclonal antibody access was expanded from outpatient infusion centers to all system EDs on March 23, 2021. We undertook several additional steps to expand awareness and increase the use of mAb therapy across our region.
A paper/fax referral process existed for clinicians outside of the health system, including rural clinicians in neighboring states. To reach disadvantaged neighborhoods and patients with limited access to health care, we created a telephone hotline staffed by nurses for patient self-referrals. Using a collaborative relationship with a home infusion company, patients without transportation could receive treatment at home or be transported to an infusion center (at no cost to the patient). Proactively, we identified all patients with a positive SARS-CoV-2 polymerase chain reaction (PCR) or rapid antigen test performed within the system who also met EUA criteria using an EHR-derived screening dashboard. These patients were subsequently called at home by a member of the centralized mAb operations team to assess for symptoms, offer mAb information, and place a mAb referral order (if appropriate). We also educated patients about vaccination during these phone encounters. To further alleviate patient access issues, we created a team of Revenue Cycle analysts to work with all major payors to improve patient cost transparency.

The OPTIMISE-C19 platform trial

The OPTIMISE-C19 (Optimizing Treatment and Impact of Monoclonal Antibodies through Evaluation for COVID-19) trial launched on March 10, 2021. This trial was approved by the University of Pittsburgh IRB (STUDY21020179) and UPMC Quality Improvement Review Committee (Project ID 3280) and is listed on ClinicalTrials.gov (NCT04790786). OPTIMISE-C19 evaluates the mAb therapeutic interchange policy and patient outcomes. It is an open-label, pragmatic, randomized platform trial evaluating the comparative effectiveness of COVID-19-specific mAbs with EUA status. The primary outcome was hospital-free days up to day 28 after mAb treatment; secondary outcomes included 28-day mortality and rates of adverse events. First results from this trial are available as of September 9, 2021 [13].
The primary analysis model was a Bayesian cumulative logistic model that adjusted for treatment location, age, sex, and time (2-week epochs); comparisons between individual mAb were based on the relative odds ratio between any two given arms for the primary outcome. Sample size is determined by case volume throughout the course of the pandemic; Bayesian adaptive designs allow for statistical inference despite variable sample size. Interim analyses are performed every two weeks and the trial continues to enroll until pre-determined statistical thresholds for superiority or inferiority are met. Patients were eligible for OPTIMISE-C19 per FDA EUA criteria [13]. Clinicians ordered a generic mAb referral order, which triggered randomization via a monoclonal antibody assignment application (app) engineered alongside the EHR to link local infusion site mAb inventory to the current patient encounter (Fig. 1). Once the patient's medical record number is entered into the app and the "randomize" button is clicked (by a centralized mAb operations team member for outpatient centers and by a pharmacist for EDs), the app assigns a mAb. Odds of receiving a mAb were equivalent (i.e., 50% when 2 mAb were available, 33.33% when 3 mAb were available). The patient's assignment is automatically recorded in the data and analytics team database. Clinicians in both outpatient settings and EDs review all EUA Fact Sheets with the patient upon referral and inform the patient they may receive any available and authorized mAb treatment. All patients to be infused must agree to treatment within the confines of the EUA prior to treatment. Importantly, clinicians and/or patients can request a specific mAb at any time. The randomization of which mAb is recommended occurs solely within the therapeutic interchange (wherein mAbs are considered equivalent and any mAb can be dispensed based on local inventory).
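The allocation step described above reduces to choosing uniformly among the mAbs currently in local stock. The following is a hypothetical sketch of that equal-odds logic, not the actual EHR-linked application; the product names and function are illustrative only.

```python
import random

def assign_mab(local_inventory, rng=random):
    """Recommend a mAb with equal odds among products currently in local
    stock (50% each when 2 are available, 33.33% each when 3 are).
    Hypothetical sketch; clinicians and patients can still override."""
    available = sorted(m for m, doses in local_inventory.items() if doses > 0)
    if not available:
        raise ValueError("no mAb available at this infusion site")
    return rng.choice(available)

# Example: two of three products in stock, so each has 50% odds
inventory = {"casirivimab+imdevimab": 8, "sotrovimab": 0,
             "bamlanivimab+etesevimab": 4}
```

Keeping the randomization inside the therapeutic interchange, as the trial does, means the app only picks among products already deemed clinically equivalent.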
The ultimate decision on what is best for the patient is made jointly by the physician and her patient. To date, no clinician or patient has requested a mAb different from the allocation. Clinicians, pharmacists, and nurses throughout the system were supported by extensive education and training before the launch of OPTIMISE-C19 and consistently throughout enrollment. External and internal online resources were developed, town halls are held via online formats, and direct patient outreach continues. The trial team also works closely with a team of communications experts to share patient stories, engage with local media to raise awareness, and create patient-facing educational materials (Fig. 2). For outcome ascertainment, the data and analytics team built a system for automated data extraction from the UPMC Clinical Data Warehouse to synthesize data feeds from various EHRs across the inpatient and outpatient care continuum. All extracted data undergo validation by a clinical pharmacist and are reviewed by a system Quality Center nurse to ensure patients are appropriately captured. For patients who have a positive SARS-CoV-2 nasal swab test within the system, we developed infrastructure to retrieve remnant, de-identified samples and perform next-generation sequencing of the Spike gene. This will allow us to assess variant trends of SARS-CoV-2 in our region. Variants in the Pennsylvania catchment are also assessed over time using the Global Initiative on Sharing All Influenza Data, to analyze effectiveness based on geographic region while patient-specific variant data is pending [14]. An infusion reaction management guide was created and distributed to all mAb treatment sites for guidance on treatment of any kind of mAb-related adverse event [15]. An attending physician on the Therapeutics Committee oversees all required adverse event reporting to the FDA.
Additionally, infusion center representatives document acute reactions on the day of treatment in a secure, electronic file sharing application. Nursing and physician staff also utilize an internal, nonpunitive, patient safety reporting system ("Risk Master") for adverse reactions and medication errors. Finally, a hospital quality policy was enacted to contact all patients treated with any mAb via telephone at day 28.

Priorities for the future of learning health systems and lessons learned

Traditional clinical trial enrollment is cumbersome, and a dramatic shift in the clinical research enterprise is needed [16]. The regulatory and financial requirements of traditional trials hinder the ability of clinicians and researchers to adapt quickly, test multiple therapies simultaneously, collaborate across care areas, and expand trial access to patients outside of academic tertiary centers. While government programs during the COVID-19 pandemic provided institutions with free mAb supply, there are limited resources provided to administer the drug to patients. OPTIMISE-C19 was designed to overcome these barriers and to rapidly evaluate drug effectiveness, alongside system efforts to deliver safe, effective, comprehensive care in a rapid and equitable fashion. While the study has limitations, including lack of patient-level variant data in real time and missing data if patients seek care outside of our EHR, it is one of the largest comparative effectiveness trials of mAb in the world and spans 49 treatment sites across a wide geographic area, increasing external validity. In creating this platform trial for mAb, we propose three broad priorities for the future of learning health systems and address our lessons learned.

Create an institutional culture of learning while doing

A collaboration and durable commitment between clinical care teams and researchers, leadership, administrative staff, and data and analytics is essential for a culture of learning while doing.
The COVID-19 pandemic has fortunately brought global efforts to decentralize trial enrollment and generate partnerships in research trials across broader populations and healthcare networks, with equity as a core goal. New ways to ask and answer questions may alleviate clinician burden, support clinician autonomy, and expand patient access to new therapies. Routine clinical care and system therapeutic guidance must be intertwined with research endeavors to optimise access and outcomes; in other words, discovery happens during care. This requires extensive efforts in ongoing education, clinician outreach, multimodal communication, and response to feedback. In a "learning while doing" environment, all must embrace the philosophy of decreasing knowledge transfer time from trial results reporting to incorporation into routine care to a very short interval. We used this philosophy in OPTIMISE-C19. For example, bamlanivimab monotherapy exited the trial and clinical care as a treatment option the same day the Therapeutics Committee determined it inferior to combination products, even prior to the FDA decision [10].

Invest in patient outreach, access, and education

Extensive resources are needed to expand patient outreach, and human resources are the greatest barrier. Leadership commitment is required to dedicate these resources to patient care access. The goal is to address health disparities and improve population outcomes across the entire health system. Similar priorities include broadening patient access to mAb infusion and ensuring ongoing clinician and patient education. This investment pays off. Prior to trial launch, 5% of mAb-treated patients identified as Black. Seven weeks later, 18% of mAb-treated patients identified as Black. The likelihood of receiving mAb, conditional on background positivity rate, was one-third lower among Black patients compared to White patients when the mAb program launched.
After treating over 1500 patients, the likelihood was the same (mAb treatment to positive test ratio for Black patients 0.238; White patients 0.241, Fig. 3). Now, more than 6000 patients have received mAb treatment since trial launch, and the ratio of EUA-eligible patients receiving mAb increased 7.5-fold, even during periods of reduced prevalence of COVID-19 [13]. From March 10 through June 25, 2021, the proportions of eligible White patients receiving mAb increased from 3.1 to 21.6% and eligible Black patients receiving mAb increased from 2.6 to 29.9% [13]. Continued communication of therapeutic guidance and treatment logistics across all locations, key stakeholders, clinicians, and general community members helps support this platform trial's success.

Establish a data infrastructure

Data and analytics teams are key to sustainable and scalable implementation of multicenter platform trials. The embedding of the trial next to the EHR and routine care enhanced enrollment and allowed for rapid translation of new knowledge to the bedside. The continued evaluation of the platform read-outs according to pre-trial simulations requires alignment of bioinformatics teams and statistical analysts. These steps help to coordinate interim analyses, stopping rules, outcome ascertainment, and safety reporting.

Conclusions

Efficient, scalable, and inclusive approaches to clinical trial design facilitate a culture of learning while doing. We developed an embedded platform trial of monoclonal antibody treatment in 3 weeks, and expanded trial access to patients from disadvantaged neighborhoods. This approach engages the health care community in a first-of-its-kind trial to learn about novel treatments during a pandemic.
Effect of the Nordmøre grid bar spacing on size selectivity, catch efficiency and bycatch of the Barents Sea Northern shrimp fishery

The introduction of the Nordmøre grid in shrimp trawls has reduced the bycatch of non-target species. In the Norwegian Northern shrimp (Pandalus borealis) fishery, the mandatory selective gear consists of a Nordmøre grid with 19 mm bar spacing combined with a 35 mm mesh size diamond mesh codend. However, fish bycatch in shrimp trawls remains a challenge and further modifications of the gear that can improve selectivity are still sought. Therefore, this study estimated and compared the size selectivity of Nordmøre grids with bar spacings of 17 and 21 mm. Further, the effect of applying these two grids on trawl size selectivity was predicted and compared to the legislated gear configuration. Experimental fishing trials were conducted in the Barents Sea where the bottom trawl fleet targets Northern shrimp. Results were obtained for the target species and two bycatch species: cod (Gadus morhua) and American plaice (Hippoglossoides platessoides). This study demonstrated that reducing bar spacing can significantly reduce fish bycatch while only marginally affecting catch efficiency of Northern shrimp. This is a potentially important finding from a management perspective that could be applicable to other shrimp fisheries where flexibility in the use of different grid bar spacings may be beneficial to maximize the reduction of unwanted bycatch while minimizing the loss of target species.

Introduction

Northern shrimp (Pandalus borealis) is a commercially important species in bottom trawl fisheries in the North Atlantic and the Barents Sea [1]. This species has gained more interest over the last decade as a result of increased demand, increased product prices and a recommended annual quota of 140,000 tons in the Northeast Arctic [2].
Due to the small meshes used in the trawl body and codends, Northern shrimp fisheries have struggled for many years with large quantities of fish bycatch [3], which implied additional sorting labour onboard and an undesired impact on the stocks of commercially and ecologically important fish species. For most Northern shrimp fisheries, the introduction of the Nordmøre grid in the early 1990s was a breakthrough regarding species selectivity and bycatch reduction (e.g., [4][5][6][7][8]). The working principle of the Nordmøre grid as legislated for the Norwegian and Russian areas of the Northeast Atlantic is shown in Fig 1. Except for a few minor fisheries in the North Atlantic where the use of the Nordmøre grid is not compulsory (e.g., fisheries in the fjords of Iceland [9]), the selectivity in the majority of trawl fisheries for Northern shrimp is based upon a dual sorting system consisting of a sorting grid combined with a size selective codend. In this complex size selection process, the combined size-dependent selectivity curves for both Northern shrimp and the bycatch species often exhibit a bell-shaped signature [10] (Fig 1). This is the case for the Northern shrimp fishery in the Barents Sea. This is one of the largest fisheries for this species, with annual landings of 20,000 to ~80,000 tons in the last 20 years [2], where the use of a 19 mm bar spacing Nordmøre grid combined with a codend with a minimum mesh size of 35 mm is required [11]. The sorting grid is installed in the extension piece adjacent to the codend at an angle of approximately 45°, and a guiding funnel (or guiding panel) directs shrimp and fish towards the lower part of the grid section. The sorting system allows all organisms that pass between the bars of the grid to continue towards the codend, while those that move along the grid or cannot pass between the bars are led out of the trawl through an escape opening in the upper panel.
Consequently, larger-sized fish and other marine organisms (e.g., jellyfish, crabs, etc.) that are unable to pass through the specific grid bar spacing are removed from the trawl, making the shrimp catches in the fishery cleaner [12]. However, despite the success of the grid in removing larger-sized individuals, the device does not solve the issue to the same extent for the smaller-sized individuals, especially those of sizes that are similar to shrimp, which can pass between the grid bars. Thus, the management authorities of Norway and the industry are still searching for technical solutions that can further improve the performance of the Nordmøre grid. In the Barents Sea Northern shrimp fishery, fish bycatch such as cod (Gadus morhua) below minimum landing size (MLS) and American plaice (Hippoglossoides platessoides) has remained a problem because a fraction of small individuals that pass through the grid (i.e., up to a certain size depending on the bar spacing used in the grid) are too large to escape through the codend meshes. The regulations in the Barents Sea and all other shrimp fishing grounds under Norwegian jurisdiction regarding bycatch of specimens from quota-regulated species below MLS are strict and vary between species [11]. Bycatches are controlled by the authorities during routine inspections at sea. If the catch of at least one quota-regulated (i.e., commercially important) species (i.e., cod, haddock (Melanogrammus aeglefinus), Greenland halibut (Reinhardtius hippoglossoides) and two redfish species (Sebastes spp.)) exceeds a species-specific number of individuals per 10 kg of shrimp, or more than 15% by numbers for shrimp below MLS (< 15 mm carapace length (CL)) [1], the fishing area is closed by the authorities for periods that can last weeks or months. Among the five defined choke species in the Barents Sea Northern shrimp fishery [11], the recruitment of cod is of particular value.
Cod is the most important commercial species in the Barents Sea [13][14][15]. The allowed bycatch amounts are determined based on the biological status of cod, as well as economic considerations concerning both the shrimp fishery and the fishery for cod [13]. Specifically, when the catches of cod below MLS in this area exceed eight individuals per 10 kg of shrimp, the fishing area is closed [11]. Area closures can imply significant operational challenges and increased costs for the fishing fleet (i.e., loss of income due to access limitations to good fishing areas), and larger distances between the potential fishing grounds. Therefore, efficient selectivity measures that reduce the bycatch of species like cod are still sought. Unlike cod, American plaice is not a regulated species in the Barents Sea and, therefore, it is not subject to the same regulations. However, it is an important bycatch species to consider because it is caught in great numbers and then discarded by the fleet, which contributes to the negative ecological impact of the fishery. In addition, due to its morphology, it can block large areas of the grid, reducing its sorting capacity, and can cause practical problems with sorting the catch on board [15,16]. Due to the increased focus on discard reduction in fisheries [17], the efforts to improve both species and size selectivity in Northern shrimp fisheries in general (e.g., [9,[18][19][20]) and in the Barents Sea in particular (e.g., [21][22][23]) have increased in the last decade. Many of these studies have looked at changes in the grid section design that could potentially lead to better selective properties of the grid (e.g., [24,25]). Changing the bar spacing in the Nordmøre grid is an obvious modification to change the selection characteristics of the grid section because it potentially affects the sizes of fish and Northern shrimp that can pass between the bars of the grid.
However, no published study has evaluated if or how the selection properties in the Northern shrimp fishery in the Barents Sea change by modifying the Nordmøre grid bar spacing, except for a brief notice of initial experiments during 1989-1990 with various bar distances in the grid [4], and only a few studies have examined the effect of grid bar spacing in other fisheries [18,[26][27][28][29]. Thus, the aim of the present study was to investigate the effect of modifying grid bar spacing on the selectivity and catch size distribution of Northern shrimp and two relevant bycatch species in the Barents Sea Northern shrimp fishery: cod and American plaice. Specifically, the present study aimed at answering the following research questions:

• To what extent does grid passage probability change for Northern shrimp, cod and American plaice by changing bar spacing in the Nordmøre grid?

• Is it possible to reduce the retention of cod and American plaice without changing the catch size distribution for Northern shrimp by modifying the grid bar spacing?

• What are the potential implications of changing bar spacing in the compulsory grid and codend configuration in the Barents Sea Northern shrimp fishery?

Ethics statement

This study did not involve endangered or protected species. Experimental fishing was done on board a research vessel in accordance with the fishing permit granted by the Norwegian Directorate of Fisheries (18/18249). This fishing permit allows catches of shrimp and fish to be landed. No other permit was required to conduct this study.

Experimental design

Two experimental Nordmøre grids were tested, one with 16.75 ± 0.66 mm (mean ± SD) bar spacing and the other with 20.65 ± 0.61 mm bar spacing. The mean bar spacing of each of the grids was measured using calipers.
Both grid sections were four-panel constructions in 40 mm mesh of 2.5 mm PE twine and were equivalent in circumference and length to the two-panel standard Nordmøre grid section used by the Norwegian coastal and inshore fleets targeting Northern shrimp. The Nordmøre grids were made of stainless steel and were 1510 mm high and 1330 mm wide. The grids in both sections were mounted so that they would maintain an angle of 45 ± 2.5° while fishing. The fishing trials were carried out with a selection system composed of a Nordmøre grid followed by a codend blinded with a 10 mm mesh size liner. The escape outlet on the top panel of the grid section was cut as a 1.5 m long and 1.33 m wide triangle. A small-meshed (18.9 ± 1.2 mm) cover was mounted over the outlet in front of the grid for the collection of the escaping fish and shrimp. During the sea trials, 11 and 21 hauls were conducted with the Nordmøre grid with 17 and 21 mm bar spacing, respectively. After each tow, the catches in the codend and the cover were kept separated. All fish caught in each compartment were sorted by species, and all individuals under 40 cm in total length were measured to the nearest cm below. For the fish species, no subsampling was performed. However, the shrimp in each compartment were subsampled by taking a random sample of approximately 1 kg. The carapace length (CL) of all shrimp in the subsample was measured to the nearest mm below using a caliper.

Grid passage probability

For a shrimp or fish to pass through the grid, two conditions need to be fulfilled: 1) it needs to contact the grid; and 2) it needs to be morphologically able to pass through the grid. Thus, the escape probability for each individual will depend on its size and orientation when it contacts the grid and on its swimming performance and capability of avoiding the grid face.
It must be considered that some shrimp and fish may not contact the Nordmøre grid at all, or that they do so with such a poor orientation that they will not have any length-dependent chance of passing through the grid. For an individual contacting the grid with a sufficiently good orientation for a length-dependent probability of passing through the grid (pc(l)), the following logit model (model 1) was used [30]. Model 1 considers that the probability for an individual to be able to pass through the grid, under the condition that it contacts the grid, is length-dependent and decreases for larger individuals. Parameter l denotes the length of the individual, L50 grid denotes the length with a 50% probability of being prevented from passing through the grid, and the selection range (SR grid) describes the difference in length between individuals with a 75% and 25% probability of being prevented from passing through the grid. Based on the above, model 2 was used for the size-dependent probability of a shrimp or fish to pass through the Nordmøre grid and enter the codend (p(l)). Three parameters need to be estimated to be able to describe the size selection in the Nordmøre grid: C grid, L50 grid, and SR grid. Parameter C grid, which is known as selectivity contact [31], loosely models the contact probability with the grid for modes of orientation that result in a length-dependent probability for an individual to pass through the grid. If all individuals contact the grid with a reasonable mode of contact, then the value for C grid should be 1.0. However, this is not necessarily the case, and some individuals may even escape through the escape exit of the Nordmøre grid section (Fig 1) without contacting the Nordmøre grid first. Other individuals may be so poorly orientated when they meet the grid that the probability of them passing through will be similar to that of those not contacting the grid at all.
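In the standard logit parameterization used in grid-selectivity studies, models 1 and 2 take the following form (shown here as a sketch; the exact notation in the original may differ slightly):

```latex
% Model 1: length-dependent probability of passing the grid,
% conditional on contacting it with a suitable orientation
pc(l) \;=\; 1 \;-\; \frac{\exp\!\big(\tfrac{\ln(9)}{SR_{grid}}\,(l - L50_{grid})\big)}
                         {1 + \exp\!\big(\tfrac{\ln(9)}{SR_{grid}}\,(l - L50_{grid})\big)}
\qquad \text{(model 1)}

% Model 2: unconditional passage probability, scaled by the
% grid contact probability C_grid
p(l) \;=\; C_{grid} \times pc(l) \qquad \text{(model 2)}
```

The factor ln(9)/SR grid ensures that the probability of being prevented from passing is 25% at L50 grid − SR grid/2 and 75% at L50 grid + SR grid/2, consistent with the definition of the selection range above.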
This means that they would not have made selectivity contact [31], which will also be reflected in the value of C grid. For the shrimp or fish that contact the grid with a reasonable mode of orientation, L50 grid and SR grid describe the probability of their passage through the grid based on model 1. Shifting to a larger bar spacing will result in higher L50 grid and altered SR grid values. For small individuals that would pass through if they contacted the grid, the value 1 - C grid can loosely be interpreted as the fraction of individuals that would be able to seek and subsequently escape through the escape outlet without contacting the grid. A small SR grid value would indicate a well-defined grid contact orientation, with all those individuals making contact doing so with a similar orientation. In contrast, a large SR grid value would indicate that the contact is more disordered, with many different orientations involved. As different species have different morphologies and behaviors, the values of the parameters C grid, L50 grid, and SR grid will, for the same selective system, be species-specific. Therefore, the analysis needs to be applied separately for Northern shrimp and the bycatch species. Since the aim of our study was to investigate how each of the tested Nordmøre grid configurations performed on average over the hauls conducted, the analysis included data summed over hauls j. The analyses were conducted separately for each Nordmøre grid configuration based on the data from the hauls with each specific grid, and separately for each species. Therefore, expression (3) was minimized, which is equivalent to maximizing the likelihood for the observed data in the form of the length-dependent number of individuals retained in the codend (nC jl) versus those collected in the Nordmøre grid cover (nGC jl). In expression (3), qC j and qGC j are the sampling factors for the fraction of individuals measured in the catches from the codend and grid cover, respectively.
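Expression (3) amounts to a binomial negative log-likelihood over length classes, with the passage probability playing the role of the "success" probability for ending up in the codend. The following is a sketch of that estimation on simulated data, not the SELNET implementation; sampling factors are set to 1 and all names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def passage_prob(lengths, C, L50, SR):
    """Model 2: p(l) = C_grid * (1 - logistic(l; L50_grid, SR_grid))."""
    z = np.exp(np.log(9.0) * (lengths - L50) / SR)
    return C * (1.0 - z / (1.0 + z))

def neg_log_lik(params, lengths, n_codend, n_cover):
    """Negative binomial log-likelihood over length classes
    (expression (3) with sampling factors qC = qGC = 1)."""
    p = np.clip(passage_prob(lengths, *params), 1e-9, 1.0 - 1e-9)
    return -np.sum(n_codend * np.log(p) + n_cover * np.log(1.0 - p))

# Simulate length-class counts from known parameters, then recover them
rng = np.random.default_rng(0)
lengths = np.arange(5.0, 41.0)
true_C, true_L50, true_SR = 0.95, 20.0, 6.0
entering = rng.poisson(500, size=lengths.size)
n_codend = rng.binomial(entering, passage_prob(lengths, true_C, true_L50, true_SR))
n_cover = entering - n_codend

fit = minimize(neg_log_lik, x0=(0.9, 18.0, 5.0),
               args=(lengths, n_codend, n_cover),
               bounds=[(0.01, 1.0), (5.0, 40.0), (0.5, 30.0)])
C_hat, L50_hat, SR_hat = fit.x
```

With counts of this size, the recovered C, L50 and SR land close to the simulated truth, illustrating why pooling data over hauls gives well-determined average parameters.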
The sampling factors comprise a value in the range 0.0 to 1.0, with 1.0 indicating that all individuals were length measured. The outer summation in expression (3) comprises the hauls conducted with the specific Nordmøre grid configuration, and the inner summation the length classes in the data. Evaluating the ability of model (2) to describe the data sufficiently well was based on calculating the corresponding p-value. In case of poor fit statistics (p-value < 0.05), the residuals were inspected to determine whether the poor result was due to structural problems when modelling the experimental data (model 2), or due to over-dispersion in the data [32]. To account for both within- and between-haul variations in selectivity [33] when estimating the uncertainty for the average size-dependent grid passage probability (model 2), we applied a double bootstrap method using the software tool SELNET [34]. For each species analyzed, 1000 bootstrap repetitions were conducted to estimate the 95% confidence intervals (CI's) (Efron percentile [35]) for the model parameters (C grid, L50 grid, and SR grid) [34]. The effect on grid passage probability of changing grid bar spacing from x to y was evaluated by plotting the estimated grid passage probability curve with CI's for each of these configurations against the equivalent baseline configuration curve for the 19 mm bar spacing design applied in this fishery since 1991 [12]. Further, the difference Δp(l) between the length-dependent passage probabilities of the two configurations was estimated. The 95% CI's for Δp(l) were obtained based on the two bootstrap population results for p x (l) and p y (l), respectively. As these are obtained independently of each other, a new bootstrap population of results for Δp(l) was created using the methods described in [10]. Based on this final bootstrap population, Efron 95% percentile CI's were obtained for Δp(l) as described above.
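The Efron percentile interval and the pairing of two independent bootstrap populations can be sketched as follows. This is an illustration with synthetic numbers, not the SELNET code; function names and values are ours.

```python
import numpy as np

def efron_percentile_ci(boot, alpha=0.05):
    """Efron percentile CI: the alpha/2 and 1 - alpha/2 quantiles
    of a bootstrap population."""
    return tuple(np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

def delta_ci(boot_px, boot_py, n_pairs=1000, seed=1):
    """CI for the difference in passage probability at one length class.
    The two bootstrap populations were generated independently, so a
    bootstrap population for the difference is formed by randomly
    pairing draws from each (the idea behind Eq. (5))."""
    rng = np.random.default_rng(seed)
    diffs = (rng.choice(boot_px, size=n_pairs, replace=True)
             - rng.choice(boot_py, size=n_pairs, replace=True))
    return efron_percentile_ci(diffs)

# Synthetic bootstrap populations for one length class
rng = np.random.default_rng(0)
boot_px = rng.normal(0.80, 0.01, 1000)   # e.g. passage probability, grid x
boot_py = rng.normal(0.60, 0.01, 1000)   # e.g. passage probability, grid y
lo, hi = delta_ci(boot_px, boot_py)
```

A difference is then judged significant at a length class when the resulting CI for Δp(l) excludes zero, which is how the delta plots in the results are read.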
Predicting the effect of grid bar spacing on gear size selectivity

To investigate the potential for improving species and size selectivity in the shrimp trawl by changing the bar spacing in the Nordmøre grid, the combined size selectivity r comb(l) (Fig 1) of a 35 mm codend preceded by Nordmøre grids with different bar spacings was estimated. Using estimates for the grid passage probability p x(l), where subscript x stands for the 17, 19 or 21 mm bar distance, we predicted the combined selectivity for each grid and codend configuration. For the 17 and 21 mm bar spacing grids, these estimates were obtained from this study, while for the 19 mm bar spacing grid, results from [10] were used. For the codend size selection r codend(l), the estimates used for the standard 35 mm diamond mesh codend were obtained from [10,36]. Specifically, using these results, the combined selectivity was predicted as the product of the grid passage probability and the codend retention probability (Eq (6)). This prediction approach is similar to the one applied by [37]. Uncertainties in terms of 95% percentile CI's for r comb(l) were obtained based on the individual bootstrap populations for p x(l) and r codend(l) previously used to estimate the uncertainties for these curves individually. Thus, we estimated the uncertainty for a dual sequential process based on the bootstrap populations used for the uncertainty estimation of the individual processes [36] (Eq (7)), where i denotes the bootstrap repetition index. As resampling was random and independent for both groups of results, it is valid to generate the bootstrap population of results for the product based on (7) using two independently generated bootstrap files [36]. The difference in r comb(l) for using different Nordmøre grid bar spacings was estimated from the two bootstrap populations using the approach described in Eq (5).
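In symbols, the combined selectivity and its bootstrap population can be written as follows (a sketch consistent with the description above; notation may differ slightly from the original):

```latex
% Eq (6): combined size selectivity of grid followed by codend
r_{comb}(l) \;=\; p_x(l) \times r_{codend}(l), \qquad x \in \{17, 19, 21\}\ \mathrm{mm}

% Eq (7): bootstrap population for the combined curve, formed by
% multiplying independently generated bootstrap repetitions i
r_{comb,i}(l) \;=\; p_{x,i}(l) \times r_{codend,i}(l), \qquad i = 1,\dots,1000
```

Because the grid and codend act sequentially and their bootstrap populations were resampled independently, multiplying repetitions index-wise yields a valid bootstrap population for the product curve.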
Predicting the effect of grid bar spacing on catch size distribution in the shrimp trawl To investigate how the application of different Nordmøre grid bar spacings would affect the catch size distribution in the Northern shrimp fishery, we estimated the size-dependent fraction nrPop_l of the populations of shrimp and bycatch species that would be retained in the trawl by: where nPop_l is the population entering the trawl in front of the Nordmøre grid section in terms of number of individuals. For this population, we used the sum for all hauls in this study (i.e., those obtained with the 17 and 21 mm Nordmøre grid bar spacings). Parameter n is the number of individuals in length class l. Uncertainties in terms of 95% percentile CI's for nrPop_l were obtained based on the individual bootstrap populations for r_comb(l) and for nPop_l, following the approach in [38]. The value of the catch efficiency indicator nP+ was estimated for Northern shrimp and the bycatch species. This indicator is often used in fishing gear size selectivity studies to supplement selectivity curve assessment [37,39-42]. It quantifies the retention efficiency of the catch above a specified length L for the population entering the fishing gear [43]. Ideally, nP+ should be high (close to 100) for the target species and low (close to 0) for the bycatch species. Parameter nP+ was estimated by: For Northern shrimp, Eq (9) was used with CL thresholds L set at 15 mm (for industrial shrimp above the MLS) and 20 mm (for commercial-size individuals of higher value), respectively. For bycatch species, Eq (9) was summed over all lengths of measured fish (up to 40 cm length). Further, we quantified the fraction of the catch consisting of undersized Northern shrimp retained by the grids with 17, 19 and 21 mm bar spacings, respectively (nDiscardRatio). The nDiscardRatio was estimated as follows: where h is the number of hauls and i denotes the specific haul with the 17, 19 or 21 mm grid, respectively.
Parameter l is the CL class and MLS is the minimum landing size of shrimp, corresponding to 15 mm CL. The discard ratios are given in percentage and the values should ideally be as low as possible. Grid passage probability. Sea trials and collected data. The sampling effort and catch data for Northern shrimp, cod, and American plaice are presented in Table 1 (S1 and S2 Files). Northern shrimp. Comparing the grid passage probability for Northern shrimp demonstrated no significant difference among the grids with the three bar spacings (i.e., 17 mm vs 19 mm, 21 mm vs 19 mm and 21 mm vs 17 mm) (Fig 2). The contact probability for Northern shrimp was nearly 100% for all three grids (Table 2), meaning that very few shrimp are released through the escape outlet without contacting the grid. The high L50_grid and SR_grid values estimated result from extrapolation by the model and are biologically meaningless considering the size range of Northern shrimp. Therefore, they should be considered only as parameters necessary for model estimation. The model used to represent the experimental data for shrimp fitted well. Thus, the low p-values obtained were most likely a consequence of over-dispersion in the experimental catch portioning data that resulted from working with pooled and subsampled data with low sampling rates (Table 2). Cod. Comparisons of the grid passage probability for cod through the three grid bar spacings demonstrated significant differences between them, i.e., 17 mm vs 19 mm, 21 mm vs 19 mm and 21 mm vs 17 mm (Fig 3). The delta plots illustrate that the difference in grid passage probability is largest between the 17 mm and 21 mm bar spacing grids for cod (Fig 3F). The grid contact probability estimated for cod with the 17 and 21 mm grids was similar and did not differ significantly from the value for C_grid estimated by Larsen et al.
[10] (Table 3). American plaice. Comparing the grid passage probability for American plaice for the 17, 19 and 21 mm grids demonstrated significant differences in two out of the three comparisons (i.e., 17 mm vs 19 mm and 21 mm vs 17 mm). The delta plot for the grid passage probability between the 19 and the 21 mm grids showed no significant differences (Fig 4). The C_grid value estimated was 100% for all three grids (Table 4). The L50_grid estimated for the 21 mm grid was 18.9 cm and differed significantly from that estimated for the 17 mm grid, which was 16.8 cm (Table 4). The L50_grid value for the 19 mm grid differed significantly from the value estimated for the 17 mm grid but did not differ from that estimated for the 21 mm grid (Table 4). Northern shrimp. The predicted retention probability curves for the combined effect of the grid and codend demonstrated no significant difference between the three grids (i.e., bar spacings 17 mm vs 19 mm, 21 mm vs 19 mm, and 21 mm vs 17 mm grids combined with a 35 mm codend) (delta plots in Fig 5). This is also corroborated by the length distributions retained by the three gear configurations with the entry populations of shrimp, which were nearly identical (Fig 6). The catch efficiency indicators (nP+) show that the capture probability for both shrimp > 15 mm CL (i.e., larger than the MLS [11]) and shrimp > 20 mm CL was only slightly different between the bar spacings considering the size structure of the population caught. These differences were not significant between any of the three grids (Table 5). Changing the grid bar spacing from 19 mm to 17 mm only decreased the retention probability by 1.56% for shrimp > 15 mm CL, and by 1.64% for shrimp > 20 mm CL. On the other hand, increasing the grid bar spacing from 19 mm to 21 mm only increased the retention probability for shrimp > 15 mm CL by 0.10%, and by 0.16% for shrimp > 20 mm CL (Table 6).
Further, the discard ratio, which here is the estimated proportion of shrimp below the MLS in the population encountered, was not significantly different for the three grids considered in the study (Table 5). Cod. For cod, the predicted retention probability curves for the combined effect of the grid and codend demonstrated significant differences between the three grid and codend configurations (i.e., 17 mm vs 19 mm, 21 mm vs 19 mm, and 21 mm vs 17 mm grids combined with a 35 mm codend) (Fig 7). The configuration with the 17 mm bar spacing retained significantly less cod than the configuration with the 19 mm bar spacing in the grid. The configuration with the 21 mm bar spacing grid, on the other hand, retained significantly more cod than the configuration with the 19 mm bar spacing grid, a difference that was even more pronounced in the comparison with the 17 mm bar spacing configuration (Fig 7). This is also corroborated by the length distributions retained by the three configurations, which show increased retention of cod in the configurations with larger bar spacing (Fig 8). The estimated catch efficiency indicator (nP+) showed that the catches of cod double when the grid bar spacing is increased from 17 mm to 19 mm, and double again when the bar spacing is increased from 19 mm to 21 mm. The difference in catches is significant between the 17 mm grid and the 21 mm grid considering the size structure of the population caught (Table 5). The results also show a significant decrease in the retention probability for cod of 53.06% when grid bar spacing is decreased from 19 mm to 17 mm and, conversely, a significant increase of 95.85% when grid bar spacing is increased from 19 to 21 mm (Table 6). American plaice.
For American plaice, the predicted retention probability curves for the combined effect of the grid and codend demonstrated a significant difference between the configurations with the 17 mm and 19 mm grids, and between the configurations with the 21 mm and 17 mm grids (Fig 9). No difference was observed between the configurations with the 21 mm and the 19 mm grids (Fig 9). The configuration with the 17 mm bar spacing grid retained significantly less American plaice than that with the 19 mm bar spacing grid (Fig 9). The configuration with the 21 mm bar spacing grid, on the other hand, retained significantly more American plaice than the configuration with the 17 mm bar spacing grid (Fig 9). This is also corroborated by the length distributions retained, which show a similar length distribution for the configurations with the 19 mm and 21 mm grids, whereas the retention for the configuration with the 17 mm bar spacing grid was lower (Fig 10).
Table 5. Catch efficiency indicators nP+ (in %) for Northern shrimp with carapace lengths >15 mm and >20 mm, respectively, and bycatch species (cod and American plaice), and fraction of undersized shrimp (nDiscardRatio) (in %) when using the 17, 19 and 21 mm bar spacing grids combined with the 35 mm codend for the population encountered.
The catch efficiency indicator (nP+) for American plaice shows that the catches significantly increase when grid bar spacing is increased from 17 mm to 19 mm, while they barely change when the bar spacing is increased from 19 to 21 mm considering the size structure of the population caught (Table 5). The differences in catches are also reflected in the change in capture probability for American plaice, which significantly decreases by 32.76% when decreasing grid bar spacing from 19 mm to 17 mm, but barely increases when increasing grid bar spacing from 19 mm to 21 mm (Table 6).
Discussion In the Barents Sea Northern shrimp trawl fishery, the use of a Nordmøre grid with 19 mm bar spacing to supplement codend size selectivity is mandatory to reduce bycatch. In the present study, we investigated the effect of changing grid bar spacing on the size selectivity of Northern shrimp and two bycatch species: cod and American plaice. The results obtained for Northern shrimp show that increasing grid bar spacing from the compulsory 19 mm to 21 mm or, on the contrary, reducing it to 17 mm did not have significant consequences for the passage probability of shrimp (Fig 2D-2F).
Table 6. Change in the catch efficiency indicator nP+ (in %) for Northern shrimp with carapace lengths >15 mm and >20 mm, respectively, and bycatch species (cod and American plaice), and fraction of undersized shrimp (nDiscardRatio) (in %) when using the 17 and 21 mm bar spacings with respect to the 19 mm bar spacing (baseline) for the population encountered.
Further, the retention probability curves for the combined effect of the grid and codend did not show significant differences between the three grid and codend configurations considered (Fig 5D-5F). Northern shrimp have limited swimming capability and, therefore, they are able to pass through the grid if they contact the device with optimal orientation [23]. The results of this study show that the grid passage probability estimated for shrimp is close to 100% for the 17, 19 and 21 mm grids (Table 2). However, the results presented in this study need to be interpreted in light of the population size structure, especially regarding grid passage probability, since it depends on the size structure of the shrimp population entering the trawl. The recorded sizes of Northern shrimp reach up to 32 mm [44].
If the shrimp size structure in a fishing ground were larger than in this study, and if such shrimp were present in sufficient numbers, the portion of the curve with lower passage probabilities could become visible. This would most likely reveal differences between the grids with different bar spacings. Therefore, the results obtained for shrimp grid passage probability in this study should be interpreted keeping in mind that, in areas with larger shrimp than those caught in our study, there could potentially be differences in passage probability for the largest individuals. However, the size structure of Northern shrimp and the bycatch composition obtained in this study are similar to those obtained in several earlier experiments carried out in the Barents Sea during the last two decades [10,16,21-25,45]. This indicates that there is likely little variation in the shrimp population size structure in the Barents Sea, which makes the results presented here applicable to the whole area and the comparisons with the 19 mm grid from Larsen et al. [10] appropriate. Further, the data for the 19 mm grid were collected under similar environmental conditions and by the same crew, using the same vessel and trawl configuration, and following the same sampling protocols. Regarding the selectivity results obtained for the two bycatch species, the grid passage probability of both cod and American plaice increases with increased bar spacing. The difference between the grids is largest when the grids with 17 and 21 mm bar spacing are compared (Figs 3D-3F and 4D-4F). When the predicted retention probability curves for the combined grid and codend configurations are compared for the 17, 19 and 21 mm grids, the results show a pattern similar to that observed for the differences in grid passage probability alone.
Specifically, it shows that the sorting grid has a major role in the overall selection process of the combined gear (Figs 8D-8F and 9D-9F). The catch efficiency indicator (nP+) for shrimp > 15 mm is ca. 24% lower than for the larger shrimp (> 20 mm) for the three grid and codend configurations included in this study. However, the difference is not significant in any case. The difference observed is probably due to the size sorting process in the codend, which becomes more relevant for the smaller sizes of shrimp. The nP+ and discard ratio values were similar for Northern shrimp > 15 mm (individuals above the MLS) and > 20 mm (commercially important sized individuals) for the selective gear configurations tested, showing that the size distributions of the shrimp catches were similar for the three grids. There were significant differences in the catch efficiency indicator values for cod and American plaice. For both species, the nP+ values significantly increased with increased bar spacing. This implies that a larger number of fish was retained, relative to the numbers that entered the gear, with increased bar spacing. The results of this study show that, with the shrimp populations encountered, changing grid bar spacing from 17 to 21 mm does not result in significant changes in shrimp catch, whereas the bycatch of cod and American plaice changes significantly. Therefore, in areas where cod and American plaice populations are abundant and exhibit a size structure similar to that in the current study, changing the compulsory 19 mm grid to a 17 mm grid would reduce the catches of cod by half and the catches of American plaice by one third while catching the same amount of shrimp. These results show the importance of considering the population structure in this and other shrimp fisheries at the time the fishing operations are conducted.
Further, it shows the relevance and potential impact of flexibility in gear choice and supports the use of more dynamic management systems based on new monitoring systems, which is a growing trend in modern fisheries [46]. Several different gear modifications have been tested to improve the selectivity of shrimp trawls. These include, for example, additional grids or sieve panels [10,24], modified codend mesh designs [16,22], and the use of artificial lights [21]. However, the results obtained during those trials were variable. Therefore, more research is required to determine a design that reduces fish and undersized Northern shrimp bycatch while minimizing the loss of target individuals, and we designed this study to examine the effect of the grid bar spacing. Grids with different bar spacings have been tested in other shrimp fisheries around the world (e.g., [18,26-29,47,48]). For example, Hickey et al. [26] tested 22, 25 and 28 mm grids along the fishing grounds of Eastern Canada. The results showed that, while the different grids were effective at reducing bycatch, there was no significant difference in the mean Northern shrimp length captured with any of the grids. In a recent study from the same area (the shrimp grounds off Newfoundland), similar results, i.e., no difference in shrimp retention, were obtained when comparing 19 and 22 mm bar spacings in a Nordmøre grid [29]. Furthermore, it was recorded that both grids retained more than half of all redfish (Sebastes spp.) by numbers. The authors explained the observed escape and retention behaviour of redfish (mainly fish < 15 cm) between the grids as a possible effect of rejected water flow directed towards the escape opening. As in the present study, this result is likely a consequence of the size distribution of Northern shrimp at the time the study was carried out.
The length distributions presented did not contain shrimp ≥ 30 mm, which is probably necessary to detect potential differences between grids with bar spacings ≥ 21 mm. The technical regulations in the Norwegian and Russian sectors of the Barents Sea and other Norwegian shrimp fishing grounds are a trade-off between maximizing the capture of the target species and minimizing the retention of small fish. A reduction of the bycatch of juveniles is generally believed to have a positive effect on stock recruitment [13]. The industry can, on a voluntary basis, use grids with smaller bar spacings than the regulated maximum of 19 mm to improve selectivity. However, from a practical point of view, it is rational for the management to have one selective system, e.g., the 19 mm Nordmøre grid combined with a 35 mm codend, covering all areas within Norwegian (and Russian) jurisdiction. However, species and size composition can change between fishing areas and times of the year. Having the flexibility to reduce grid bar spacing from 19 to 17 mm could provide fishermen access to closed areas where the established bycatch limits [11] would otherwise be exceeded. On the other hand, in areas with large shrimp and low densities of bycatch species, increasing bar spacing could theoretically be beneficial. However, this scenario might be unlikely in the Barents Sea and could not be corroborated by the results of the present study given the recorded size distribution of shrimp. The present study provides insight into the effects of changing grid bar spacing on the size selectivity of Northern shrimp and two common bycatch species in the Barents Sea fishery. However, the results and findings could be applicable to other shrimp fisheries where flexibility in the use of different grid bar spacings may be beneficial to maximize the reduction of unwanted bycatch while minimizing the loss of target species.
Effect of Drying on the Quantity and Composition of Artemisia monosperma Essential Oil and Exploring the Bronchodilator Effect Using Guinea Pig Tracheal Muscles : Artemisia monosperma is a plant with many traditional uses, including some affecting smooth muscles. The A. monosperma essential oil (AMEO) prepared by hydrodistillation from fresh and dry aerial parts was compared qualitatively and quantitatively using GC/MS. The drying process affected the yield and composition of AMEO. The bronchodilator potential was explored using isolated guinea-pig trachea in an ex-vivo organ bath setup. Among the tracheal contractions induced by different spasmogens, AMEO was able to completely relax contractions induced by carbachol (CCh; 1 µM) and high K+ (80 mM) at closely related doses (p > 0.05), indicating dual inhibition of the phosphodiesterase enzyme (PDE) and blockade of voltage-gated L-type Ca++ channels (CCB), as with papaverine. The current study provides scientific support for the medicinal use of A. monosperma in respiratory disorders. Previous Studies In the deserts of Saudi Arabia, A. monosperma grows up to 1 meter in height [1]. A. monosperma is reputed to have an antispasmodic effect. In Jordan, the leaves are applied to induce abortion [2]. The plant is also used traditionally to treat diabetes, rheumatic pain and fever [2,3]. Present Study A sample of 100 g of the fresh aerial parts and 100 g of dried aerial parts, obtained from 250 g of fresh sample after drying for two weeks under controlled lab conditions, were used for A.
monosperma essential oil (AMEO) preparation by hydrodistillation using a Clevenger apparatus for 6 hours. The yield, based on the sample weight used for oil preparation, was 0.77% and 0.5% w/w of the fresh and dry aerial parts, respectively. The components of the AMEO were determined by GC/MS analysis, as well as by comparison of the retention indices with the values of the National Institute of Standards and Technology database (Table 1, Table S1, Figures S1 and S2). The AMEO of the fresh aerial parts was rich in monoterpenes: -pinene (48.7%), -terpinene (25.3%) and L-limonene (6.2%). Previous analyses of AMEO from fresh aerial parts, leaves and stems all contain -pinene as the major component. Many components were also in common, especially shyobunone [4-7]. However, differences exist due to environmental factors. The drying process took place gradually, during which enzymatic activity can continue until the moisture content reaches a critical low level. Enzyme activity optimally requires a moisture content of 45% or more. Both the enzymatic activity and the higher volatility of the lighter monoterpenes account for the changes in the composition of the oil derived from dry aerial parts [8]. The most dramatic loss was in the -pinene percentage, which decreased to 9.7, while the percentages of heavier components such as the sesquiterpenes (+)-Bicyclogermacrene, -Muurolene, -Cadinol and α-Cadinol increased. The increase in the -Elemene percentage may be attributed to both its lower volatility and enzymatic activity during the drying process. The antispasmodic effect of A.
monosperma encouraged us to study the bronchodilator effect of its oil, as we have been interested in studying such activity in many traditional plants [2]. The smooth muscle relaxant effect of many essential oils has been correlated with their terpene contents, such as -pinene and -Cadinol [9-12]. The AMEO was evaluated against CCh- and high K+-evoked bronchospasm using the well-established guinea-pig tracheal muscle model [13]. CCh stimulates the muscarinic (M3) receptors, leading to bronchoconstriction [14]. Solutions with a K+ concentration above 25 mM cause membrane depolarization that opens the voltage-gated L-type Ca++ channels, leading to tracheal contractions [15]. AMEO was able to suppress both CCh- and high K+-initiated tracheal muscle contractions in a concentration-dependent manner, with EC50 = 0.24 mg/mL (0.21-0.28, n=4) and 0.28 mg/mL (0.24-0.32, n=4), respectively (Figure 1). Papaverine, an inhibitor of both Ca++ channels and PDE, showed similar behaviour, with EC50 = 11 µM (0.86-13.42, n=5) and 12.20 µM (10.42-14.86, n=5), respectively (Figure 1) [16]. The standard Ca++ channel blocker verapamil [17] was highly selective in blocking the K+ contractions resulting from the opening of the voltage-gated L-type Ca++ channels, with EC50 = 0.86 µM (0.74-0.98, n=5) and 16.14 µM (14.24-18.56, n=5), respectively (Figure 1) [18]. These findings indicate that AMEO exerts its airway relaxant activity via CCB and PDE inhibitory mechanisms, in a fashion resembling that of papaverine. AMEO, along with the two standards verapamil and papaverine, was challenged against Ca++-induced bronchospasm (Figure 2). The three tested entities were able to markedly attenuate the contraction as well as reduce the maximum response, indicating their effect on the Ca++ channels. The three entities were then tested for their effect on the isoprenaline relaxant effect against CCh-induced bronchoconstriction.
Both AMEO and papaverine potentiated the isoprenaline relaxation, supporting the co-existence of PDE-inhibitory-like activity (Figure 3). Verapamil did not show any potentiation of the isoprenaline relaxation (Figure 3). Isoprenaline is a nonselective β-adrenoceptor agonist that produces airway relaxation by raising the intracellular cAMP concentration. In the respiratory tract, an increase in cAMP concentration can result from two possible mechanisms: β2-agonistic activity and PDE inhibition [19]. The demonstrated enhancement of the isoprenaline relaxant action by AMEO indicates the presence of a PDE-inhibition component in its airway relaxant mechanism. It is reported that PDE inhibitors potentiate the isoprenaline relaxant effect [20]. Based on these findings, the presence of β2-agonistic activity cannot be excluded. The beneficial role of PDE inhibitors in the management of asthma is well established [21]. Their major drawback is their cardiac stimulation effect [22]. Interestingly, Ca++ antagonists have shown beneficial action in the treatment of bronchoconstriction [23] and, in contrast to PDE inhibitors, they exert a suppressant action on the cardiac muscle [24]. The combination of Ca++ channel blocking and PDE inhibiting components in AMEO may serve to oppose the tachycardia accompanying the use of PDE inhibitors alone. This finding supports the concept that natural remedies possess synergistic and/or side-effect-neutralizing potential, which, in addition to their cost effectiveness, offers merit in evidence-based studies [25]. The presence of the dual inhibitory effect on both PDE and Ca++ channels is most probably responsible for the medicinal application of AMEO as a spasmolytic agent.
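As an aside, an EC50 such as the values quoted for AMEO, papaverine and verapamil can be read off a concentration-inhibition curve; a minimal sketch using invented data points and simple log-linear interpolation (the study itself fitted full concentration-response curves, so this is only illustrative):

```python
import numpy as np

# Invented concentration-inhibition data (mg/mL, % of control contraction
# remaining); the study's EC50 values did not come from these numbers.
conc = np.array([0.03, 0.1, 0.3, 1.0, 3.0])
response = np.array([95.0, 80.0, 45.0, 12.0, 2.0])

def ec50_interp(conc, response, level=50.0):
    """Concentration at which the response crosses `level` (i.e. 50%
    inhibition), by linear interpolation on a log-concentration axis."""
    i = np.flatnonzero(response >= level)[-1]   # last point above 50%
    x0, x1 = np.log10(conc[i]), np.log10(conc[i + 1])
    y0, y1 = response[i], response[i + 1]
    frac = (y0 - level) / (y0 - y1)             # position of the crossing
    return 10 ** (x0 + frac * (x1 - x0))

ec50 = ec50_interp(conc, response)              # ~0.26 mg/mL here
```

Interpolating on a log axis matches the usual sigmoidal appearance of concentration-response curves plotted against log concentration.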
Supporting Information Supporting Information accompanies this paper on http://www.acgpubs.org/journal/recordsof-natural-products Artemisia monosperma was collected in April 2023 from the Al-Jubail region (26°56'26.2"N 49°30'22.8"E), eastern part of Saudi Arabia. The plants were authenticated by Dr. Mona Alwahibi, Botany and Microbiology Department, College of Science at KSU. A voucher specimen #MSA 11723 was preserved at the Department of Pharmacognosy, College of Pharmacy, PSAU.
Figure 2. Concentration-response curves of Ca++ with or without increasing concentrations of the AMEO, verapamil and papaverine using guinea-pig tracheal muscle preparations. Values shown are mean ± SEM, n=4-5.
Figure 3. Concentration-response curves of the isoprenaline relaxant effect against carbachol (CCh)-mediated contractions with or without different concentrations of AMEO, papaverine and verapamil using guinea-pig tracheal muscle preparations. Values shown are mean ± SEM, n=4-5.
Table 1. Composition of AMEO of fresh and dry aerial parts
Preparation and measurement: two independent sources of uncertainty in quantum mechanics In the Copenhagen interpretation the Heisenberg uncertainty relation is interpreted as the mathematical expression of the concept of complementarity, quantifying the mutual disturbance necessarily taking place in a simultaneous or joint measurement of incompatible observables. This interpretation has already been criticized by Ballentine a long time ago, and has recently been challenged in an experimental way. These criticisms can be substantiated by using the generalized formalism of positive operator-valued measures, from which a new inequality can be derived, precisely illustrating the Copenhagen concept of complementarity. The different roles of preparation and measurement in creating uncertainty in quantum mechanics are discussed. I. INTRODUCTION The Copenhagen view on the meaning of quantum mechanics largely originated from the consideration of so-called "thought experiments", like the double-slit experiment and the γ microscope. These experiments demonstrate that there is a mutual disturbance of the measurement results in a joint measurement of two incompatible observables A and B (like position Q and momentum P). The Heisenberg-Kennard-Robertson uncertainty relation ∆A ∆B ≥ ½|⟨[A, B]⟩| (1), in which ∆A and ∆B are standard deviations, has often been interpreted as the mathematical expression of this disturbance (in Heisenberg's paper 1 only position Q and momentum P are considered). However, as noted by Ballentine 2 , this uncertainty relation does not seem to have any bearing on the issue of joint measurement, because it can be experimentally tested by measuring each of the observables separately, subsequently multiplying the standard deviations thus obtained. Moreover, such an interpretation is at variance with the standard formalism developed by Dirac and von Neumann, which only allows the joint measurement of compatible observables.
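Since the uncertainty relation refers to the prepared state alone, it can be checked numerically by computing each standard deviation separately in the same state; a small sketch for spin-1/2 observables, assuming the standard Robertson form ΔAΔB ≥ ½|⟨[A,B]⟩|:

```python
import numpy as np

# Pauli matrices: check the Robertson bound for A = σx, B = σy
# in a random pure state |ψ⟩ of a two-level system.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

rng = np.random.default_rng(42)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)          # normalize the state

def expval(op, psi):
    # ⟨ψ|op|ψ⟩ for a Hermitian operator (real part suffices)
    return (psi.conj() @ op @ psi).real

def stdev(op, psi):
    # ΔA = sqrt(⟨A²⟩ − ⟨A⟩²), each term computed in the same state
    return np.sqrt(expval(op @ op, psi) - expval(op, psi) ** 2)

lhs = stdev(sx, psi) * stdev(sy, psi)
comm = sx @ sy - sy @ sx            # [σx, σy] = 2i σz
rhs = 0.5 * abs(psi.conj() @ comm @ psi)
assert lhs >= rhs - 1e-12           # Robertson bound holds
```

Note that nothing in this computation involves a joint measurement: both standard deviations are properties of the single prepared state, which is exactly Ballentine's point.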
According to Ballentine 2 , quantum mechanics is silent about the joint measurement of incompatible observables. If this were true, however, what would this mean for the disturbance idea originating from the "thought experiments"? How could these experiments be useful in clarifying the meaning of a mathematical formalism that is not capable of yielding a description of such experiments? Nowadays measurements like the double-slit experiment are no longer "thought" experiments 3−9 , and complementarity, in the sense of mutual disturbance, has been experimentally demonstrated in an unequivocal way. However, in agreement with Ballentine's observation, the relation of these experiments with the Heisenberg-Kennard-Robertson inequality (1) has proved controversial 10,11 . Whereas Storey et al. 10 conclude that "the principle of complementarity is a consequence of the Heisenberg uncertainty relation," Scully et al. 11 observe that "The principle of complementarity is manifest although the position-momentum uncertainty relation plays no role." Duerr et al. 9 stress that quantum correlations due to the interaction of object and detector, rather than "classical" momentum transfer, enforce the loss of interference in a which-way measurement. In their experiment momentum disturbance is not large enough to account for the loss of interference if the measurement arrangement is changed so as to yield 'which-way' information. Actually, two different questions are at stake here. First, the question might be posed whether the Heisenberg inequality of position and momentum is the relevant one for interference experiments. Second, there is the problem observed by Ballentine, which is the more fundamental question whether the Heisenberg inequality is applicable at all. Contrary to the latter question, the former might be thought to have a relatively simple answer. In general, interference experiments like the one of Ref.
9 are not joint measurements of position and momentum but of a different pair of observables A and B (see section IV for an example). Hence, rather than the inequality ∆Q∆P ≥ ℏ/2, relation (1) for observables A and B seems to be relevant to the experiment. Although position and momentum may also be disturbed by the interaction with the detector, this need not be related to complementarity, because A and B rather than Q and P are involved in the correlations between object and detector. Hence, the controversy could be resolved by pointing out which (incompatible) observables are measured jointly in the experiment. However, we would then have to deal with quantum mechanics' alleged silence with respect to such experiments. It seems that Ballentine's problem with respect to the applicability of (1) to the joint measurement of incompatible observables A and B has more far-reaching consequences, because it points to a fundamental confusion regarding complementarity within the Copenhagen interpretation. This is due to the poor distinction made between the different aspects of preparation and measurement involved in physical experimentation. As a matter of fact, in the Copenhagen interpretation a measurement is not perceived as a means of obtaining information about the initial (pre-measurement) state of the object, but as a way of preparing the object in some final (post-measurement) state. Due to this view on the meaning of "measurement" there is insufficient awareness that both the preparation of the initial state and the measurement may contribute to the dispersion of an observable. The Copenhagen issue of complementarity actually has two different aspects, viz. the aspects of preparation and measurement, which are not distinguished clearly enough.
If such a distinction is duly made, it is not difficult to realize that the notion of "measurement disturbance" should apply to the latter aspect, whereas the Heisenberg-Kennard-Robertson uncertainty relation refers to the former. With no proper distinction between preparation and measurement the Copenhagen interpretation was bound to amalgamate the two forms of complementarity, thus interpreting the Heisenberg-Kennard-Robertson uncertainty relation as a property of (joint) measurement. Unfortunately, remnants of this view are still abundant in the quantum mechanical literature. The purpose of the present paper is to demonstrate that the Copenhagen confusion of preparation and measurement largely is a consequence of the inadequateness of the standard formalism for the purpose of yielding a description of certain quantum mechanical experiments, and joint measurements of incompatible observables in particular. To describe such measurements it is necessary to generalize the quantum mechanical formalism so as to encompass positive operator-valued measures 12 (POVMs); the standard formalism is restricted to the projection-valued measures corresponding to the spectral representations of selfadjoint operators. The generalized formalism will briefly be discussed in sect. III. In sect. IV the generalized formalism will be applied to neutron interference experiments that can be seen as realizations of the double-slit experiment. By employing the generalized formalism of POVMs it is possible to interpret such experiments as joint non-ideal measurements of incompatible observables like the ones considered in the "thought experiments". An inequality, derived from the generalized theory by Martens 13 , yields an adequate expression of the mutual disturbance of the information obtained on the initial probability distributions of two incompatible observables in a joint measurement of these observables.
How both contributions to complementarity can be distinguished in the measurement results obtained in such experiments will be discussed in sect. V. A proof of the Martens inequality is given in Appendix B. II. CONFUSION OF PREPARATION AND MEASUREMENT The confusion of preparation and measurement is already present in the Copenhagen thesis that quantum mechanics is a complete theory. As a consequence of this thesis a physical quantity cannot have a well-defined value preceding the measurement (because this would correspond to an "element of physical reality" as employed by Einstein, Podolsky and Rosen 14 to demonstrate the incompleteness of quantum mechanics). For this reason a quantum mechanical measurement cannot serve to ascertain this value in the way customary in classical mechanics. Heisenberg 15 proposed an alternative for quantum mechanics, to the effect that the value of an observable is well-defined immediately after the measurement, and, hence, is more or less created by the measurement 16 . For Heisenberg his uncertainty relation did not refer to the past (i.e. to the initial state), but to the future (i.e. the final state): it was seen as a consequence of the disturbing influence of the measurement on observables that are incompatible with the measured one. Hence, for Heisenberg a quantum mechanical measurement was a preparation (of the final state of the object), rather than a determination of certain properties of the initial state. As emphasized by Ballentine, the interpretation of the Heisenberg-Kennard-Robertson uncertainty relation usually found in quantum mechanics textbooks, is in disagreement with Heisenberg's views, because in the textbook view this relation is not considered a property of the measurement process but, rather, of the initial object state. Also Bohr 17,18 did not draw a clear distinction between preparation and measurement. 
He always referred to the complete experimental arrangement (often indicated as "the measuring instrument") serving to define the measured observable. For Bohr the uncertainty relation (1) was an expression of our limitations in jointly defining complementary quantities (like position and momentum) within the context of a measurement. He did not distinguish different phases of the measurement. More particularly he did not distinguish different contributions to complementarity from the preparation of the initial state and from the disturbance by the measurement. According to Bohr the uncertainty relation refers to the "latitudes" of the definition of incompatible observables within the context of a well-defined measurement arrangement, deemed valid for the measurement as a whole. Incidentally, we see a manifest difference here with Heisenberg's views, a difference that may have confused anyone trying to understand the Copenhagen interpretation as a consistent way of looking at quantum mechanics. Moreover, the discrepancy between the Copenhagen interpretations of the uncertainty relation (viz. as a property of the measurement, either during this measurement (Bohr), or afterwards (Heisenberg)) and the textbook interpretation (viz. as a property of the preparation preceding the measurement) has added to the confusion. Obviously, two completely different issues are at stake here, corresponding to different forms of complementarity. As stressed by Ballentine, the Heisenberg-Kennard-Robertson uncertainty relation (1), in which ∆A and ∆B are standard deviations in separately performed measurements, should be taken, in agreement with the textbook interpretation, as referring to the preparation of the initial state. On the other hand, the Copenhagen idea of complementarity in the sense of mutual disturbance in a joint measurement of incompatible observables is certainly not without a physical basis. Thus, in the double-slit experiment (cf.
figure 1) Bohr demonstrated that, if the quantum mechanical character of screen S is taken into account, our possibility to define the position and momentum of a particle passing the slits is limited by the Heisenberg uncertainty relation of the screen observables z_S and p_{z_S}, ∆z_S ∆p_{z_S} ≥ ℏ/2. (2) As a matter of fact 19 , the lower bounds with which the latitudes δz and δp_z of particle position and momentum are defined are equal to the standard deviations ∆z_S and ∆p_{z_S}, respectively. Hence, these latitudes must satisfy the inequality δz δp_z ≳ ℏ/2. (3) In Heisenberg's terminology this inequality can be interpreted as expressing a lower bound for the disturbing influence exerted by the measuring instrument on the particle, thus causing the post-measurement state of the object to satisfy an uncertainty relation. Inequality (3) should be distinguished from the uncertainty relation satisfied by the standard deviations ∆z and ∆p_z of position and momentum of the particle in its initial state, ∆z ∆p_z ≥ ℏ/2. (4) Whereas inequality (4), being an instance of inequality (1), does not refer in any way to joint measurement of position and momentum, but can be interpreted as a property of the preparation of the object preceding the measurement, inequality (3) does refer to the measurement process, since it is derived from a relation (viz. (2)) satisfied by a part of the measurement arrangement (screen S). Unfortunately, in discussions of the double-slit experiment such a distinction usually is not made. On the contrary, equating the quantities δz and δp_z from (3) with the standard deviations ∆z and ∆p_z, the derivation of (3) is generally interpreted as an illustration of relation (4). As a consequence it is not sufficiently realized that preparation of the initial state and (joint) measurement are two distinct physical sources of uncertainty, yielding similar but physically distinct uncertainty relations that express different forms of complementarity.
Only the former one is represented by a relation (viz. (1)) which can straightforwardly be derived from the standard formalism. Bohr's analysis of the double-slit experiment demonstrates that there is a second form of complementarity, which is not a property of the preparation of the initial state as represented by the Heisenberg-Kennard-Robertson relation, but which is due to mutual disturbance in a joint measurement of position and momentum. One important cause of the mixing up of the two forms of complementarity is the fact, as stressed by Ballentine, that the quantum mechanical formalism as axiomatized by von Neumann and Dirac defies a description of the joint measurement of incompatible observables. In particular, such a measurement would have to yield joint probability distributions of the incompatible observables. However, within the standard formalism no mathematical quantities can be found that are able to play such a role. Thus, according to Wigner's theorem 20 no positive phase-space distribution functions f(q, p) exist that are linear functionals of the density operator ρ such that ∫dp f(q, p) = ⟨q|ρ|q⟩ and ∫dq f(q, p) = ⟨p|ρ|p⟩. Also von Neumann's projection postulate is often interpreted as prohibiting the joint measurement of incompatible observables, since there is no unambiguous eigenstate that can serve as the final state of such a measurement. For this reason only measurements of one single observable, for which the Heisenberg-Kennard-Robertson relation has an unambiguous significance, are usually considered in axiomatic treatments. On the other hand, Ballentine's judgment with respect to the inability of the quantum mechanical formalism to deal with the second kind of complementarity seems to be too pessimistic.
Thus, for specific measurement procedures generalized Heisenberg uncertainty relations have been derived 7,8,21,22 , different from the Heisenberg-Kennard-Robertson relation, in which the uncertainties seem to contain contributions from both sources. Moreover, in the following it will be demonstrated that the generalized quantum mechanical formalism is able to deal with the two forms of complementarity separately, thus distinguishing the contributions due to preparation and (joint) measurement. III. GENERALIZED MEASUREMENTS In the generalized quantum mechanical formalism the notion of a quantum mechanical measurement is generalized so as to encompass measurement procedures that can be interpreted as joint measurements of incompatible observables of the type considered in the "thought experiments". A possibility to do so is offered by the so-called operational approach 12 , in which the interaction between object and measuring instrument is treated quantum mechanically, and measurement results are associated with pointer positions of the latter. If ρ and ρ_a are the initial density operators of object and measuring instrument, respectively, and U the unitary evolution operator of the measurement interaction, then the probability p_m of a measurement result a_m is obtained as the expectation value of the spectral representation {E_m^(a)} of some pointer observable of the measuring instrument in the final state, p_m = Tr_{oa} U(ρ ⊗ ρ_a)U† (I ⊗ E_m^(a)). This quantity can be interpreted as a property of the initial object state, p_m = Tr_o ρ M_m, M_m = Tr_a (I ⊗ ρ_a) U† (I ⊗ E_m^(a)) U. The quantum mechanical formalism is generalized to a certain extent by the operational approach. Whereas in the standard formalism quantum mechanical probabilities p_m are represented by the expectation values of mutually commuting projection operators, in the generalized formalism they are the expectation values of the elements M_m of a positive operator-valued measure, i.e. a set of operators satisfying M_m ≥ O and Σ_m M_m = I, which in general do not commute. After having generalized the notion of a quantum mechanical observable it is possible to define a relation of partial ordering between observables, expressing that the measurement represented by one POVM can be interpreted as a non-ideal measurement of another 13 .
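The operational construction of a POVM from an object-apparatus interaction can be sketched numerically. The ingredients below are arbitrary stand-ins, not a model of any particular instrument: a random unitary for the interaction, a pure ready state for the apparatus, and the apparatus basis projectors as pointer observable. The check is only that the resulting operators M_m form a POVM:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    # random unitary via QR decomposition of a complex Gaussian matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

do, da = 2, 3                    # object and apparatus Hilbert-space dimensions
U = haar_unitary(do * da)        # stand-in for the measurement interaction
rho_a = np.zeros((da, da), complex)
rho_a[0, 0] = 1.0                # apparatus in its ready state
# pointer observable: projectors E_m onto the apparatus basis states
E = [np.diag((np.arange(da) == m).astype(float)) for m in range(da)]

def trace_out_apparatus(X):
    # partial trace over the second (apparatus) tensor factor
    return X.reshape(do, da, do, da).trace(axis1=1, axis2=3)

# M_m = Tr_a[(I x rho_a) U^dag (I x E_m) U]: the POVM of the measurement
M = [trace_out_apparatus(np.kron(np.eye(do), rho_a) @
                         U.conj().T @ np.kron(np.eye(do), Em) @ U)
     for Em in E]
total = sum(M)
```

For any choice of U and ρ_a the operators M_m come out Hermitian, positive, and summing to the identity on the object space, which is all the generalized formalism requires; in general they do not commute.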
Thus, we say that a POVM {R_m} represents a non-ideal measurement of the (generalized or standard) observable {M_{m′}} if the following relation holds between the elements of the POVMs: R_m = Σ_{m′} λ_{mm′} M_{m′}, λ_{mm′} ≥ 0, Σ_m λ_{mm′} = 1. (5) The matrix (λ_{mm′}) is the non-ideality matrix. It is a so-called stochastic matrix 23 . Its elements λ_{mm′} can be interpreted as conditional probabilities of finding measurement result a_m if an ideal measurement had yielded measurement result a_{m′}. In the case of an ideal measurement the non-ideality matrix (λ_{mm′}) reduces to the unit matrix (δ_{mm′}). As an example we mention photon counting using an inefficient photon detector (quantum efficiency η < 1), for which the probability of detecting m photons during a time interval T can be found (cf. Kelley and Kleiner 24 ) as p_m(T) = Tr ρ N[((η a†a)^m / m!) e^{−η a†a}] (6) (in which a† and a are photon creation and annihilation operators, and N is the normal ordering operator). Defining the POVM {R_m} of the inefficient measurement by means of the equality p_m(T) = Tr ρ R_m, it is not difficult to prove that R_m can be written in the form R_m = Σ_{n=m}^∞ λ_{mn} |n⟩⟨n|, (7) with |n⟩⟨n| the projection operator projecting on number state |n⟩, and λ_{mn} = C(n, m) η^m (1 − η)^{n−m}, (8) C(n, m) denoting the binomial coefficient. For η = 1 the non-ideality matrix is seen to reduce to the unit matrix, and the POVM (7) to coincide with the projection-valued measure corresponding to the spectral representation of the photon number observable N = Σ_{n=0}^∞ n |n⟩⟨n|. Non-ideality relations of the type (5) are well-known from the theory of transmission channels in the classical theory of stochastic processes 25 , where the non-ideality matrix describes the crossing of signals between subchannels. It should be noted, however, that, notwithstanding the classical origin of the latter subject, the non-ideality relation (5) may be of a quantum mechanical nature. Thus, the interaction of the electromagnetic field with the inefficient detector is a quantum mechanical process, just like the interaction with an ideal photon detector is. Relations of the type (5) are abundant in the quantum theory of measurement.
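For the inefficient photon detector, the binomial non-ideality matrix λ_{mn} = C(n, m) η^m (1 − η)^{n−m} can be tabulated directly (truncated at a finite photon number for the sketch):

```python
from math import comb

def nonideality_matrix(eta, n_max):
    # lam[m][n]: probability of registering m counts when n photons are
    # present; each column (fixed n) is a probability distribution over m
    return [[comb(n, m) * eta**m * (1 - eta)**(n - m) if m <= n else 0.0
             for n in range(n_max + 1)]
            for m in range(n_max + 1)]

lam = nonideality_matrix(0.6, 12)
ideal = nonideality_matrix(1.0, 12)   # eta = 1: reduces to the unit matrix
```

Besides stochasticity and the η = 1 limit, the table shows the bias discussed in sect. V: the mean registered count for n incident photons is η n, smaller than the ideal value.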
They can be employed to characterize the quantum mechanical idea of mutual disturbance in a joint measurement of incompatible observables. Generalizing the notion of quantum mechanical measurement to the joint measurement of two (generalized) observables, it seems reasonable to require that such a measurement should yield a bivariate joint probability distribution p_mn, satisfying p_mn ≥ 0, Σ_{mn} p_mn = 1. Here m and n label the possible values of the two observables measured jointly, corresponding to pointer positions of two different pointers (one for each observable) being jointly read for each individual preparation of an object. It is assumed that, analogously to the case of single measurement, the probabilities p_mn of finding the pair (m, n) are represented in the formalism by the expectation values Tr ρ R_mn of a bivariate POVM {R_mn}, R_mn ≥ O, Σ_{mn} R_mn = I, in the initial state of the object. Then the marginal probabilities {Σ_n p_mn} and {Σ_m p_mn} are expectation values of the POVMs {M_m = Σ_n R_mn} and {N_n = Σ_m R_mn}, respectively, which correspond to the (generalized) observables jointly measured. In Appendix A it is proven that, if the observables corresponding to the POVMs {M_m} and {N_n} are standard observables (i.e. if the operators M_m and N_n are projection operators), then joint measurement is only possible if these observables commute 26 . This result, derived here from the generalized formalism, corroborates the standard formalism for those measurements to which the latter is applicable. Note, however, that in general commutativity of the operators M_m and N_n is not a necessary condition for joint measurability of generalized observables (see section IV for an example). The notion of joint measurement can be extended in the following way.
We say that a measurement, represented by a bivariate POVM {R_mn}, can be interpreted as a joint non-ideal measurement of the observables {M_m} and {N_n} if the marginals {Σ_n R_mn} and {Σ_m R_mn} of the bivariate POVM {R_mn} describing the joint measurement represent non-ideal measurements of the observables {M_m} and {N_n}. Then, in accordance with (5), two non-ideality matrices (λ_{mm′}) and (µ_{nn′}) should exist such that Σ_n R_mn = Σ_{m′} λ_{mm′} M_{m′}, Σ_m R_mn = Σ_{n′} µ_{nn′} N_{n′}. (9) It is possible that {M_m} and {N_n} are standard observables. To demonstrate that the joint measurement scheme given above is a useful one, neutron interference experiments will be discussed in the next section as an example satisfying the definition of a joint non-ideal measurement of two standard observables. It should be noted that this example is not an exceptional one, but can be supplemented by many others 27,28,29,30 . For instance, in an analogous way "eight-port optical homodyning" 5 can be interpreted as a joint non-ideal measurement of the observables Q = (a + a†)/√2 and P = (a − a†)/i√2 of a monochromatic mode of the electromagnetic field. If {M_m} and {N_n} are standard observables the non-idealities expressed by the non-ideality matrices (λ_{mm′}) and (µ_{nn′}) can be proven 13 to satisfy the characteristic traits of the type of complementarity that is due to mutual disturbance in a joint measurement of incompatible observables as dealt with in the "thought experiment". A measure of the departure of a non-ideality matrix from the unit matrix is required for this. A well-known quantity serving this purpose is Shannon's channel capacity 25 . Here we consider a closely related quantity, viz. the average row entropy of the non-ideality matrix (λ_{mm′}), J(λ) = −(1/N) Σ_{mm′} λ_{mm′} ln [λ_{mm′} / Σ_{m″} λ_{mm″}], (10) which (restricting to square N × N matrices) satisfies the following properties: 0 ≤ J(λ) ≤ ln N, with J(λ) = 0 if (λ_{mm′}) is the unit matrix, and J(λ) = ln N if all rows of (λ_{mm′}) are constant. Hence, the quantity J(λ) vanishes in the case of an ideal measurement of observable {M_{m′}}, and obtains its maximal value if the measurement is uninformative (i.e.
does not yield any information on the observable measured non-ideally) due to maximal disturbance of the measurement results. For a joint non-ideal measurement as defined by (9), the non-idealities of both non-ideality matrices (λ_{mm′}) and (µ_{nn′}) can be quantified in a similar way. In Appendix B it is demonstrated that for a joint non-ideal measurement of two standard observables A = Σ_m a_m M_m and B = Σ_n b_n N_n, with eigenvectors |a_m⟩ and |b_n⟩, respectively, the non-ideality measures J(λ) and J(µ) obey the following inequality: J(λ) + J(µ) ≥ −2 ln [max_{mn} |⟨a_m|b_n⟩|]. (11) It is evident that (11) is a nontrivial inequality (the right-hand side unequal to zero) if the two observables A and B are incompatible in the sense that the operators do not commute. I shall refer to inequality (11) as the Martens inequality. It is important to note that this inequality is derived from relation (9), and, hence, must be satisfied in any measurement procedure that can be interpreted as a joint non-ideal measurement of two incompatible standard observables 31 . In relation (9) only the observables (i.e. the measurement procedures) are involved. Contrary to the Heisenberg-Kennard-Robertson inequality (1), the Martens inequality is completely independent of the initial state of the object. Hence, the Martens inequality does not refer to the preparation of the initial state, but to the measurement process. The Martens inequality should be clearly distinguished from the entropic uncertainty relation 32,33 for the standard observables A = Σ_m a_m M_m and B = Σ_n b_n N_n, H_{{M_m}} + H_{{N_n}} ≥ −2 ln [max_{mn} |⟨a_m|b_n⟩|], (12) in which H_{{M_m}} = −Σ_m (Tr ρ M_m) ln(Tr ρ M_m) and H_{{N_n}} are the Shannon entropies of the probability distributions of A and B in the initial state ρ. IV. NEUTRON INTERFERENCE EXPERIMENTS [Figure 2: Neutron interferometer] Instead of the classical double-slit experiment we shall consider an interference experiment performed with neutrons 34,35,36 . Due to the simplicity of its mathematical description this experiment yields a better illustration of the problem of complementarity due to mutual disturbance in a joint measurement of incompatible observables than is provided by the "thought experiment".
The interferometer consists of a silicon crystal with three parallel slabs (cf. figure 2) in which the neutron can undergo Bragg reflection. A neutron impinging in A at the Bragg angle is then either transmitted in the same direction or Bragg reflected. Hence, the neutron may take one of two possible paths. After reflection in the middle slab (B resp. C) the partial waves of the two paths are brought into interference again in the third slab (D). After that the neutron may be found in one of the two out-going beams by detector D 1 or D 2 . Since it is possible to achieve a separation of the paths by several centimeters in the interferometer, it is possible to influence each of the partial beams separately (cf. figure 3). For instance, we can insert an aluminum plate into one of the paths, causing a phase shift χ of the partial wave, depending on the plate's thickness. By varying the thickness an interference pattern is obtained when registering the number of neutrons detected by detector D 1 (or D 2 ). Summhammer, Rauch and Tuppinger 35 performed experiments in which, apart from a phase shifter, an absorbing medium was also inserted into one of the paths (indicated in figure 3 by its transmission coefficient a), consisting of gold or indium plates. Then the interference pattern also depends on the value of a. The visibility of the interference is maximal if a = 1. In such a case we have a pure interference experiment. If the absorbing plate is very thick (such that a = 0) every neutron taking that path will be absorbed. In that case it is certain that a neutron that is registered by one of the detectors has taken the other path. Then we have a pure "which path" measurement, in which the visibility of the interference pattern completely vanishes. For 0 < a < 1 the situation is an intermediate one. This situation will be considered in the following. 
Whereas the experiments corresponding to the limiting values a = 1 and a = 0 can be dealt with using the standard formalism, this is not the case for the intermediate values of a. Let |k_1⟩ and |k_2⟩ correspond to the plane waves that impinge at the Bragg angle (cf. figure 3). It is assumed 36 that each Bragg reflection induces a phase shift of π/2 in the reflected wave. The phase shifter changes the phase of the wave passing it by χ; the absorber alters its amplitude by a factor √a. Thus, for an arbitrary incoming state |in⟩ = α|k_1⟩ + β|k_2⟩, |α|² + |β|² = 1, we find 27 the following out-going state: |out⟩ = c_1|k_1⟩ + c_2|k_2⟩ + c_3|abs⟩, c_1 = −(1/2)[(1 + √a e^{iχ})α + i(1 − √a e^{iχ})β], c_2 = (i/2)[(1 − √a e^{iχ})α + i(1 + √a e^{iχ})β], c_3 = −√((1 − a)/2) (α − iβ). (13) Here |abs⟩ denotes the state of the absorbed neutron, assumed to be orthogonal to |k_1⟩ and |k_2⟩. With p_1 = |c_1|², p_2 = |c_2|², p_3 = |c_3|², (14) the measured detection probabilities are related to the incoming state, p_m = ⟨in|M_m|in⟩, thus yielding an operational definition of the POVM {M_1, M_2, M_3} representing the generalized observable measured in the Summhammer-Rauch-Tuppinger experiment. We first consider the limits a = 1 and a = 0. From (13) and (14) for a = 1 we find that p_3 = 0 and M_1 = Q_1, M_2 = Q_2, with Q_1 and Q_2 projection operators, in the two-dimensional representation of the vectors |k_1⟩ and |k_2⟩ being represented by the matrices Q_1 = (1/2) [[1 + cos χ, sin χ], [sin χ, 1 − cos χ]], Q_2 = (1/2) [[1 − cos χ, −sin χ], [−sin χ, 1 + cos χ]]. (15) The standard observable having these operators as its spectral representation will be referred to as the interference observable. For a = 0 we analogously find M_1 = M_2 = P_+/2, M_3 = P_−, with P_+ and P_− projection operators represented by the matrices P_+ = (1/2) [[1, i], [−i, 1]], P_− = (1/2) [[1, −i], [i, 1]]. (16) Also the operators P_+ and P_− constitute a spectral representation of a standard observable, the path observable, which is incompatible with the interference observable. For 0 < a < 1 the operators M_m, m = 1, 2, 3, are found in an analogous way, according to M_1 = (1/4) [[1 + a + 2√a cos χ, 2√a sin χ + i(1 − a)], [2√a sin χ − i(1 − a), 1 + a − 2√a cos χ]], M_2 = (1/4) [[1 + a − 2√a cos χ, −2√a sin χ + i(1 − a)], [−2√a sin χ − i(1 − a), 1 + a + 2√a cos χ]], M_3 = (1 − a) P_−. (17) It is important to note that in this case the operators M_1, M_2 and M_3 are not projection operators. Only in the limits a = 1 and a = 0 is there a direct link with a standard observable.
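The POVM of the interference experiment can be checked numerically. The sketch below assumes an idealized treatment (50/50 slabs, reflection phase i, absorber and phase shifter in one path), builds each element as M_m = c_m† c_m from the out-going amplitudes, and verifies completeness, positivity, and the a = 1 and a = 0 limits:

```python
import numpy as np

def neutron_povm(a, chi):
    # rows c_m of out-going amplitudes: p_m = |c_m . (alpha, beta)|^2,
    # so the POVM elements are the rank-one operators M_m = c_m^dag c_m
    e = np.sqrt(a) * np.exp(1j * chi)
    c1 = 0.5 * np.array([1 + e, 1j * (1 - e)])
    c2 = 0.5 * np.array([1j * (1 - e), -(1 + e)])
    c3 = -np.sqrt((1 - a) / 2) * np.array([1, -1j])     # absorbed component
    return [np.outer(c.conj(), c) for c in (c1, c2, c3)]

chi = 0.9
P_plus = 0.5 * np.array([[1, 1j], [-1j, 1]])            # path projector
Q1 = 0.5 * np.array([[1 + np.cos(chi), np.sin(chi)],
                     [np.sin(chi), 1 - np.cos(chi)]])   # interference projector
M = neutron_povm(0.3, chi)
total = sum(M)
M_a1 = neutron_povm(1.0, chi)   # pure interference measurement
M_a0 = neutron_povm(0.0, chi)   # pure which-path measurement
```

For every a the three operators sum to the identity; at a = 1 the absorption element vanishes and M_1 becomes the projector Q_1, while at a = 0 the detector elements degenerate to P_+/2 each.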
For the majority of experiments (0 < a < 1) the detection probabilities are not described by the expectation values of the spectral representation of one single selfadjoint operator (as would be the case within the standard formalism). It is possible, using definition (9), to interpret the neutron interference experiment as a joint non-ideal measurement of the interference and path observables defined by (15) and (16). In order to do so the operators M_m of the experiment are ordered in a bivariate form according to R_{+1} = M_1, R_{+2} = M_2, R_{−1} = R_{−2} = M_3/2, (18) with m = +, − labeling the path outcomes and n = 1, 2 the interference outcomes. Then the marginals {Σ_m R_{m1}, Σ_m R_{m2}} and {Σ_n R_{+n}, Σ_n R_{−n}} can easily be verified to satisfy the conditions (9) for non-ideal measurements of the interference and path observables, respectively, with non-ideality matrices (µ_{nn′}) = (1/2) [[1 + √a, 1 − √a], [1 − √a, 1 + √a]], (λ_{mm′}) = [[1, a], [0, 1 − a]]. (19) It is interesting to consider the a dependence of the non-ideality matrices (19). For a = 0 we have λ_{mm′} = δ_{mm′}, µ_{nn′} = 1/2. In this case the path measurement is ideal, whereas the non-ideality of the interference measurement is maximal (the corresponding POVM is given by {I/2, I/2}, implying that the POVM's expectation values do not provide information about the incoming state of the neutron). For a = 1 the situation is just the opposite. Then λ_{+m′} = 1, λ_{−m′} = 0, µ_{nn′} = δ_{nn′}. Now the interference measurement is ideal, and the path measurement is uninformative. For 0 < a < 1, in which case the standard formalism is not applicable, both measurements are non-ideal. In going from a = 0 to a = 1 the non-ideality of the path measurement increases; that of the interference measurement decreases. For the non-ideality measures J(λ) and J(µ) defined by (10) we obtain J(λ) = ((1 + a)/2) ln(1 + a) − (a/2) ln a, J(µ) = −((1 + √a)/2) ln((1 + √a)/2) − ((1 − √a)/2) ln((1 − √a)/2). From the parametric plot in figure 4 it can be seen that the Martens inequality (11), here reading J(λ) + J(µ) ≥ ln 2, is satisfied. This illustrates the impossibility that both non-ideality measures J(λ) and J(µ) jointly have a small value. Figure 4 clearly illustrates the idea of complementarity as this arises in the "thought experiments".
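The a dependence of the two non-idealities and the Martens inequality can be scanned numerically. The sketch below uses non-ideality matrices interpolating between the a = 0 and a = 1 limits quoted in the text (λ from survival probability a on the absorber path, µ from fringe contrast √a), and the average-row-entropy measure J:

```python
import math

def avg_row_entropy(lam):
    # J(lambda) for an N x N stochastic matrix: entropy of each normalized
    # row, weighted by the row's total weight, averaged with factor 1/N
    N = len(lam)
    tot = 0.0
    for row in lam:
        s = sum(row)
        for x in row:
            if x > 0:
                tot += x * math.log(x / s)
    return -tot / N

def neutron_nonidealities(a):
    # path: a neutron in the absorber beam survives with probability a and
    # is then registered as if it took the free beam; otherwise it is absorbed
    lam = [[1.0, a], [0.0, 1.0 - a]]
    # interference: fringe contrast sqrt(a) mixes the two fringe outcomes
    r = math.sqrt(a)
    mu = [[(1 + r) / 2, (1 - r) / 2], [(1 - r) / 2, (1 + r) / 2]]
    return avg_row_entropy(lam), avg_row_entropy(mu)

# Martens bound for the path/interference pair: -2 ln(1/sqrt(2)) = ln 2
martens_bound = math.log(2)
```

Scanning a from 0 to 1 reproduces the trade-off of figure 4: J(λ) grows from 0 to ln 2 while J(µ) falls from ln 2 to 0, and their sum never drops below ln 2, touching the bound exactly at the two extreme values of a.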
If a is varied, then the measurement arrangement is altered. For Bohr this would signify a different definition of the path and interference observables for each different value of a, the "latitudes" of the definition of the observables depending on a. For Heisenberg the path observable is disturbed more by the measurement process as a increases, whereas the interference observable is disturbed less. Both would interpret this as an expression of the complementarity of the path and interference observables, due to the fact that the operators P_+ and P_− do not commute with Q_1 and Q_2. Evidently, the a dependence of the non-ideality matrices (λ_{mm′}) and (µ_{nn′}) precisely expresses the complementarity that is connected with the mutual disturbance in a joint non-ideal measurement of the incompatible interference and path observables. It should be noted, however, that there also is a difference with Heisenberg's disturbance ideas. In the neutron interference experiment the non-idealities do not refer to the final object state, but to the information obtained on the initial state. Hence, these quantities do not refer to the preparative aspect of measurement (as is the case in Heisenberg's interpretation of his uncertainty relation), but to the determinative one. Contrary to the standard formalism, the generalized formalism as embodied by (9) is capable of referring to the past (rather than to the future), even if measurements are involved in which measurement disturbance plays an important role. The generalized formalism enables us to consider quantum mechanical measurements in the usual determinative sense, and allows us to distinguish this determinative aspect from the question in which (post-measurement) state the object is prepared by the measurement. Incidentally it is noted that the marginals {Σ_m R_{m1}, Σ_m R_{m2}} and {Σ_n R_{+n}, Σ_n R_{−n}} of the bivariate POVM (18) constitute a non-commuting pair of generalized observables jointly measured. V.
DISCUSSION For unbiased non-ideal measurements, i.e. measurements for which the non-ideal and the ideal versions in (5) yield the same expectation values for the operators Σ_m a_m M_m and Σ_m a_m R_m, the non-ideality matrix (λ_{mm′}) should satisfy the equality a_{m′} = Σ_m a_m λ_{mm′}. If we restrict ourselves to unbiased non-ideal measurements it also is possible to demonstrate that there are two sources of uncertainty by using standard deviations. Thus, using the notation r_m = Tr ρ R_m, p_m = Tr ρ M_m, the relation r_m = Σ_{m′} λ_{mm′} p_{m′} between the probability distributions {p_m} (of the ideal measurement) and {r_m} (obtained in the non-ideal one) is found from (5). For unbiased measurements the standard deviation of the measurement results a_m of observable A = Σ_m a_m M_m, obtained in the non-ideal measurement, can easily be seen to satisfy the equality ∆({r_m})² = ∆({p_m})² + Σ_{m′} ∆²_{m′} p_{m′}, (20) with ∆²_{m′} = Σ_m a_m² λ_{mm′} − (Σ_m a_m λ_{mm′})². The quantity (20) consists of two different contributions, i) the contribution ∆({p_m})² obtained in an ideal measurement, which is independent of the parameters of the measurement arrangement and, for this reason, interpretable in the usual way as a property of the initial state of the object, and ii) a contribution Σ_{m′} ∆²_{m′} p_{m′} due to the non-ideality of the measurement procedure. Also it is not difficult to see that ∆({r_m}) ≥ ∆({p_m}). If in a joint non-ideal measurement of two incompatible observables A and B both non-ideal measurements are unbiased, then for the joint non-ideal measurement the generalized Heisenberg uncertainty relation ∆({r_m}) ∆({s_n}) ≥ (1/2) |Tr ρ [A, B]| (21) ({s_n} the non-ideally measured probability distribution of observable B) immediately follows from the Heisenberg-Kennard-Robertson relation (1). A disadvantage of (21) is that not all non-ideal measurements are unbiased. For instance, as easily follows from (8), detector inefficiency will cause the average measured photon number to be smaller than the ideal one. For this reason (21) is not universally valid.
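The two-term decomposition (20) can be illustrated with a small unbiased example. The 3 × 3 non-ideality matrix below is hypothetical, constructed so that each column preserves the ideal expectation value for eigenvalues (−1, 0, 1); the non-ideal variance then splits exactly into the ideal variance plus the column-variance term:

```python
# hypothetical unbiased non-ideality matrix: for eigenvalues a = (-1, 0, 1)
# every column m' satisfies sum_m a[m] * lam[m][m'] = a[m'] (unbiasedness)
def make_lam(s):
    return [[1.0, s, 0.0],
            [0.0, 1.0 - 2.0 * s, 0.0],
            [0.0, s, 1.0]]

a = [-1.0, 0.0, 1.0]

def variance(dist, vals):
    mean = sum(v * p for v, p in zip(vals, dist))
    return sum(v * v * p for v, p in zip(vals, dist)) - mean * mean

s = 0.2
lam = make_lam(s)
p = [0.5, 0.3, 0.2]   # an arbitrary ideal probability distribution
r = [sum(lam[m][mp] * p[mp] for mp in range(3)) for m in range(3)]
col_var = [variance([lam[m][mp] for m in range(3)], a) for mp in range(3)]
lhs = variance(r, a)                                   # Delta({r_m})^2
rhs = variance(p, a) + sum(cv * pm for cv, pm in zip(col_var, p))
```

The equality lhs = rhs holds identically for every unbiased λ and every initial distribution; the second term of rhs is never negative, which is the statement ∆({r_m}) ≥ ∆({p_m}).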
Moreover, in the expressions for ∆({r_m}) and ∆({s_n}) the two contributions to uncertainty are merged into one single quantity. An inequality analogous to (21) that is valid for biased measurements too might be obtained by combining the entropic uncertainty relation (12) with the Martens inequality (11), thus yielding (H_{{M_m}} + J(λ)) + (H_{{N_n}} + J(µ)) ≥ −4 ln [max_{mn} |⟨a_m|b_n⟩|]. (22) However, it is evident that it is not very meaningful to do this, because in (22) the two different contributions are once again merged, thus veiling their different origins. As follows from (12) and (11) both sources satisfy their own inequality. The opportunity entropic quantities offer for exhibiting this seems to be an important advantage of these quantities over the widely used standard deviations. It has occasionally been noted 22 that for specific measurement procedures an uncertainty relation for the joint measurement of incompatible observables can be formulated in terms of standard deviations. It is not at all clear, however, whether a relation exists that is comparable to the Martens inequality and valid for all quantum mechanical measurements interpretable as joint measurements of incompatible standard observables. Failure to distinguish the different contributions to uncertainty represented by the different terms in (20) and (22) is at the basis of the Copenhagen confusion with respect to the uncertainty relations originating with the discussion of the double-slit experiment. Because no clear distinction was drawn between preparation and measurement, these could not be properly distinguished as different sources of "uncertainty", both contributing in their own way. Since inequality (3) refers to the measurement process rather than to the preparation of the initial state, it should be compared to the Martens inequality rather than to the Heisenberg-Kennard-Robertson one.
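The preparative contribution entering (22), i.e. the entropic uncertainty relation (12), can be spot-checked numerically for a qubit: two bases related by an arbitrary rotation angle, random pure initial states, and the bound −2 ln max|⟨a_m|b_n⟩|. This is a sampled check of relation (12), not a proof:

```python
import numpy as np

rng = np.random.default_rng(1)

def shannon(p):
    # Shannon entropy of a probability vector (zero terms dropped)
    p = p[p > 1e-15]
    return float(-(p * np.log(p)).sum())

# two incompatible qubit observables: the standard basis and a rotated one
theta = 0.7   # theta = pi/4 would give maximal incompatibility
A = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
B = [np.array([np.cos(theta), np.sin(theta)]),
     np.array([-np.sin(theta), np.cos(theta)])]
bound = -2.0 * np.log(max(abs(np.vdot(u, v)) for u in A for v in B))

violations = 0
for _ in range(200):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    HA = shannon(np.array([abs(np.vdot(u, psi)) ** 2 for u in A]))
    HB = shannon(np.array([abs(np.vdot(v, psi)) ** 2 for v in B]))
    if HA + HB < bound - 1e-9:
        violations += 1
```

In contrast to the Martens inequality, the entropies here depend on the sampled initial state, which is exactly the preparative character of (12) stressed in the text.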
The fact that (3) has the same mathematical form as (4) is caused by the more or less accidental circumstance that the uncertainty induced by the measurement process in the double-slit example is a consequence of the preparation uncertainty of a part of the measurement arrangement (viz. screen S) described by (2). However, as demonstrated by the neutron interference example, the measurement disturbance seems to originate more generally in the quantum mechanical character of the whole interaction process of object and measuring instrument. Fluctuations of the latter may be a part of this, but need not always play an essential role in the complementarity issue. It is important to stress that the Martens inequality is obtained from the generalized formalism, which is capable of describing measurements represented by POVMs. The founders of the Copenhagen interpretation did not have this formalism at their disposal. Indeed, in the "thought experiments" a measurement is always thought to be represented by a selfadjoint operator (i.e. a projection-valued measure). In the example of neutron interferometry this implies a restriction to the extreme values a = 0 and a = 1. The restriction to these extreme values was responsible for the view in which interference is completely disturbed in a 'which-way' measurement (and vice versa). This, indeed, is confirmed by the limiting values of the non-ideality matrices (19), yielding an uninformative marginal for path if interference is measured ideally (and vice versa). In the intermediate region 0 < a < 1 information on both observables is obtained, albeit information that is disturbed in the way described by the Martens inequality. From the generalized formalism it is clear that in the neutron interference experiment complementarity of the interference and path observables (15) and (16) is at stake.
Nevertheless, as is evident from the recent discussion referred to above 9,10,11 , this effect is still sometimes associated with the Heisenberg inequality for position and momentum. It seems that in this discussion the confusion between complementarity of preparation and measurement still exists. Of course, since a measurement may also be a preparation procedure for a post-measurement state of the object, the Heisenberg inequality $\Delta Q\,\Delta P \geq \hbar/2$ (as well as inequality (1) for any choice of observables A and B) should also hold in the post-measurement object state. This, however, is independent of this procedure being a measurement. As a matter of fact, Q and P must satisfy the Heisenberg inequality in the post-measurement state independently of which observables A and B have been measured jointly. Complementarity in the sense of mutual disturbance in a joint measurement of incompatible observables, as characterized by the Martens inequality, does not refer to the preparation of the post-measurement state, but to a restriction with respect to obtaining information on the initial object state. Apart from this difference, the Martens inequality nevertheless seems to be the mathematical expression of the Copenhagen concept of complementarity, viz. mutual disturbance in a joint (or simultaneous) measurement of incompatible observables. It seems that the physical intuition that was expressed by the "thought experiments" was perfect in this respect. However, confusion had to arise because of the impossibility of dealing with joint measurements of incompatible observables using the standard formalism. Bohr and Heisenberg were led astray by the availability of the uncertainty relation (4) (or, more generally, (1)) following from this latter formalism, unjustifiably thinking that this relation provided a materialization of their physical intuition.

APPENDIX A

In this Appendix it is proven that standard observables A and B can be measured jointly if and only if they commute.
Thus, let $M_m$ and $N_n$ be projection operators of the spectral representations of $A$ and $B$, with $M_m = \sum_n R_{mn}$, $N_n = \sum_m R_{mn}$, and $\{R_{mn}\}$ a POVM. Then $[M_m, N_n]_- = O$ and $R_{mn} = M_m N_n$. The proof makes use of a well-known property of positive operators, stating that if $B$ is a positive operator and $P$ a projection operator satisfying $B \leq P$, then $B = PBP$. Since $R_{mn} \leq M_m$, if $M_m$ is a projection operator we have $R_{mn} = M_m R_{mn} M_m$. Since $R_{mn} \leq N_n$, also $R_{mn} = N_n R_{mn} N_n$. Hence, $M_m = \sum_n R_{mn} = \sum_n N_n R_{mn} N_n$. Because of $N_n N_{n'} = \delta_{nn'} N_n$, multiplying this expression from both sides by $N_n$ yields $M_m N_n = N_n M_m = N_n R_{mn} N_n$.

APPENDIX B

In this appendix a derivation is given 13 of the Martens inequality (11). We shall restrict ourselves to maximal standard observables for which the operators $M_m$ and $N_n$ are one-dimensional projection operators. From $M_m |a_{m'}\rangle = \delta_{mm'} |a_{m'}\rangle$, $N_n |b_{n'}\rangle = \delta_{nn'} |b_{n'}\rangle$ and $\sum_n R_{mn} = M_m$ it is not difficult to see that $J^{(\lambda)}$, and analogously $J^{(\mu)}$, can be written as expressions in which the arguments of the functions $H_{\{M_m\}}$ and $H_{\{N_n\}}$ are positive operators with trace equal to 1. Therefore it is possible to use the well-known inequality 37
$H_{\{M_m\}}\bigl(\sum_n p_n \rho_n\bigr) \geq \sum_n p_n H_{\{M_m\}}(\rho_n), \quad O < \rho_n < I, \; \mathrm{Tr}\,\rho_n = 1, \; 0 \leq p_n \leq 1, \; \sum_n p_n = 1 \qquad (23)$
to find a lower bound to $J^{(\lambda)}$ (and analogously for $J^{(\mu)}$). Taking in (23)
$p_n = \frac{\mathrm{Tr}\,R_{mn}}{\mathrm{Tr}\sum_{n'} R_{mn'}}, \qquad \rho_n = \frac{R_{mn}}{\mathrm{Tr}\,R_{mn}},$
we obtain a lower bound (24) on $J^{(\lambda)}$; analogously we find a bound (25) on $J^{(\mu)}$. From (24) and (25) a combined lower bound then follows. Since also $R_{mn}/\mathrm{Tr}\,R_{mn}$ is a positive operator with trace 1, we can use inequality (12), with $\sum_{mn} R_{mn} = I$, $\mathrm{Tr}\,I = N$ and $\mathrm{Tr}\,M_m N_n = |\langle a_m|b_n\rangle|^2$, to arrive at the Martens inequality (11). ✷
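The two ingredients of this derivation — concavity of the entropy, inequality (23), and the entropic uncertainty relation (12) — can be spot-checked numerically. The sketch below is ours, not part of the paper; the dimension, bases, and states are arbitrary random choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def vn_entropy(rho):
    """von Neumann entropy H(rho) = -Tr rho ln rho (natural log)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

def shannon(p):
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

def rand_unitary(d):
    """Haar-like random unitary via QR of a complex Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

d, n = 4, 5

# Inequality (23): entropy of a mixture >= mixture of entropies (concavity).
rhos = []
for _ in range(n):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = a @ a.conj().T
    rhos.append(m / np.trace(m).real)
p = rng.random(n); p /= p.sum()
mix = sum(pi * ri for pi, ri in zip(p, rhos))
lhs23 = vn_entropy(mix)
rhs23 = sum(pi * vn_entropy(ri) for pi, ri in zip(p, rhos))

# Inequality (12): H_{M} + H_{N} >= -2 ln max_{mn} |<a_m|b_n>| for any pure state.
A, B = rand_unitary(d), rand_unitary(d)   # columns are the eigenbases {|a_m>}, {|b_n>}
c = np.abs(A.conj().T @ B).max()          # maximal overlap of the two bases
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
HA = shannon(np.abs(A.conj().T @ psi) ** 2)   # outcome distribution in basis A
HB = shannon(np.abs(B.conj().T @ psi) ** 2)   # outcome distribution in basis B
```

Both checks pass for any seed, reflecting that (23) and (12) are state-independent operator inequalities.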
Excitonic density wave and spin-valley superfluid in bilayer transition metal dichalcogenide

Artificial moiré superlattices in 2d van der Waals heterostructures are a new venue for realizing and controlling correlated electronic phenomena. Recently, twisted bilayer WSe2 emerged as a new robust moiré system hosting a correlated insulator at moiré half-filling over a range of twist angle. In this work, we present a theory of this insulating state as an excitonic density wave due to intervalley electron–hole pairing. We show that exciton condensation is strongly enhanced by a van Hove singularity near the Fermi level. Our theory explains the remarkable sensitivity of the insulating gap to the vertical electric field. In contrast, the gap is weakly reduced by a perpendicular magnetic field, with quadratic dependence at low field. The different responses to electric and magnetic field can be understood in terms of pair-breaking versus non-pair-breaking effects in a BCS analog of the system. We further predict superfluid spin transport in this electrical insulator, which can be detected by optical spin injection and spatial-temporal imaging.

Reviewer #1 (Remarks to the Author):

1. The authors claimed that they can explain the magnetic field dependence of the suppression of the insulating gap. However, the authors only considered the Zeeman effect of the perpendicular magnetic field. When the magnetic field is as large as several Teslas, the orbital effects would dominate. Experimentally, Landau levels were observed in Ref. 1. Therefore, it is hard to relate the results of the authors to the experiment as the authors only considered the Zeeman effect.

2. Concerning the optical detection of the spin superfluid state, the authors proposed to generate local spin polarization through circularly polarized light and then use spatial-temporal resolved circular dichroism spectroscopy for the detection of the diffusion of the spin polarization.
In monolayer transition metal dichalcogenides, light excites electrons from the valence bands to the conduction bands and the relevant energy scale is on the order of eV. In the current proposal, the relevant energy scale (such as the excitonic insulator gap) is on the order of a few meV. How can ordinary optical methods be used without generating a large number of unwanted excitations?

3. For the detection using circular dichroism spectroscopy, the spatial resolution of the detection is limited by the wavelength of light, which is at least on the order of several hundred nanometers even for visible light. How can such a low spatial resolution method be used to detect the proposed spin transport?

The current work is timely and interesting. However, I believe that the authors have not thought through the details of the problem carefully. In its current form, the work falls short of the standard of Nature Communications.

Reviewer #2 (Remarks to the Author):

The authors propose an intervalley excitonic condensate as a candidate for the insulating phase recently observed in twisted bilayer WSe2. The authors employ a mean field analysis to study such a phase arguing, in particular, that the presence of a van Hove singularity close to half-filling drives the instability leading to excitonic pairing. They also discuss the possible response of this phase to different perturbations such as chemical potential or out-of-plane field. In addition, they propose an experimental setting to test their proposed state. The manuscript is well written and addresses an interesting question in the rapidly expanding field of moiré materials. Intervalley excitonic condensates of the type proposed have been studied in the context of graphene moiré materials starting from the early works, Refs.
46, 47, 48, and more recently in arXiv:1901.08110 and 1911.02045.

Response to Reviewers' comments (NCOMMS-20-23082)

We sincerely thank all three reviewers for taking their time to carefully review our work and prepare very insightful reports. To summarize, all reviewers found the results in our manuscript interesting and timely but asked for more effort to clarify the detailed applicability of our theoretical approach and the feasibility of our proposed optical experiments. We have carefully addressed all the questions/comments in this reply. We have also made changes in the presentation of the main text to make the story as clear as possible. We believe the reviewers' insightful comments have greatly helped to improve the readability and technical robustness of our manuscript. In the following, we prepared a point-by-point reply, in which the red text marks the changes in the main text. The corresponding changes are also marked red in the revised manuscript as required.

Referees' comments:

Reviewer #1 (Remarks to the Author):

This is a very timely work which studies the nature of the insulating state observed in twisted bilayer transition metal dichalcogenides. The authors suggested that the experimentally observed insulating phase near 4 degrees of twist angle, which is very sensitive to displacement field, is indeed an excitonic density wave formed by the pairing of electrons and holes in mini-bands at different valleys. They pointed out that the higher-order van Hove singularity of the bands plays an important role in enhancing the excitonic condensation. These results are interesting and timely. However, I have a few potentially serious concerns and reservations as listed below:

We are glad that the reviewer finds our results interesting and timely. We also agree that the issues that are raised by the reviewer below are important and should be better addressed. We provide detailed explanations to each specific question.
Guided by these questions/comments, we also improve our presentation in the main text.

1. The authors claimed that they can explain the magnetic field dependence of the suppression of the insulating gap. However, the authors only considered the Zeeman effect of the perpendicular magnetic field. When the magnetic field is as large as several Teslas, the orbital effects would dominate. Experimentally, Landau levels were observed in Ref. 1. Therefore, it is hard to relate the results of the authors to the experiment as the authors only considered the Zeeman effect.

As nicely pointed out by the reviewer, the magnetic field would lead to two major effects, namely the Zeeman splitting of the spins and the orbital effect which leads to Landau level physics. We completely agree with the reviewer that the orbital effect will have important consequences in high magnetic field. In our manuscript we have only considered the Zeeman splitting for the following reasons. First of all, due to the angular momentum carried by the valley degree of freedom, the effective g-factor in the TMD system is large. As demonstrated in Ref 49, the effective g-factor can be as large as 10. Therefore, we expect a much larger energy splitting of the states from the two valleys than the ordinary spin Zeeman splitting. Secondly, in the experiment of Ref 1, the correlated insulating state survives up to 5 T perpendicular magnetic field. In this range of magnetic field, we can estimate the amount of flux going through each moiré unit cell. For twist angle 4 degrees, the moiré unit cell has a lattice constant of ~5 nm and consequently the magnetic flux through the unit cell is $0.052\,\Phi_0$, a small fraction of the magnetic flux quantum. We can do a more quantitative comparison of energy scales. At 5 T, the Zeeman splitting is as large as $E_Z = 2 g \mu_B B \approx 2 \times 10 \times 5.79 \times 10^{-2}\,\mathrm{meV/T} \times 5\,\mathrm{T} = 5.79$ meV due to the enhanced g-factor.
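These back-of-the-envelope numbers can be reproduced in a few lines. The sketch below is ours, not from the manuscript; the triangular unit-cell area (√3/2)a² with a = 5 nm and the flux quantum h/2e are assumptions chosen to match the quoted 0.052:

```python
# Physical constants (SI, with the Bohr magneton in eV/T)
h    = 6.62607015e-34        # Planck constant, J s
hbar = 1.054571817e-34       # reduced Planck constant, J s
e    = 1.602176634e-19       # elementary charge, C
me   = 9.1093837015e-31      # electron mass, kg
muB  = 5.7883818060e-5       # Bohr magneton, eV/T

B, g = 5.0, 10.0             # field (T) and enhanced g-factor from the text
mstar = 0.44 * me            # WSe2 hole effective mass from the text
a = 5e-9                     # moire lattice constant, ~5 nm at 4 degrees

E_Z = 2 * g * muB * B * 1e3                # Zeeman/valley splitting, meV
E_LL = hbar * e * B / mstar / e * 1e3      # Landau-level spacing hbar*omega_c, meV
A_cell = (3 ** 0.5 / 2) * a ** 2           # triangular moire unit-cell area, m^2
flux_frac = B * A_cell / (h / (2 * e))     # flux per cell in units of h/2e (assumed)
```

This gives E_Z ≈ 5.79 meV and flux_frac ≈ 0.052, as quoted; it also evaluates the cyclotron scale ℏeB/m* ≈ 1.3 meV that the reply compares against next.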
We can also estimate the Landau level splitting due to the orbital effect of the magnetic field, $\hbar\omega_c = \hbar e B/m^* \simeq 1.3$ meV, which is rather small due to the large effective mass of the WSe2 hole bands ($m^* \sim 0.44\,m_e$). Therefore, the range of magnetic field in the experiment can still be viewed as the weak field regime and we expect the orbital effect to be small compared to that of the Zeeman splitting. We understand that the calculation of the insulating gap in our treatment is not reliable in the high field regime. However, the statement about the non-pair-breaking effect of the magnetic field is made in the weak field regime where the Zeeman field approximation is trustworthy. Therefore, we have added a few sentences stressing this caution and weakened our statement in the high field regime. We add on page 3 section C after Eq. (5): "Due to the orbital angular momentum in TMD systems, the holes in the valence band have a large renormalized $g$-factor\cite{TMDgfactor}. Therefore, the primary effect of a weak magnetic field is Zeeman spin/valley splitting. In contrast to α and μ, the Zeeman coupling maps to the chemical potential in a superconductor and its effect is non-pair-breaking." On page 4 we add: "In addition, for all α, the quasiparticle gap decreases slowly with B⊥ in the weak field regime, which indicates the effect of out-of-plane magnetic field is non-pair-breaking to the leading order. For larger B⊥, the quasiparticle gap is reduced in an approximately linear way. Furthermore, the orbital effect becomes important in the high field regime, which is beyond the scope of this work." We also change the sentence about the magnetic field effect in the abstract to the following: "Our theory explains the remarkable sensitivity of the insulating gap to the vertical electric field. In contrast, our theory shows that the insulating gap is reduced mildly by a perpendicular magnetic field, with quadratic dependence at low field.
These physics can be understood in terms of pair-breaking versus non-pair-breaking effects in a BCS analog of the system."

2. Concerning the optical detection of the spin superfluid state, the authors proposed to generate local spin polarization through circularly polarized light and then use spatial-temporal resolved circular dichroism spectroscopy for the detection of the diffusion of the spin polarization. In monolayer transition metal dichalcogenides, light excites electrons from the valence bands to the conduction bands and the relevant energy scale is on the order of eV. In the current proposal, the relevant energy scale (such as the excitonic insulator gap) is on the order of a few meV. How can ordinary optical methods be used without generating a large number of unwanted excitations?

We thank the referee for raising this excellent question. This helps us to reflect on more details of our proposal and make some improvements which we now include in our main text. We translate the reviewer's question into the following. In order for the proposed experiments to work, we have to achieve the following two goals: 1) the efficiency of conversion from optical excitations to pure valley imbalance should be as high as possible; 2) the density of the generated valley-imbalanced hole excitations should be small compared to the full-filling density of the moiré sub-band (in order not to overwhelm the moiré physics). Luckily both of these goals have already been achieved in TMD and TMD-based moiré systems by our experimental colleagues from Berkeley and Cornell (Refs 23 and 18). Therefore, we do think it is conceivable that our proposal (in the modified version) could be successful. In the following, we briefly discuss how the two issues could be resolved in our system. The first issue, the efficiency of conversion, is very critical for the proposed experiment.
If one starts with a monolayer of WSe2, unfortunately the conversion rate of optical excitations to pure valley-imbalanced holes is low (~0.1% to 1% depending on the sample quality; see Ref.). Experimentally, this is due to the valley exchange interactions of the excitons, which can quickly wash out the valley information of the optically generated excitons before the electrons recombine with the holes. Such an effect is indeed a possible drawback for the proposed experiment in the twisted bilayer WSe2 of our paper. However, this problem has a very nice remedy by using a WS2/WSe2 heterostructure (Refs 23 and 18). Here we borrow a figure from Ref 23 to explain the mechanism by which an additional WS2 layer enhances the conversion rate. (A) By pumping with circularly polarized light, excitons in the K valley of WSe2 are selectively excited. Before the intervalley exchange happens, an ultrafast interlayer charge transfer process (between WS2 and WSe2) takes place and efficiently converts the excitons into excess holes within ~100 fs. (B) Electrons in WS2 recombine with K valley and K′ valley holes in WSe2 with almost equal probabilities, resulting in an excess of K valley holes and a deficiency of K′ valley holes in WSe2. This process has been demonstrated in experiments to give almost perfect (~100%) conversion of the optical excitations to valley-imbalanced holes (Ref. 23), which is the key condition for the spin dynamics measurement later. Therefore, in our twisted bilayer WSe2 case, we expect that adding another layer of WS2 (presumably with a large twist angle to avoid an interfering moiré superlattice) can also greatly enhance the conversion efficiency of valley-imbalanced holes. [Redacted] The second issue is easy to fix. With a fixed conversion rate, the density of valley-imbalanced holes is essentially determined by the power of light used in the optical pumping.
In the existing experiments on the WS2-WSe2 heterostructure (Ref 23 without moiré superlattice, Ref 18 with moiré superlattice), the optically generated imbalanced hole density can range from 10^10/cm^2 to 10^12/cm^2. This actually is at a sweet spot for exploring moiré physics. The full filling of the moiré sub-band in our twisted bilayer WSe2 case is ~10^13/cm^2 for twist angle ~4 degrees. The generated valley-imbalanced holes can indeed be viewed as a small perturbation to the moiré physics and will not swamp it. Perhaps the "unwanted excitations" mentioned in the reviewer's comment refer to the worry that the optically generated valley-imbalanced holes may initially have a broad distribution in energy in the moiré sub-band. We think this will not be the case, at least in our setting. Since all the experiments are conducted at temperatures lower than the insulating gap (which is much smaller than the moiré bandwidth ~100 meV), the distribution of holes will quickly equilibrate within a valley even if the initial distribution is relatively broad. In summary, we think the current twisted bilayer WSe2 system (with an additional WS2 layer) has all the key ingredients to facilitate the measurement of spin dynamics using an optical pump-probe scheme. We have added a few sentences on the mechanism of pure valley-polarized holes and the revised proposal (in section III): "Moreover, spin polarizations have been shown to be easily generated and probed via circularly polarized light in TMDs due to the spin-valley locking and valley-selective coupling to chiral photons. In particular, Ref [23] and Ref [18] demonstrate that in TMD heterostructures there is a nearly perfect conversion from optically generated chiral excitons to spin/valley polarized holes, as well as a spin diffusion length on the order of $10\sim20\mu m$, much longer than the wavelength of the pump/probe light.
Therefore, we anticipate that twisted TMDs can provide an ideal platform for studying spin superfluid transport with a fully optical setup." We also include a footnote on the experimental proposal: "[57] In the experiment, one can add an additional layer of WS$_2$ intentionally misaligned with the twisted bilayer WSe$_2$ in order to enhance the conversion rate of spin/valley polarization from optical pumping\cite{optical2019,SpinSF3}." We hope these revisions clear things up.

3. For the detection using circular dichroism spectroscopy, the spatial resolution of the detection is limited by the wavelength of light, which is at least on the order of several hundred nanometers even for visible light. How can such a low spatial resolution method be used to detect the proposed spin transport?

We thank the reviewer for this remark on the length scale of the spin dynamics. However, we think this is not an issue in the current situation. We would like to again refer the reviewer to the existing experiments measuring spin dynamics in TMD (Ref 23) and TMD moiré systems (Ref 18). In Ref 23 (WSe2-WS2 heterostructure without moiré physics), measurements of the spin diffusion current signal up to the scale of 8 μm have been successfully performed; such a length scale is in fact one order of magnitude larger than the wavelength of the light applied to generate the spin/valley imbalance (the pump-probe light has energy ~1.8 eV, corresponding to a wavelength of ~700 nm). The fact that one can observe such long-range spin diffusion is mainly attributed to the well-preserved valley conservation symmetry, which is also present in our system. Ref 18 (WSe2-WS2 heterostructure with a long-range moiré superlattice) finds remarkably that in the half-filling correlated insulating state of a moiré sub-band the spin lifetime is actually enhanced (2~4 times longer) compared to the metallic states. This would warrant a diffusion length on the order of 20 μm, much larger than the wavelength of the light.
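To make the length-scale argument concrete, here is a minimal 1D diffusion-plus-decay sketch. All numbers are hypothetical stand-ins (D = 1 cm²/s = 100 μm²/μs, lifetime τ = 4 μs, a diffraction-limited 0.5 μm initial spot), chosen only so that the diffusion length √(2Dτ) comes out at a few tens of μm, as in the experiments cited above:

```python
import numpy as np

# Hypothetical parameters (not from the paper); units of um and us throughout.
D, tau = 100.0, 4.0                 # diffusion constant and spin lifetime
dx, dt, steps = 0.5, 1e-3, 4000     # dt < dx^2/(2D) for stability; total t = 4 us

x = np.arange(-100.0, 100.0 + dx, dx)
S = np.exp(-x**2 / (2 * 0.5**2))    # diffraction-limited spot, sigma0 = 0.5 um

sigma0 = float(np.sqrt((S * x**2).sum() / S.sum()))
total0 = float(S.sum())
for _ in range(steps):
    lap = np.zeros_like(S)
    lap[1:-1] = (S[2:] - 2 * S[1:-1] + S[:-2]) / dx**2
    S = S + dt * (D * lap - S / tau)   # explicit Euler: diffusion + decay

sigma = float(np.sqrt((S * x**2).sum() / S.sum()))   # rms width after 4 us
```

After 4 μs the rms width is ≈28 μm (σ² ≈ σ0² + 2Dt), far beyond the ~0.7 μm optical wavelength, so diffraction-limited imaging easily resolves the spreading spin cloud.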
While the mechanism of the enhanced spin lifetime is not fully determined, this gives us further confidence that meaningful measurements could be conducted in the correlated insulating state of our system. Besides, the above-mentioned systems only have diffusive spin dynamics. Our system, which we predict to host a spin-superfluid state, could potentially support even longer coherent spin transport. Therefore, we think the spatial resolution is not an issue for detecting the spin transport.

The current work is timely and interesting. However, I believe that the authors have not thought through the details of the problem carefully. In its current form, the work falls short of the standard of Nature Communications.

We again thank the reviewer for recognizing our timely and interesting work. The comments/questions helped us greatly improve our manuscript. In the above reply, we address the detailed problems raised by the reviewer point by point and revise the paper accordingly. We hope to convince the reviewer that we have thought through the proposed experiment and that it is indeed feasible in our setting. Hopefully, the revised manuscript can now meet the high standard of Nature Communications.

Reviewer #2 (Remarks to the Author):

The authors propose an intervalley excitonic condensate as a candidate for the insulating phase recently observed in twisted bilayer WSe2. The authors employ a mean field analysis to study such a phase arguing, in particular, that the presence of a van Hove singularity close to half-filling drives the instability leading to excitonic pairing. They also discuss the possible response of this phase to different perturbations such as chemical potential or out-of-plane field. In addition, they propose an experimental setting to test their proposed state. The manuscript is well written and addresses an interesting question in the rapidly expanding field of moiré materials.
Intervalley excitonic condensates of the type proposed have been studied in the context of graphene moiré materials starting from the early works, Refs. 46, 47, 48, and more recently in arXiv:1901.08110 and 1911.02045. The main new ingredient here is that, due to spin-orbit coupling, the excitonic intervalley coherent pairing is also associated with a non-trivial spin structure which the authors propose can be detected experimentally.

We thank the reviewer for the very nice summary of our work. In the revised version, we have added some new references including 1901.08110 and 1911.02045. Indeed, moiré materials is a rapidly developing field. We feel obligated to credit relevant works and hope this will better serve the community and advance the field of moiré physics. In the following, we will address the questions/comments from the reviewer point by point.

Below I provide a more detailed point by point criticism:

1. In the introduction, the authors argue that the smallness of the insulating gap rules out a Mott scenario, yet their weak coupling estimate for the gap scales as the third power of the interaction which will generally be quite large. They argue that in intermediate to strong coupling, the gap scale is instead given by W^2/V but then they mention that in this case the phase is actually a Mott phase.

We want to be clear that our claim is that in the strongly coupled regime the insulating gap scales linearly with the interaction strength V. However, the ordering temperature of the inter-valley exciton condensate, not the insulating gap, scales as W^2/V. This limit is an analog of the BEC regime in the BCS-BEC crossover. It is also similar to the case of the AFM ordering temperature in the parent compounds of the cuprates: the Mott gap is on the order of the Hubbard interaction U, but the AFM ordering temperature is on the order of the exchange interaction J ~ t^2/U. That being said, we must clarify that the curve of Tc in Fig.
2(a) is not obtained from an actual calculation but from an educated guess based on BCS-BEC crossover physics.

In addition, all their calculations are performed in the weak coupling limit V=0.1W which is relatively far from the relevant experimental scales. The authors should clearly explain whether there is a distinction between their proposed phase and a Mott phase at strong coupling and also comment on the compatibility of the gap scale obtained in their calculation with experiments.

We thank the reviewer for raising this point. In our current manuscript, we consider approaching the problem of the correlated insulator from the weak coupling limit, which is motivated by the following considerations. First, the energy scale of the interaction V compared to the bandwidth W is indeed small (V/W ~ 0.3–0.5) for the large twist angles in the experiment; this is quite different from the case of the twisted bilayer graphene system, where the interaction V is on the same order as, or even larger than, W. (Even for TBG, Hartree-Fock mean field study is a common practice within the community; see for example npj Quantum Materials 4, 16, Phys. Rev. Lett. 124, 097601, Phys. Rev. Lett. 124, 166601, Phys. Rev. X 10, 031034, 1911.03760.) In addition, the Mott gap observed in transport experiments is rather small, ∼3 meV. Therefore, the system is not deep in the Mott regime, where the Mott gap would be of the same order as V. Next, changing the displacement field or doping the correlated insulator results very quickly in a metallic state, where potential signatures of superconductivity are also observed. A weak coupling theory can naturally capture both the correlated insulator and the metal phase in the same framework. In our main text we present mostly the results of the V=0.1W case, where the mean field approximation is reliable. (Similar mean field calculations with V=0.5W have been presented in the supplementary section.)
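The role of the density of states in setting the gap scale can be illustrated with the BCS-analog gap equation at perfect nesting. The sketch below is ours rather than the authors' actual calculation: it solves the zero-temperature gap equation 1 = V∫dε N(ε)/(2√(ε²+Δ²)) by bisection for a flat density of states versus a logarithmically diverging (van Hove) one, both normalized to unit weight, at V = 0.1W:

```python
import numpy as np

W, V = 1.0, 0.1
eps = np.logspace(-16, 0, 4000) * W      # energy grid on (0, W], log-spaced

def gap(dos):
    """Bisection in log(Delta) on 1 = V * int_{-W}^{W} de N(e)/(2 sqrt(e^2+D^2))."""
    def f(logD):
        Delta = 10.0 ** logD
        # N(e) even in e: half the symmetric integral equals the integral over (0, W]
        return V * np.trapz(dos(eps) / np.sqrt(eps**2 + Delta**2), eps) - 1.0
    lo, hi = -12.0, 0.0                  # f decreases with Delta: f(lo)>0 > f(hi)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 10.0 ** (0.5 * (lo + hi))

flat = lambda e: np.full_like(e, 1.0 / (2 * W))   # constant DOS, total weight 1
vhove = lambda e: np.log(W / e) / (2 * W)         # log-divergent DOS, total weight 1

d_flat, d_vh = gap(flat), gap(vhove)
```

The flat-DOS gap comes out exponentially small (Δ ≈ W/sinh(2W/V) ≈ 4×10⁻⁹ W), while the log-divergent DOS enhances it by several orders of magnitude, illustrating the van Hove enhancement invoked in the reply above.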
For V=0.1W in the case of perfect nesting, our calculation gives an insulating gap close to the experimental value. One should also bear in mind that a simple mean field treatment usually exaggerates the size of the order parameter, while transport experiments usually give a smaller gap due to disorder averaging. In the real experiments, the interaction V is probably bigger than 0.1W, closer to 0.3W. However, one should also take into account the fact that the nesting condition is never perfect, which corresponds to finite α (and higher order terms in the dispersion, which become important for large momenta away from the van Hove singularities). For example, with V=0.3W and α=0.17, we find the mean field gap is ~3 meV, close to the experimental value. (We add a new paragraph and a plot of the insulating gap as a function of α for various interactions in section IV D.) An honest comparison between experiments and theory would demand a more sophisticated study of the band structure using, for example, large-scale DFT, which is beyond the scope of the paper. Our simple model is set to provide an intuitive picture for the many-body physics in the system and indeed gives good qualitative results for the insulating gap and other behaviors compared to the experiment. As for the last comment, about the distinction between the weak coupling insulating state and the strong coupling state, it is deeply connected to the next question about the topology of the relevant moiré band. Therefore, we feel it is more appropriate to include the answer together with the next question.

2. The authors do not discuss at all the effects of band topology which may strongly affect the scenario they propose here. In particular, the two valleys are related by time-reversal symmetry so in principle each valley can have a non-vanishing Chern number. This possibility is realized in graphene-based moiré materials, e.g.
twisted bilayer graphene aligned on top of hBN (1901.08209, 1901.08110) and twisted double bilayer graphene (1903.08685). It has also been discussed in TMD homobilayers (Ref. 15). Even if the total Chern number in each band vanishes, the local Berry curvature may be large. This may be unimportant at weak coupling, but at intermediate coupling relevant to the realistic system, it may have important ramifications for the scenario proposed here since the excitonic pairing would take place between opposite Chern bands, making it very different from bilayer quantum Hall systems. In particular, it means that the "superconducting" analog obtained by performing a particle-hole transformation in one valley now corresponds to superconducting pairing in a band with non-vanishing Chern number, which is generally not favored. The effect of band topology on excitonic intervalley coherent pairing was discussed in arXiv:1901.08110 in the context of twisted bilayer graphene on top of an hBN substrate. The authors should include a detailed discussion of the global and local band topology and explain how intervalley coherent pairing can survive in the current context.

We thank the reviewer for this insightful comment. First, with our model parameters, the continuum model calculation shows a nontrivial Chern number for the topmost moiré band. This is consistent with results from Ref. 15. We add the discussion of the global topology and the Berry curvature distribution in the revised version in the method section, as required by the reviewer. In this scenario, the weak-coupling ground state, the intervalley exciton state we proposed, will be different from the ground state in the strongly coupled regime, which is most likely a valley-polarized state. The evolution between the two states will be an interesting problem to study in the future. We are of course aware that the band topology plays an important role when the system goes into the flat-band limit.
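Chern numbers of the kind quoted here are commonly evaluated with the gauge-invariant Fukui-Hatsugai-Suzuki lattice method. As a self-contained illustration we apply it to the two-band Qi-Wu-Zhang model, a hypothetical stand-in rather than the authors' continuum Hamiltonian:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_qwz(kx, ky, m):
    """Qi-Wu-Zhang two-band model; a stand-in for a single gapped band."""
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

def chern(m, N=24):
    """Fukui-Hatsugai-Suzuki lattice Chern number of the lower band."""
    ks = 2 * np.pi * np.arange(N) / N
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(h_qwz(kx, ky, m))
            u[i, j] = v[:, 0]                 # lower-band eigenvector
    F = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            # gauge-invariant Berry flux through one plaquette
            loop = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            F += np.angle(loop)
    return round(F / (2 * np.pi))

C_topo, C_triv = chern(1.0), chern(3.0)
```

Here chern(1.0) yields a band with |C| = 1 and chern(3.0) a trivial band; the same plaquette construction applies unchanged to eigenvectors of any continuum-model Hamiltonian sampled on a k-grid.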
As the reviewer nicely points out, in the flat-band limit the intervalley exciton order might be suppressed due to the winding of the order parameter in momentum space as a result of the non-trivial Chern number. We totally agree that in the flat-band or strong coupling limit, the most natural ground state at half-filling is a valley-polarized state. This state will have distinct properties that are somewhat easy to spot in experiments. For example, as shown in the ABC trilayer graphene and TBG systems, valley-polarized states with non-trivial Chern number will have a spontaneous anomalous Hall effect with quantized Hall conductance as well as magnetic hysteresis, none of which are observed in the transport experiments in the current system. In addition, while a valley-polarized state would be stabilized by an out-of-plane magnetic field, the experiment shows that the correlated insulator is destroyed by increasing magnetic field. Therefore, the experiment clearly speaks for a state that is different from the one obtained in the strongly coupled regime; this is another motivation for us to adopt the weak coupling approach. In the weak coupling limit, as the reviewer kindly noted, the most relevant factor in determining the ground state order is the information about the Fermi surface and van Hove singularities, which is the focus of our paper. We add a new subsection, section IV B "Band topology and Berry curvature", which discusses in detail the global and local topology of our system. In particular, we plot the Chern number of the topmost moiré band as a function of twist angle and displacement field based on the continuum model, and the distribution of the Berry curvature in the moiré Brillouin zone (as a function of displacement field). We observe that most of the Berry curvature is concentrated near the Γ point of the Brillouin zone. This is expected because the smallest gap between the topmost and second moiré bands is located near Γ.
Near the half-filling Fermi surface and the van Hove singularities there is essentially zero Berry curvature in our continuum model. Therefore, the weak-coupling result will not be sensitive to the topological properties of the band. We add the following sentences on page 2, paragraph 2: "In the relevant parameter regime, the moiré band has non-zero Chern number. Details of the topological properties of the moiré band are discussed in section IV B. While playing an important role in the flat-band limit [44][45][46], the topological property does not appear to play a significant role in the weak coupling scenario considered here." We also add the following sentences on page 3 to clarify the distinction between the weak- and strong-coupling states: "We note that in the strong-coupling flat band limit, recent works [45,46] have proposed a valley-polarized state that is competitive in energy and distinct from the intervalley exciton condensate proposed here." Finally, we would be remiss not to mention some later references on the same platform, for example Phys. Rev. Research 2, 033087, which apply a different set of parameters in the continuum model and suggest that the relevant moiré bands have trivial Chern number. In such a scenario with a topologically trivial moiré band, the weak-coupling ground state would be smoothly connected to the strongly coupled Mott insulator without a phase transition. We do think our parameters are more appropriate because the band structure obtained in our model is much closer to the band structure from the DFT calculation in Ref. 1. However, the physics of (higher-order) van Hove singularities is universally observed regardless of which set of parameters is used. Therefore, the prediction of the half-filling intervalley excitonic state is robust. 3. The analysis of the authors relies on the existence of van Hove singularities close to half-filling which was derived based on the non-interacting band structure.
However, this band structure is likely to receive interaction renormalization effects coming from the remote bands, which is known to be an important effect in graphene-based moiré systems (see 1907.11723, 1812.04213, 1911.02045). The authors should justify why using the van Hove singularities arising from the non-interacting band structure is expected to yield quantitatively correct results. We are grateful to the reviewer for raising such an excellent point. Indeed, as the reviewer pointed out, for TBG the remote bands have important effects in renormalizing the dispersions of the moiré bands near charge neutrality. First of all, the remote bands (from both conduction and valence bands) are close in energy to the relevant moiré bands. The energy separations between the remote bands and the bands near charge neutrality are typically smaller than or on the same order as the interaction scale in TBG. Second, the interaction strength in TBG is mostly larger than the bandwidth. Therefore, when partially filling the moiré bands away from charge neutrality, the flat-band dispersion is strongly modified by the Hartree correction from charge modulation. For these reasons, it is a poor approximation in TBG to simply consider the middle bands and ignore the remote bands. We are aware of the renormalization effects of the remote bands in TBG. However, we expect the renormalization in the tWSe2 case to be much smaller than in the graphene-based moiré systems for the following reasons. First, monolayer TMD has a large energy gap (1-2 eV) between the conduction and valence bands, much larger than the interaction strength on the moiré scale; therefore the renormalization from the conduction bands can be safely ignored. Second, the lower moiré valence bands could in principle generate considerable renormalization of the topmost moiré band, which is an important problem we plan to study in the future. However, the situation here is different from TBG.
The moiré bandwidth in tWSe2 is large, up to ~100 meV for a twist angle of 5.1 degrees, while the interaction energy is ~35 meV. Thus, we expect the renormalization of the band dispersion near half-filling to be small. In summary, although we understand that taking the non-interacting dispersion from the continuum model as a starting point is an approximation, it is a good one at least in the case of tWSe2 at large twist angle. Assuming the remote bands are properly taken care of, i.e. we obtain some dispersion relation for the relevant moiré bands after projecting out the remote bands, there may be a separate issue that one could worry about. Within the band we care about, interactions will also lead to renormalization of the quasiparticle dispersion. The question is whether one should take the renormalized or the non-renormalized band structure as the starting point to consider various ordering instabilities. For this, we argue the latter is correct. The former choice would face the problem of double counting the effect of interactions. The modification of the quasiparticle dispersion should be a consequence of the interaction effect and not treated as a starting point. Is there any symmetry reason to expect the form of the dispersion in Eq. 1 to survive in the presence of interaction renormalization effects? Or are the authors arguing that there exists some displacement field for which a van Hove singularity emerges regardless of the band structure details? Indeed, the dispersion in Eq. 1 is the small-momentum expansion near K up to third order in k respecting the C3 rotational symmetry. In other words, Eq. 1 is the most general form even when interaction renormalizations from the remote bands are included. In this dispersion, the parameter \alpha (or the ratio between the coefficient of the k^2 term and that of the k^3 term) is a smooth function of the displacement field by continuity considerations.
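The role of \alpha can be made concrete with a small numerical sketch. Below we take an illustrative C3-symmetric dispersion \alpha k^2 + \beta k^3 cos(3\theta) (the coefficients are made up for illustration, not the fitted values of Eq. 1) and estimate the density of states near E=0 by counting grid states in a narrow energy window; when \alpha vanishes, k=0 becomes a monkey saddle and the divergence strengthens from logarithmic to a power law.

```python
import numpy as np

def dispersion(kx, ky, alpha, beta=1.0):
    """Illustrative C3-symmetric expansion near K: alpha*k^2 + beta*k^3*cos(3*theta).
    Coefficients are illustrative, not the paper's fitted values."""
    k2 = kx**2 + ky**2
    warp = kx**3 - 3.0 * kx * ky**2   # = k^3 cos(3*theta), the C3-invariant cubic
    return alpha * k2 + beta * warp

def dos_at_zero(alpha, n=400, kmax=1.0, eps=0.005):
    """Crude density of states at E=0: fraction of grid states with |E| < eps."""
    ks = np.linspace(-kmax, kmax, n)
    kx, ky = np.meshgrid(ks, ks)
    e = dispersion(kx, ky, alpha)
    return np.mean(np.abs(e) < eps) / (2.0 * eps)

# The E=0 weight grows sharply as alpha -> 0 (higher-order van Hove singularity):
ordinary = dos_at_zero(1.0)
higher_order = dos_at_zero(0.0)
```

Sweeping \alpha through zero in such a scan reproduces the qualitative statement above: the ordinary saddle gives a modest peak, while the \alpha = 0 monkey saddle gives a much stronger one.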
As \alpha crosses zero, our system goes through a higher-order van Hove singularity with a stronger density-of-states divergence. 4. The authors argue that out-of-plane magnetic field does not act as pair breaking due to Zeeman effect. However, out-of-plane field is also generally expected to couple to the orbital degree of freedom, leading to a "valley" Zeeman effect which can have a much larger g-factor (see 1908.05110). The authors should comment on whether this effect will lead to pair breaking. We thank the reviewer for pointing this out. Perhaps our wording here was a bit confusing. In the manuscript, when referring to the Zeeman splitting, we have already taken into account the valley effect and used the renormalized g-factor. Pair-breaking versus non-pair-breaking refers to the form of the term in the basis obtained after a particle-hole transformation of one valley (the basis in which one can make the analogy to a BCS superconductor). As we list in Table I, the chemical potential term in the original basis maps precisely to the Zeeman field in the BCS language, and hence has a pair-breaking effect. On the contrary, the Zeeman splitting (including the renormalization) maps to the chemical potential in the BCS basis, and hence is non-pair-breaking. Indeed, in our calculation we observe that the intervalley exciton order is more resilient to the Zeeman splitting than to the chemical potential. To clarify, we add the following sentence on page 3 under Eq. (5): "Due to the orbital angular momentum in TMD systems, the holes in the valence band have a large renormalized $g$-factor\cite{TMDgfactor}. Therefore, the primary effect of a weak magnetic field is Zeeman spin/valley splitting. In contrast to α and μ, the Zeeman coupling maps to the chemical potential in a superconductor and its effect is non-pair-breaking." (P.S. 1908.05110 seems to be a reference on algebraic geometry; maybe it is mistyped. We would be happy to cite it if the reviewer provides the correct reference.
We did cite a reference in our manuscript, Ref. [49], for the large renormalized g-factor including the valley effects.) We thank the reviewer again for these insightful comments. They help us clarify and evaluate the assumptions in our approach and put our manuscript on firmer ground. We hope the manuscript is now appropriate for publication in Nature Communications. Reviewer #3 (Remarks to the Author): The authors analyze the properties of AA-stacked WSe_2 bilayer moiré crystals with a half-filled hole band. The work was motivated by a recent experiment that reported strong insulating behavior over a surprisingly narrow range of external displacement field. They interpret this finding as being due to the formation of an excitonic density wave state, arguing that this scenario is more likely since the bands are relatively broad, and predict that it will be possible to perform an all-optical experiment that demonstrates spin superfluidity, which is an expected property of these states. The density wave is stabilized by a nesting condition that is satisfied over a narrow range of displacement fields. I recommend publication of this manuscript, which makes an interesting and testable prediction. The arguments advanced in favor of this prediction are plausible and carefully discussed. I have some suggestions that the authors might consider. We are truly grateful for the wonderful summary of our work and the recommendation of publication from the reviewer. We address the reviewer's comments point by point in the following. i) The second-to-last sentence in the abstract is awkward and needs to be rewritten. We thank the referee for the valuable suggestion. Indeed, the original sentence was confusing. We have revised it as follows, hoping it makes things clear. The revised sentence is "Our theory explains the remarkable sensitivity of the insulating gap to the vertical electric field.
In contrast, our theory shows that the insulating gap is reduced only mildly by a perpendicular magnetic field, with a quadratic dependence at low field. This physics can be understood in terms of pair-breaking versus non-pair-breaking effects in a BCS analog of the system." ii) The authors refer to the state they propose as an excitonic density wave. It may be that this terminology is perfectly standard, but it seems to me that it requires a few comments. I think that they are saying that their state is a spin and charge density wave state of the type that can be viewed as an exciton condensate. Is there at least a reference that could be cited where the meaning of this state name is carefully defined? In any event, because it is a density wave, collective transport is pinned. The authors do account for relaxation between valleys when they consider transport, but the flip side of this is coupling between the valleys in the ground state. I believe that there are strong arguments related to momentum conservation that these effects are weak, but perhaps this should be discussed explicitly. We thank the reviewer for this excellent suggestion. In our definition, the exciton density wave refers to an exciton condensate that occurs at finite momentum. This phenomenon has indeed been discussed in the literature before. The earliest work we can find is Phys. Rev. Lett. 67, 895, which discussed the possible appearance of an exciton density wave state in a double quantum well in a strong magnetic field (we now add this to the reference list as Ref. [49]). The order parameter in our system is very similar to the one in this earlier work upon identifying the valley degree of freedom with the layer index. The TMD materials that can exhibit moiré physics are of very high quality. There could be some distortion of the moiré superlattices, but the disorder at atomic scales is small. The order parameter is the pairing between particles and holes from different valleys, which are separated by a large momentum.
Therefore, the exciton density wave will have oscillations at atomic scales. Since the disorder on such scales is small, we expect the pinning effect to be weak. This is equivalent to the statement that valley conservation is a good approximate symmetry of the system.
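Schematically (in our own notation, not the manuscript's), the atomic-scale oscillation follows directly from the momentum mismatch of the paired states:

```latex
\Delta(\mathbf{r}) \;=\; \big\langle \psi^{\dagger}_{+K}(\mathbf{r})\,\psi_{-K}(\mathbf{r}) \big\rangle
\;\simeq\; \Delta_0\, e^{\,i(\mathbf{K}_{-}-\mathbf{K}_{+})\cdot\mathbf{r}},
\qquad |\mathbf{K}_{-}-\mathbf{K}_{+}| \sim \frac{1}{a},
```

where K_+ and K_- are the two valley momenta and a is the monolayer lattice constant. The induced modulation therefore lives at the atomic scale, far shorter than the moiré period, which is why only atomic-scale disorder can pin it efficiently.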
In situ simulation improves perceived self-efficacy of OR nurses and anaesthesiologists during COVID-19 pandemic Introduction Self-efficacy is defined as people's internal beliefs about their ability to have an impact on events that affect their lives. As part of the COVID-19 pandemic response, we carried out in situ simulation for anaesthesiologists and operating room (OR) nurses. The simulation focused on the recommendations on the use of specific personal protective equipment (PPE) as well as on airway management and intubation. We hypothesised that in situ procedural simulation would increase their perceived self-efficacy. Methods Between 16 March and 20 March 2020, 208 healthcare workers took part in in situ procedural simulation. A questionnaire was sent to participants on 21 April 2020. Six self-efficacy items related to PPE and airway manoeuvres were assessed before and after training on a Numeric Rating Scale from 0 to 10. Results Sixty-seven participants (32%) replied to the questionnaire. The before–after comparison of the six items revealed an increase in perceived self-efficacy for each of them. A before-training difference in perceived self-efficacy was observed between nurses, board-certified anaesthetists and trainees in anaesthesia for putting on (6 (3–8) vs 4.5 (2.25–6) vs 2 (0–6), p=0.007) and removing PPE (8 (5–8) vs 4.5 (3.25–6) vs 4 (1–6), p=0.009). No difference in perceived self-efficacy after training was observed between nurses, board-certified anaesthetists and trainees in anaesthesia. Conclusions In situ simulation improves the perceived self-efficacy of OR nurses and anaesthesiologists on specific skills related to the care of patients with COVID-19.
In situ simulation improves perceived self-efficacy of OR nurses and anaesthesiologists during COVID-19 pandemic INTRODUCTION COVID-19 is predominantly caused by contact or droplet transmission related to the dispersion of relatively large respiratory particles contaminated by SARS-CoV-2 that are subject to gravitational forces and travel only approximately 1 m from the patient. 1 Airborne transmission occurs if patient respiratory activity or medical procedures generate respiratory aerosols. [2][3][4] These aerosols contain particles that may travel much longer distances and remain airborne longer, but their infective potential is uncertain. Contact, droplet and airborne transmission are relevant during airway manoeuvres in infected patients, particularly during tracheal intubation. 5 Therefore, healthcare professionals have been recommended to use specific personal protective equipment (PPE) as well as to have recourse to specific airway management algorithms. Moreover, several authors have recommended specific training to facilitate the transfer of recommendations on the use of specific PPE as well as on airway management and intubation 6 into practical skills and thereby to improve healthcare professional safety. 7 The need for such training quickly became apparent in our institution for operating room (OR) healthcare professionals having to take care of SARS-CoV-2-positive patients in the context of surgical and obstetrical emergencies, but also as part of the mobilisation of OR professionals to strengthen emergency and intensive care units. We chose to teach these specific guidelines with in situ simulation, which has been recognised as an educational strategy that might help change system-based risk factors and improve safety, including during the COVID-19 pandemic.
8 9 We hypothesised that the use of two in situ simulation-based workshops would increase the perceived self-efficacy of team members regarding PPE and airway-management-specific procedures, in addition to facilitating their transfer, in order to improve safety when taking care of patients infected with SARS-CoV-2. What is already known on this subject ► The perception of being in control (self-efficacy), rather than the reality of being in or out of control, is a buffer of negative stress. ► Working under extreme stress may cause healthcare professionals to deviate from clinical guidelines. ► In situ simulation has been recognised as an educational strategy that might help change system-based risk factors and improve safety. What this study adds ► In situ procedural simulation improves the perceived self-efficacy of operating room nurses and anaesthesiologists on specific skills related to the care of patients with COVID-19. ► Self-efficacy is positively related with the level of confidence of these healthcare professionals when taking care of COVID-19 infected patients but only partially reduces stress. Self-efficacy is defined as people's internal beliefs about their ability to have an impact on events that affect their lives. 10 Moreover, the perception of being in control, rather than the reality of being in or out of control, is a buffer of negative stress. This stress reduction should not be overlooked, as a recent review 11 of the mental health consequences of COVID-19 shows an increase of depression or depressive symptoms and anxiety, poor quality of sleep and higher levels of symptoms of obsessive-compulsive disorder in healthcare professionals. Furthermore, working under extreme stress may cause healthcare professionals to deviate from clinical guidelines.
The main objective of this work is to assess the effect of in situ simulation on the self-efficacy of nurses and anaesthetists in relation to the use of PPE and airway procedures in the OR in the context of the COVID-19 pandemic. The secondary aims are to assess the perceived value of this in situ simulation training for clinical practice and as compared with other learning methods, and to assess the self-confidence of OR nurses and anaesthetists when taking care of patients with COVID-19. MATERIAL AND METHODS Study design Retrospective before–after study by questionnaire. A waiver of written consent was granted. Study population Procedural simulation sessions were proposed to all OR nurses and members of the anaesthesia department, staff and trainees, of the CHU de Liège. The sessions were set up from 16 March to 20 March 2020. For safety reasons, team members presenting COVID-19-compatible symptoms were excluded. Intervention: simulation sessions The sessions took place in an unoccupied OR during working hours. The simulation sessions consisted of two 20 min workshops. Participants were assigned to groups of maximum four individuals in order to respect physical distancing. These sessions were conducted by two board-certified anaesthetists who are also validated simulation instructors. The first workshop focused on the procedure to put on and remove PPE. Each participant completed the procedure in real working conditions, that is, in the usual environment and with team help. Due to the limited availability of disposable material, class 2 filtering facepiece (FFP2) masks were replaced for the simulation sessions by coffee filters fitted with elastic bands placed over surgical masks, and the disposable protective gown by a reusable cloth gown washed after each use. The second workshop focused on the specifics of OR intubation manoeuvres according to the guidelines of the Société Française d'Anesthésie-Réanimation, 5 as well as on tracheal aspiration and extubation procedures.
For this workshop, three participants from the group performed the manoeuvres with the instructor, dividing up the roles and positions in the OR workspace according to the guidelines: the most experienced anaesthetist at the head of the simulated patient, an assistant (nurse) on the left for the management of the intubation material, that is, video laryngoscope, tracheal tube and clamps, and an assistant (anaesthesiologist or nurse) on the right for the management of the narcosis, the anaesthesia machine (eg, gas flow pause) and the monitoring (eg, use of curarisation monitoring). To provide a realistic simulation of managing intubation, an intubation head was used. If present, the remaining participant observed the simulation session. Data collection An e-link to an anonymous questionnaire was sent to the 208 participants via their professional external email address on 21 April 2020. A weekly reminder was sent for the following 3 weeks. In addition to participant characteristics (gender, age, profession and years of expertise in the OR), the questionnaire assessed several items regarding perceived self-efficacy, the usefulness of the simulation session and the interest of the simulation tool. The items are listed below. A Numeric Rating Scale between 0 and 10 was used to assess the degree of agreement. Perceived self-efficacy items were assessed before and after training. These assertions were: ► 'I feel competent to put on PPE to take care of a COVID-19 patient in the OR'. ► 'I feel competent to check that an FFP2 mask is correctly placed'. ► 'I feel competent to remove PPE without risk of contamination'. ► 'I feel competent to perform an induction and intubation sequence while minimizing the risk of contamination'. ► 'I feel competent to perform a tracheal suction while minimizing the risk of contamination'. ► 'I feel competent to perform an extubation sequence while minimizing the risk of contamination'.
The perception of the usefulness of the simulation session was explored using the following questions: ► 'In general, I found the simulation session on putting on and removing PPE useful for my clinical practice when managing COVID-19 patients'. ► 'In general, I found the simulation session on intubation and extubation useful for my clinical practice when managing COVID-19 patients'. ► 'What I learned during this simulation session modified my clinical practice, including for non-suspected patients'. ► 'What I learned during the simulation session on putting on and removing PPE helped reduce my stress about taking care of COVID-19 patients'. ► 'What I learned during the simulation session on intubation and extubation allowed me to reduce my stress about taking care of COVID-19 patients'. The interest of the simulation tool as compared with other education tools was assessed using the relative compliance with the following statements: ► 'Simulation was more useful compared to a written document'. ► 'Simulation was more useful compared to a video'. ► 'I appreciated that the simulation was performed in teams rather than by profession (nurses with nurses, doctors with doctors)'. ► 'I would have preferred the simulation workshop to be carried out by profession'. ► 'I appreciated that the simulation was carried out in the OR rather than in a training room (eg, simulation center)'. ► 'I would have preferred the simulation workshop being carried out in a training room (eg, simulation center)'. Finally, the questionnaire explored the number of COVID-19-positive patients treated after the simulation session and the level of confidence in relation to the specific skills in this situation (putting on PPE, removing PPE, manipulation of the airways…) during the management of the first patient. Statistical analysis Results were expressed as proportions (percentage), mean±SD or median value (IQR) as specified.
According to a Shapiro-Wilk normality test, parametric data between groups were compared by unpaired Student's t-test and non-parametric data by Mann-Whitney rank-sum test. Categorical data were compared using the χ² test and Fisher's exact test with a two-tailed probability. Paired data were compared using the paired Student's t-test. A multivariate logistic regression model (backward stepwise) was used to determine independent risk factors for the level of confidence in relation to the specific skills in this situation (putting on PPE, removing PPE, manoeuvres on airways,…) when taking care of the first patient, keeping in the equation the variables found relevant in the univariate analysis. A p value less than 0.05 was considered significant. Statistical analysis was performed with JMP V.14.2 (14.0) (SAS Institute Inc). RESULTS Participants characteristics On 31 May 2020, 67 participants had replied to the questionnaire, for a response rate of 32%. Among the respondents, there were 16 board-certified anaesthetists out of 26 participants (61%), 19 trainees in anaesthesia out of 37 participants (51%) and 32 nurses out of 145 participants (22%) (table 1). Perceived self-efficacy The global perceived self-efficacy before and after training is shown in table 2. The comparison, which demonstrates a significant increase in perceived self-efficacy for each of the items, is also shown in table 2. The perceived self-efficacy assessment by profession is shown in table 3. A significant difference in perceived self-efficacy before training was observed between nurses and anaesthetists (board certified and trainees) in putting on PPE (p=0.007) and removing PPE without risk of contamination (p=0.009). A difference was also observed between nurses and trainees in anaesthesia regarding extubation (p=0.05). No other difference was observed on the before-training items between the nurses and the doctors.
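As a concrete illustration of the paired before–after comparison used in the statistical analysis, a paired Student's t-test can be computed from first principles; the scores below are synthetic numbers invented for illustration, not the study data.

```python
import math

def paired_t(before, after):
    """Paired Student's t-test: returns the t statistic and degrees of freedom."""
    assert len(before) == len(after)
    d = [a - b for a, b in zip(after, before)]   # per-participant differences
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance of differences
    t = mean / math.sqrt(var / n)
    return t, n - 1

# Synthetic self-efficacy scores on a 0-10 Numeric Rating Scale (NOT the study's data)
before = [4, 5, 3, 6, 2, 5, 4, 3, 6, 5]
after = [8, 9, 7, 9, 8, 8, 9, 7, 9, 8]
t_stat, dof = paired_t(before, after)
```

The p value is then obtained from the t distribution with `dof` degrees of freedom; in practice a statistics package (here, JMP) handles that lookup.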
No significant difference in perceived self-efficacy after training was observed between the nurses and board-certified or trainee anaesthetists for the different items. A smaller increase in perceived self-efficacy was observed among nurses concerning putting on PPE, with an average difference of 2.21±0.47 (95% CI 1.25 to 3.18) compared with 4.69±0.47 (95% CI 3.73 to 5.64) among doctors (p=0.0005). A smaller increase in perceived self-efficacy was also observed concerning removing PPE, between nurses with a difference of 1.91±0.47 (95% CI 0.94 to 2.87) and doctors with a difference of 3.82±0.49 (95% CI 2.84 to 4.81) (p=0.006). Perception of the usefulness of the simulation session The participants agreed strongly (9 (8–10)) that the simulation session on putting on and removing PPE was useful for their clinical practice when managing patients with COVID-19. Trainees in anaesthesia, however, found this part of the training less useful than board-certified anaesthetists, with a score of 8 (8–10) versus 9.5 (9–10) (p=0.02). Participants also agreed at 9 (8–10) that the simulation session on airway manoeuvres was useful for their clinical practice when managing patients with COVID-19. Participants agreed at 8 (7–10) that what they learned during the simulation session changed their clinical practice, including for non-suspect patients. Regarding their stress in relation to the care of patients with COVID-19, the participants moderately agreed that the putting on and removing PPE simulation session reduced their stress, with a median of 7 (5–9). The feeling was similar for the intubation and extubation session, with a median of 7 (6–9). For participants who managed COVID-19-positive patients, the level of confidence in their specific skills was 7 (6–8) when the first patient was managed.
A simple regression analysis showed a significant link between the level of confidence of the participants when taking care of the first patient and the perceived self-efficacy at the end of the training for each of the items analysed, namely putting on PPE (p=0.0005), checking the correct positioning of the FFP2 mask (p=0.003), removing PPE without risk of contamination (p<0.0001), performing an induction and intubation sequence (p<0.0001), performing tracheal aspiration (p<0.0001) and performing extubation (p<0.0001) while limiting the risk of contamination. In the multivariate analysis, only the perceived self-efficacy to perform an extubation while minimising the risks of contamination was found to be a significant predictor of the level of confidence (p<0.0001). Interest of the simulation tool as compared with other education tools Participants strongly agreed that the simulation had an advantage compared with a written document, with a score of 10 (8–10). They also believed that the simulation was more useful than a video, with a score of 8 (7–10). Participants appreciated that the simulation was carried out in teams rather than by profession, with a strong degree of agreement at 9 (8–10). In comparison, the score was 2 (0–5) for a workshop carried out by profession. Participants also appreciated that the simulation was performed in the OR rather than in a simulation centre, with a score of 9 (8–10). In comparison, the level of agreement with a simulation workshop carried out in a simulation centre was low, with a score of 1 (0–4). Training and patient care Sixty-two respondents (92%) had cared for COVID-19-positive patients between the training and the survey. For half of the respondents, the number of patients treated was less than five. For the first patient, they estimated their level of confidence in the procedures at 7 (6–8).
DISCUSSION The salient result of this study is that a session of in situ team procedural simulation improves the perceived self-efficacy of OR healthcare professionals regarding OR-specific procedures for patients with COVID-19, namely the use of PPE and adapted airway manoeuvres. In addition, the participants found the learning from these simulation sessions very useful for their clinical practice. Lastly, in situ simulation was favoured over learning based on a written document or video, and team, in situ sessions were preferred to sessions organised by profession or in a dedicated simulation centre. The training increased the perceived self-efficacy of all participants, whatever their profession. Nevertheless, the increase in self-efficacy was significantly lower for the items related to putting on and removing PPE among nurses compared with doctors. This difference can be explained by the scarcity of basic hospital hygiene courses, in particular on PPE, in initial medical education as compared with nursing education. This is consistent with the higher level of self-efficacy on these items before training among nurses compared with doctors. The lower perceived self-efficacy of physicians before training, despite their having received written recommendations, could also be explained by the following two factors. The first factor is the degree of uncertainty regarding the situation and the management of patients at the beginning of the crisis, reinforced by the questions left unresolved after reading the written information received. 12 This could also explain the trend towards a higher level of self-efficacy among nurses, who did not receive the written documents before the training. A second factor is the nature of the tool: a written document or a video, compared with simulation, which develops experiential learning.
Indeed, experiential learning appears to be particularly effective in contexts in which complex information must be processed and in which deeply ingrained behavioural attitudes are challenged, 13 which was the case at the start of the crisis with regard to airway management in the OR. Practice in the in situ workshops might also have favourably influenced practices for other patients. Indeed, it has been shown that participants in in situ simulations provide more ideas for changes. 14 Regarding the difference in perception between board-certified anaesthetists and trainees on the usefulness of these simulations for putting on and removing PPE, the role of each intervener in the care of patients should be taken into account. Indeed, the management of the airways was attributed to the most experienced anaesthesiologist. Board-certified anaesthetists, generally older, might also have perceived the usefulness of the PPE procedures bearing in mind that the personal risk of serious illness increases with age. Behavioural differences have already been observed, for example, among older healthcare workers, who are more likely to be vaccinated against seasonal influenza. 15 The moderate effect of the training on stress reduction when managing patients should be seen in the light of the other constraints imposed on the OR by the epidemic, such as the limited availability of disposable equipment, the organisation of the team, the numerous pathophysiological uncertainties concerning the disease and the rapid evolution of disease knowledge. Finally, it is interesting to note that the level of confidence when taking care of actual patients with COVID-19 is associated with the level of perceived self-efficacy at the end of the training.
Likewise, the only factor predicting the level of confidence in patient care is the perceived self-efficacy for the extubation phase of COVID-19 positive patients. This is remarkable since no written recommendation had covered extubation of patients in the OR. These two points demonstrate the value of in situ procedural simulation training in the context of an emerging health crisis. This study has several limitations. First, the overall response rate 16 was modest: 32%. Although this is an adequate rate for the methodology used, we cannot assume that the sample is representative of the population. Indeed, while the medical population is well represented, with more than 50% of responses, the response rate of nurses is quite low. This low rate is probably explained by the survey methodology chosen, via the professional external email address. Indeed, the institution offers an internal email address and an external address. Nurses consult the external email address less often than physicians. In addition, the link to the survey was blocked on some institutional computers by a firewall. Conversely, doctors more often consult their professional address via personal devices, which are prohibited for nurses within the institution during working hours. Second, the survey was retrospective. Its reliance on memory may have influenced the responses on perceived self-efficacy. However, as the training was decided and developed in less than 24 hours due to the urgency of the situation, we did not have time to conduct the survey in a prospective manner. Moreover, in this period of crisis, knowledge and protocols evolved extremely rapidly. A discrepancy may have existed between the learning made during the training and the practices in force at the time of the survey. Finally, the responses spanned a little over a month and were collected after the peak of the epidemic, which may have influenced the participants' perceptions.
Our results relate only to anaesthesiologists and nurses in the OR at the start of the COVID-19 pandemic in Europe, limiting their extrapolation to other target audiences; we cannot know whether the training would have had the same impact on the self-efficacy of, for example, anaesthetists working in an intensive care unit. Likewise, we have not evaluated the impact of the context of an emerging pandemic on the effectiveness of the simulation training itself. CONCLUSIONS In situ procedural simulation has the potential to improve the perceived self-efficacy of OR nurses and anaesthesiologists on specific skills related to the care of patients with COVID-19. The perceived self-efficacy is positively related to the level of confidence of these healthcare professionals when taking care of COVID-19 infected patients in the current course of this pandemic, but only partially reduces stress. Contributors FL: designed the study, performed the simulations, interpreted the results and wrote the manuscript. CH and NSS: performed the simulations, interpreted the results and wrote the manuscript. AG: interpreted the results and wrote the manuscript. JFB: designed the study, interpreted the results and wrote the manuscript. Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Implicit Learning of Stimulus Regularities Increases Cognitive Control In this study we aim to examine how the implicit learning of statistical regularities of successive stimuli affects the ability to exert cognitive control. In three experiments, sequences of flanker stimuli were segregated into pairs, with the second stimulus contingent on the first. Response times were reliably faster for the second stimulus if its congruence tended to match the congruence of the preceding stimulus, even though most participants were not explicitly aware of the statistical regularities (Experiment 1). In contrast, performance was not enhanced if the congruence of the second stimulus tended to mismatch the congruence of the first stimulus (Experiment 2). The lack of improvement appears to result from a failure to learn mismatch contingencies (Experiment 3). The results suggest that implicit learning of inter-stimulus relationships can facilitate cognitive control. Introduction The Eriksen flanker task requires the identification of a central target in the presence of surrounding distractors [1]. Arrowheads are typically used, yielding stimuli like these: v v v v v (correct answer: ''left''). w w w w w (correct answer: ''right''). v v w v v (correct answer: ''right''). w w v w w (correct answer: ''left''). The first two stimuli in (1) are termed congruent, the last two incongruent. We will call two successive stimuli in the flanker task concordant if they are matched for congruence, that is, either each is drawn from the top two rows of (1), or each is drawn from the bottom two rows. Two successive stimuli are discordant if they are not concordant, that is, one is drawn from the top two rows of (1) and one from the bottom. Thus, congruence and incongruence are properties of individual stimuli whereas concordance and discordance are properties of pairs. Note that the two members of a concordant pair may or may not require the same answer, and likewise for a discordant pair.
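The definitions above can be captured in a few lines. This is an illustrative sketch, not code from the study; the string encoding of stimuli (space-separated arrowheads, as in the examples) is our own convention:

```python
def congruent(stimulus):
    """True iff all five arrowheads point the same way, e.g. 'v v v v v'."""
    return len(set(stimulus.split())) == 1


def concordant(first, second):
    """True iff the two successive stimuli match in congruence;
    a pair that is not concordant is discordant."""
    return congruent(first) == congruent(second)
```

Note that, as stated in the text, 'v v v v v' followed by 'w w w w w' is concordant even though the two correct answers differ.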
It is well documented that response times (RTs) are lower for congruent compared to incongruent stimuli [1,2]. It has also been found that RTs are lower for the second stimulus of concordant pairs compared to the second stimulus of discordant pairs (the Gratton effect, [2]). One possible mechanism for the latter phenomenon is that congruent stimuli increase attention to surrounding flankers in the subsequent stimulus, thereby offering a more extended visual target in case of congruence but increasing interference in case of incongruence. Likewise, incongruent stimuli would draw attention away from flankers, thereby slowing the response to a following congruent stimulus but limiting interference in case of incongruence. Concordance would thus enhance performance in both situations, compared to discordance. In another version of the experiment [2,3], an explicit cue signaled the congruence/incongruence of subsequent stimuli. RTs were lower when cues predicted congruent stimuli, but no difference in RT was observed for cues predicting incongruent stimuli. Recent evidence suggests that the Gratton effect hinges on concordant pairs with the same correct answer, that is, on successive stimuli that are identical. RT appears not to decrease for the second member of a concordant pair that requires a different answer than the first [4][5][6]. The Gratton effect may thus reflect mere repetition priming rather than priming for the more abstract property of stimulus congruence or incongruence. Perhaps a more robust Gratton effect can be achieved through implicitly learning the statistical structure of successive flanker stimuli, instead of relying on explicit cuing. This speculation is motivated by findings on preparatory control in task switching. In a predictable alternating-runs paradigm, participants are able to learn to prepare for the upcoming stimulus and reduce switch cost (see, for example, [7], and [8] for a review).
In the first two experiments reported below, statistical regularities are implicitly embedded in the stimuli. Specifically, the congruency of the second member of a pair is contingent on that of the first, unbeknownst to the participants. Importantly, the response required by either stimulus in the pair was randomly determined, which ensures that any observed effect is not due to repetition priming, but rather driven by learning of the abstract property of stimulus concordance or discordance. The goal of the current study is to examine how the implicit learning of regularities between successive stimuli influences the control of attention. In all experiments, two flanker stimuli are presented as a pair on every trial. Unbeknownst to the participants, the congruency of the second stimulus is predictable from that of the first. In Experiment 1, the congruency of the second stimulus tends to remain the same as for the first (e.g., a congruent trial tends to follow another congruent trial). In Experiment 2, the congruency of the second stimulus tends to differ (e.g., a congruent trial tends to follow an incongruent trial). In all experiments, the overall percentage of congruent and incongruent trials is roughly the same (i.e., 50%). We hypothesize that the implicit learning of congruency relationships between two stimuli will lead to faster responses to the second stimulus of a pair. Methods Participants. Sixty adults from Mercer County, New Jersey, were tested individually in return for $5 compensation (39 female, mean age 22.5 yrs, SD = 2.8). All experiments reported here have been approved by the Princeton University Institutional Review Board. Written consent was obtained from every participant. Materials and Procedure. Each trial consisted of a pair of stimuli presented sequentially. Stimuli were as shown in (1), presented at fixation on a computer monitor, occupying approximately two degrees of visual angle.
The trial began with the sign ''Get Ready'' displayed at the center of the screen for 1 second, followed by a blank screen for 500 ms. The first stimulus in a pair was then presented at the center of the screen until the participant responded. After the response, a blank screen appeared for 500 ms, followed by the second stimulus, which was presented until the participant responded again. Finally, a blank screen appeared for 500 ms before the onset of the next trial. An example trial is shown in Figure 1. There were two conditions in the experiment, called concordant versus random; they were performed by separate groups of thirty participants each (uninformed of the condition they were in). Each condition was composed of 200 trials, where each trial consisted of a pair of stimuli, as described above. In the concordant condition, 80% of the trials consisted of concordant pairs, 20% discordant. Half of the concordant trials involved congruent pairs, and the other half incongruent. Thus, if the first stimulus in a pair was congruent, there was an 80% probability that the second was also congruent. Likewise, if the first stimulus was incongruent, there was an 80% probability that the second was also incongruent. Within these constraints, all stimuli were chosen randomly. The first two columns of Table 1 list all concordant pairs. Crucially, although the congruency of the pairs was manipulated, the central arrow direction was always randomly determined for every stimulus. This implies that participants could learn to predict the congruency of the second stimulus of a pair based on the first, but they could not learn to predict the specific arrow direction of the second stimulus. In the random condition, the concordance of every pair was randomly determined. In other words, the congruency of the first stimulus in a pair was not predictive of the congruency of the second stimulus.
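The stimulus-generation scheme just described (the second stimulus shares the first's congruency with probability 0.8 in the concordant condition, arrow directions always random) can be sketched as follows. Function names and the text encoding are our own, not the authors' code:

```python
import random


def make_stimulus(is_congruent):
    """Build a five-arrowhead flanker stimulus; the target (central arrow)
    direction is always chosen at random, as in the study."""
    target = random.choice(["v", "w"])  # v = left, w = right
    flank = target if is_congruent else ("w" if target == "v" else "v")
    return " ".join([flank, flank, target, flank, flank])


def make_trials(n_trials=200, p_match=0.8, condition="concordant"):
    """Generate (first, second) stimulus pairs for one session.

    In the concordant condition, the second stimulus shares the first's
    congruency with probability p_match; in the random condition the two
    congruencies are independent coin flips."""
    trials = []
    for _ in range(n_trials):
        c1 = random.random() < 0.5
        if condition == "concordant":
            c2 = c1 if random.random() < p_match else not c1
        else:
            c2 = random.random() < 0.5
        trials.append((make_stimulus(c1), make_stimulus(c2)))
    return trials
```

Because the central arrow is drawn independently of congruency, this scheme makes the congruency of the second stimulus (but never its specific response) predictable from the first.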
Just as for the concordant condition, in the random condition the central arrow direction was randomly determined for every stimulus in every pair. Participants performed five practice trials before starting the experiment. They were instructed to indicate the direction of the middle arrow by pressing the ''1'' key or the ''0'' key for left and right, respectively. Participants were required to respond as accurately and quickly as possible. Results and Discussion If participants in the concordant condition learned to exploit its congruency structure, then these participants would be better prepared for the second stimulus in a pair, compared to participants in the random condition. Since learning might require several trials, however, we examined performance only on trials 101–200 in each condition. Moreover, only a subset of these latter trials was included in the analysis. Specifically, in the concordant condition, we included only the 80% of trials that exhibited the same congruency for the two stimuli (both congruent or both incongruent). Likewise, in the random condition, only the concordant trials were selected (50% of trials). It was then possible to compare accuracy and RT between the two conditions with respect to the very same stimuli. For example, performance on the second stimulus of the concordant trial v v v v v followed by w w w w w was compared to performance on the second stimulus of the identical trial in the random condition. (Performance on the first stimulus in a pair was ignored.) Finally, for every trial, accuracy was calculated by checking whether the response matched the central arrow. For every participant, and separately for congruent and incongruent trials, RTs more than 2.5 standard deviations above the mean were excluded from the analysis. Only 2% of the trials were removed as a result. Average accuracy and RT for the second stimulus of concordant pairs in both the concordant and random conditions are presented in Table 1.
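The outlier rule described above (dropping RTs more than 2.5 standard deviations above the mean, separately per participant and congruency level) amounts to the following minimal stdlib sketch; it is our illustration, not the authors' analysis code:

```python
from statistics import mean, stdev


def trim_rts(rts, cutoff=2.5):
    """Drop RTs more than `cutoff` sample standard deviations above the
    mean of one participant's RTs at one congruency level."""
    if len(rts) < 2:
        return list(rts)  # too few observations to estimate a spread
    m, s = mean(rts), stdev(rts)
    return [rt for rt in rts if rt <= m + cutoff * s]
```

Applied per participant and per congruency level, a rule of this kind removes only a small fraction of trials (2% here), since only extreme slow responses exceed the threshold.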
Accuracy and RT were analyzed with a two-way mixed-design ANOVA (between-subjects factor: concordant vs. random condition; within-subjects factor: congruent vs. incongruent trials). For each trial type in Table 1, concordant and random conditions were compared via independent-sample t-tests (corrected for multiple comparisons). For every one of the 8 types of concordant trials, RT was reliably and consistently lower in the concordant condition than in the random condition. Despite the improvements in RT, only six of the 30 participants in the concordant condition were aware of the statistical relationship between the first and the second stimuli. The same pattern of results was observed when these six participants were excluded from the analysis. To further support the role of learning, we analyzed RT for the first 100 trials. Since the implicit learning process occurs over time, the effect in the first half of the experiment may not be as strong as that in the second half. Indeed, we found that the RT difference between the concordant and the random conditions was smaller [F(1,58) = 4.99, p = .03] in the first 100 trials. There was a reliable interaction between trials (first vs. second 100 trials) and condition [F(1,58) = 3.86, p = .04]. These results suggest that participants learned to exploit the partial predictability of congruence in the concordant condition, focusing attention adaptively for the second stimulus in a trial. The substantially lower RTs seen in Table 1 for congruent compared to incongruent trials may reflect the advantage accruing to spreading attention across multiple arrowheads with the same message. Experiment 2 Experiment 1 documents the ability to exploit concordance in sequential stimuli but leaves open the same question about discordance. In a discordant pair, if the first stimulus is congruent then the second is incongruent, and vice versa. Experiment 2 was isomorphic to the first except that it involved a discordant condition in place of the original concordant condition.
The discordant condition was composed of 200 trials, 80% of which were discordant pairs, 20% concordant. Half of the discordant trials involved a congruent stimulus followed by an incongruent one, and the reverse for the other half. As before, the central arrow direction was always randomly determined for every stimulus. For the random condition, the data from Experiment 1 were used again. Thirty new participants were recruited for Experiment 2, drawn from the same pool as before (21 female, mean age 23.8 yrs, SD = 4.1). Results and Discussion We compared performance on matching second stimuli in the discordant versus random conditions, taking into account just trials 101–200. As before, only a subset of the trials was included in the analysis. Specifically, in the discordant condition, we included only the 80% of trials that exhibited different congruency for the two stimuli. Likewise, in the random condition, only the discordant trials were selected (50% of trials). Trials with RTs beyond 2.5 standard deviations of the mean for every participant and for every congruency level were excluded. Only 3% of the trials were removed as a result. Accuracy and RT for the second stimulus of discordant pairs in the two conditions are presented in Table 2. As in Experiment 1, accuracy and RT were analyzed using a two-way mixed-design ANOVA (between-subjects factor: discordant vs. random condition; within-subjects factor: congruent vs. incongruent trials). Pair-wise comparisons revealed that for none of the eight types of discordant trials was RT reliably lower in the discordant compared to random condition; there were also no reliable differences in accuracy. Notice, however, that for all types of trials, the RTs were numerically lower in the discordant compared to random condition. Only four of the 30 participants in the discordant condition noticed the statistical relationship, and the results remained the same without these four participants.
The results show that there was no RT benefit for the second stimulus of discordant pairs. Comparison of the two experiments suggests that it is more difficult to learn discordant relationships than concordant ones. To verify this, we contrasted performance on trials in which the second stimulus was matched between concordant and discordant conditions. For example, a concordant trial w w w w w followed by v v v v v from Experiment 1 was matched with the discordant trial w w v w w followed by v v v v v from Experiment 2; performance on the second stimuli was then compared. As before, only trials 101–200 were used, and outliers were dropped (see Table 3). A two-way mixed-design ANOVA (between-subjects factor: concordant vs. discordant condition; within-subjects factor: congruent vs. incongruent trials) revealed reliably lower RTs for concordant trials, in which the congruence of the first stimulus matched that of the second, than for discordant trials, in which it mismatched the congruence of the second. There was also reliably greater accuracy for concordant trials in which the second stimulus was incongruent. Why did participants fail to show improved performance for discordant trials? Either they failed to learn the statistical relationship between the two stimuli in a pair, or they failed to exploit the relationship despite learning it. To clarify the matter, we performed a third experiment in which participants were explicitly told about the statistical relationships between the two stimuli in a pair. This allowed us to examine whether having learned the regularities in advance would lead to a reduction in RT for the second stimulus in a pair. Experiment 3 Experiment 3 was like Experiment 2 except that participants were explicitly informed that every pair was discordant, that is, congruence was (invariably) followed by incongruence and vice versa. In a pilot study, we used 100% probability without explicit instructions, and the results were similar to those in Experiment 2. To ensure that participants had indeed learned the regularities, we used explicit instructions here.
As before, half of the discordant trials involved a congruent stimulus followed by an incongruent stimulus, and the reverse for the other half. Since all trials were discordant, all trials were included in the analysis. Thirty new participants completed Experiment 3 (20 female, mean age 21.5 yrs, SD = 3.2). Results and Discussion To determine whether participants exploited discordance in the present procedure, we compared performance on the second stimuli with performance on matching stimuli in the random condition of Experiment 1. As before, only trials 101–200 were included and RT outliers were excluded (only 2% of the trials were removed as a result). The results are presented in Table 4. As before, accuracy and RT were analyzed with a two-way mixed-design ANOVA (between-subjects factor: discordant vs. random condition; within-subjects factor: congruent vs. incongruent trials). For RT, there was a main effect of condition. This suggests that learning the discordant structure via explicit instructions resulted in faster RTs, compared to implicit learning of discordance. It can be seen that RTs in the present experiment were reliably lower for most stimulus types compared to the random condition of Experiment 1. The enhanced performance seems not to be due to delaying the response to the first stimuli. The average RTs for the first stimuli in the present condition and in the random condition of Experiment 1 were 513.8 ms (SD = 167.3) and 571.0 ms (SD = 82.6), respectively, not reliably different [t(58) = 1.68, p = .10]. We also note that there was no reliable difference in accuracy between the two conditions for any type of discordant trial. Thus, having been informed ahead of time about the regularities in discordant pairs resulted in faster RTs for the second stimulus.
This suggests that the lack of improvement in RT in Experiment 2 is due to a failure to learn the statistical regularities of discordant pairs, rather than a failure to exploit the regularities despite learning them. General Discussion Our first experiment documents the control over attention that observers can exercise when they learn that the congruency of one stimulus tends to match that of its successor. Importantly, this effect is not due to repetition priming, since the response required by either stimulus in a concordant pair was randomly determined. The consistent reduction in RT for the second stimulus across all types of trials suggests that the effect is driven by the learning of the concordant structure. That is, a congruent stimulus tends to be followed by another congruent stimulus, or an incongruent stimulus by another incongruent one. Remarkably, such learning occurred at an implicit level, because the vast majority of participants were unaware of the concordant structure. Presumably, the control is based on extending attention to the flankers for upcoming congruent stimuli and restricting it for incongruent ones. The same RT effect was observed for discordant trials, where the congruency of the first stimulus mismatches that of the second, but only when such structure was explicitly mentioned (Experiment 3). In contrast, discordance (mismatch of congruency) was difficult to learn implicitly (Experiment 2). Our results suggest that cognitive control, a process which usually involves the active maintenance of explicit goals in working memory, can be governed by the implicit learning of stimulus relationships. Unlike most studies on cognitive control, participants in Experiment 1 were not informed about concordance, and instead learned the regularities spontaneously during the experiment.
Such learning can be highly ecological and useful, suggesting that the existence of environmental regularities can allow participants to automatically increase their attentional control, even in the absence of explicit instructions or awareness. The current results also suggest that the Gratton effect can be driven by a failure to learn the discordant structure, rather than a failure to exploit the regularities despite learning them. Regardless of concordance or discordance, RT declined for all types of trials when the congruency of the first stimulus predicted that of the second stimulus, implicitly in Experiment 1 and explicitly in Experiment 3. This is consistent with recent findings on how the congruency of previous stimuli influences performance on the upcoming trial, and how increasing the proportion of concordant or discordant trials influences RT [9,10]. It remains to understand why implicit learning is impeded by mismatching congruency between stimuli whereas it is possible when congruency matches (Experiment 2 versus 1). The two situations, after all, convey equivalent information. Several factors can potentially explain such a failure in learning. First, it may be more difficult to perceive discordance than concordance [11,12], since the former involves a change in stimulus property (e.g., congruency predicts incongruence), whereas the latter involves no such change (e.g., congruency predicts congruency). Second, even if discordance can be perceived, it may be more effortful to switch between attending narrowly to the central arrow and attending broadly to the whole stimulus, as evidenced by various task switching costs in previous studies [8,13]. This switch cost may not allow learning to be fully expressed.
Third, since a discordant pair always involves the alternation between a congruent and an incongruent stimulus, such alternation could impede feature integration on the second stimulus [14] or result in negative priming [5,6], which can prevent the learning of regularities between stimuli. Specifically, feature integration on discordant pairs may slow down performance because partial feature repetition in the second stimulus may require the feature binding of the first stimulus to be undone. This account could explain the differences in RT between concordant and discordant pairs (see Table 3). Finally, since cognitive control can be configured by the occurrence of conflict [15], attentional control may be increased following high-conflict incongruent trials, and decreased following low-conflict congruent trials. This suggests that incongruent trials can trigger a narrowing of attention, which would in turn lead to faster RTs for upcoming incongruent trials, while congruent trials can induce a broadening of attention, which would lead to faster RTs for upcoming congruent trials but slower RTs for incongruent trials. This may explain the faster RTs for concordant trials in Experiment 1 and slower RTs for discordant trials in Experiment 2. Further studies are needed to tease these ideas apart. Other future research directions might profitably examine the generalizability of the effect of implicitly learned stimulus contingencies on cognitive control [16], by using numeric stimuli or embedding new kinds of structure in the trial sequence. In the current study, the stimuli were grouped in pairs. This raises the question of how the history of congruency (e.g., the last five trials vs. the last ten trials) influences attentional control on upcoming trials. Finally, it remains to be seen how long the effects of implicit learning on cognitive control last.
Scattering from Singular Potentials in Quantum Mechanics In non-relativistic quantum mechanics, singular potentials in problems with spherical symmetry lead to a Schrödinger equation for stationary states with non-Fuchsian singularities both as r tends to zero and as r tends to infinity. In the sixties, an analytic approach was developed for the investigation of scattering from such potentials, with emphasis on the polydromy of the wave function in the r variable. The present paper extends those early results to an arbitrary number of spatial dimensions. The Hill-type equation which leads, in principle, to the evaluation of the polydromy parameter is obtained from the Hill equation for a two-dimensional problem by means of a simple change of variables. The asymptotic forms of the wave function as r tends to zero and as r tends to infinity are also derived. The Darboux technique of intertwining operators is then applied to obtain an algorithm that makes it possible to solve the Schrödinger equation with a singular potential containing many negative powers of r, if the exact solution with even just one term is already known. (i) Repulsive singular potentials make it possible to obtain a fairly accurate description of the short-range part of the nucleon-nucleon interaction. (ii) The (p, p) and (p, π) processes can be interpreted in terms of complex potentials r^{-n}, with n > 2. (iii) Repulsive singular potentials also reproduce the interactions of nucleons with K-mesons, and α–α scattering processes. (iv) The Lennard-Jones potential, proportional to r^{-12}, can be used to study interactions among the overlapping electron clouds of non-polar molecules.
(v) At a field-theoretical level, it appears quite remarkable that non-renormalizable field theories give rise to effective potentials in the Bethe-Salpeter equation which are singular [1,2], whereas super-renormalizable and renormalizable field theories give rise to regular or transition-type effective potentials, respectively. There was therefore the hope that any new insight gained into the analysis of non-relativistic potential scattering in the singular case could be eventually used to obtain a better understanding of quantum field theories for which perturbative renormalization fails (cf section 6). (vi) In particular, one might then hope to be able to "map" the analysis of quantum gravity based on the Einstein-Hilbert action (plus boundary terms), which is well known to be incompatible with the requirement of perturbative renormalizability [16], into a scattering problem in the singular case, for which the Schrödinger equation for stationary states has non-Fuchsian singularities (see appendix) both as r → 0 and as r → ∞. With our notation, q is the number of spatial dimensions, and l(l + q − 2) is obtained by studying the action of the Laplace-Beltrami operator on wave functions belonging to the tensor product [17] L^2(R^+, r^{q-1} dr) ⊗ L^2(S^{q-1}, dΩ). Moreover, with a standard notation, one has k^2 ≡ 2mE/ħ^2 and V(r) ≡ (2m/ħ^2) U(r) (of course, the energy E is positive in a scattering problem), with U(r) the potential term in the original form of the Schrödinger equation. From now on, it is V(r) which will be referred to as the potential, following the convention in the literature. Among the analytic results obtained so far in the investigation of potential scattering in the singular case, we find it appropriate to mention what follows. (i) A constructive determination of the S-matrix, based on the polydromy properties of the wave function (see appendix) and on the Hill equation for the polydromy parameter [6,8].
(ii) Perturbative technique for the potential V(r) ≡ g^2 r^{-4} in three dimensions, by re-expressing the radial Schrödinger equation as a modified Mathieu equation [10], with evaluation of S-matrix and Regge poles. (iv) Generalization of the JWKB method to arbitrary order, with rigorous error bounds [14]. In our paper, sections 2 and 3 apply the method of [6,8] to the Schrödinger equation for stationary states in three or more spatial dimensions, proving that a simple but deep relation exists between the corresponding Hill equations in two and in three or more spatial dimensions. Section 4 presents, for completeness, the JWKB analysis of the wave function, jointly with its limiting behaviour as r → 0 and as r → ∞. Section 5 studies the application of the intertwining operator technique to singular potential scattering. Results and open problems are described in section 6. Schrödinger equation for stationary states Following the remarks in the introduction, we first study the Schrödinger equation for stationary states in three spatial dimensions in a central potential: What is crucial is the polydromy of the wave function in the r variable. Indeed, if the potential V(r) is a single-valued function of r, one can find two independent solutions where χ_1 and χ_2 are single-valued functions of r, and γ is a parameter which can be determined from a transcendental equation (see below). The general solution of Eq. (2.1) is therefore of the form ψ(r) = α_1 ψ_1(r) + α_2 ψ_2(r). (2.4) Remarkably, one can compute directly χ_1(r) and χ_2(r) and study their behaviour as r → 0 and as r → ∞ [6,8]. For this purpose, the following Laurent expansions are used (the subscript for χ is omitted for simplicity): These expansions hold because V(r) is assumed to be an analytic function in the complex-r plane, with singularities only at infinity and at the origin [6,8]. The Laurent series (2.5) and (2.6) are now inserted into Eq.
(2.1), which is equivalent to the differential equation (cf [6,8]). One thus finds an infinite system of equations for the coefficients (cf [8]), where det(H) exists since the double series converges for all values of γ which do not correspond to zeros of the denominator. At this stage one can appreciate the substantial difference between regular and singular potentials. In the former case, u_n is non-vanishing only for positive n. In the singular case, however, the presence of negative powers in the Laurent series (2.5) gives γ as the solution of a transcendental equation, the vanishing of det(H) being necessary and sufficient to find non-trivial solutions of the system (2.8).

Equation for the γ parameter

To evaluate F(γ), we point out that, on introducing the definition (3.1), a simple but fundamental property emerges which makes it possible to perform the three-dimensional analysis by relying entirely on the investigation in two spatial dimensions, because H_{n,m} takes the two-dimensional form and hence, from the work in [6,8], one knows that F is an even periodic function of γ, with unit period [8]. The equation is, as we said in section 2, a transcendental equation. If a root, say x, is known, at least approximately, one can then evaluate the desired γ parameter from the definition (3.1). The ground is now ready for understanding the key features of singular potential scattering in an arbitrary number of spatial dimensions. For this purpose, we remark that, upon setting ψ(r) = r^γ χ(r), Eq. (1.1) leads to a second-order equation for χ (cf (2.7)), and one can re-express Eq. (3.8) in a form to which the Laurent expansions (2.5) and (2.6) lead to an infinite system of equations for the coefficients c_n (cf (2.8)), where λ² is defined as in (3.2), whereas the notation (2.10) remains unchanged. Thus, one can always perform the analysis in terms of the infinite matrix (3.4), provided that one defines γ and λ² as in (3.9) and (3.12), respectively.
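Because the displayed equations (2.2)-(2.6) were lost in extraction, the following LaTeX sketch records the standard form of the polydromy ansatz the text describes (a hedged reconstruction; the exact labels, signs, and normalizations may differ from the authors'):

```latex
% Two independent solutions with single-valued factors (cf. (2.2)-(2.3)):
\psi_1(r) = r^{\gamma}\,\chi_1(r), \qquad \psi_2(r) = r^{-\gamma}\,\chi_2(r),
% Laurent expansions inserted into the radial equation (cf. (2.5)-(2.6)):
\chi(r) = \sum_{n=-\infty}^{\infty} c_n\, r^{n}, \qquad
V(r)    = \sum_{n=-\infty}^{\infty} v_n\, r^{n}.
```

The polydromy of ψ under r → r e^{2πi} is then carried entirely by the prefactor r^{±γ}, which is what makes γ determinable from a Hill-type transcendental equation.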
The resulting Hill-type equation, which leads, in principle, to the evaluation of the fractional part of the polydromy parameter γ, involves an even periodic function of γ + (q − 2)/2.

Asymptotic form of the solutions

Since one might eventually be interested in the S-matrix, it is quite important to study the limiting behaviour of stationary states as r → 0 and as r → ∞. In the former case, one can perform a JWKB analysis of Eq. (1.1) by means of the standard ansatz. This leads to the equation (the prime denoting differentiation with respect to r) whose leading terms, if the contribution on the second line of Eq. (4.2) is neglected in a first approximation, yield the equations for the phase and, for some constant β, the prefactor A(r) in the form (4.7). To second order in ℏ, one has to consider the second line of Eq. (4.2). On taking the prefactor A(r) in the form (4.7), one has then to evaluate the phase S(r) from the equation (4.9). Only an approximate calculation of the square root in Eq. (4.9) is possible, and hence we do not present further calculations along these lines. One should bear in mind, however, that the JWKB expansion has an asymptotic nature, and rigorous error bounds can be obtained [14]. In particular, in the physically more relevant case of three spatial dimensions, one obtains the asymptotic expansion (4.11) as r → 0. Moreover, as r → ∞, one has the familiar asymptotic behaviour (4.13). The S-matrix is given by the formula of [6,8], where the A and B parameters are the ones occurring in the asymptotic expansions (4.11) and (4.13), and can be obtained by means of the saddle-point method [6,8].

Intertwining operators for singular potentials

Since exact solutions of singular scattering problems in terms of special functions are known in a few cases only, it appears quite important to look for a technique that makes it possible to generate solutions for complicated problems, relying on what is known in simpler cases. The aim of the Darboux method is to generate families of isospectral Hamiltonians. It relies on a theorem which, in modern language, can be stated as follows [23].
Let ψ be the general solution of the Schrödinger equation (5.1). If ϕ is a particular solution of (5.1) corresponding to an energy eigenvalue ε = E, then the transformed wave function is the general solution of the Schrödinger equation with the partner potential. In other words, if two Hamiltonian operators, say H_A and H_B, are given, one looks for a differential operator, say D, that intertwines them [24]. It is then possible to relate the eigenfunctions of H_A and H_B by using the action of D (see below). Here, we focus on one-dimensional problems, with V_0 and V_1 the "potential" functions, and G another function, whose form is determined by imposing the condition (5.6). This reads, explicitly, for all functions f which are at least of class C³. On imposing Eq. (5.10), one finds exact cancellation of the terms −d³f/dx³ and −G d²f/dx², since they occur on both sides with the same sign. Hence one deals with an equation which, by virtue of the arbitrariness of f, implies Eq. (5.12). It is now possible to use Eq. (5.12) to re-express the remaining condition, which is solved for some constant C. Equation (5.14) is known as the Riccati equation. Its non-linear nature makes it desirable to develop an algorithm to relate it, instead, to the solution of a linear problem. This is indeed achieved by considering the function ϕ defined by (5.15), with V replaced by V_1, and V replaced by V_0. One then finds, by virtue of (5.14) and (5.15), that ϕ obeys a linear second-order equation. This is a simple but deep result: one first has to find the eigenfunctions of H_0, say ϕ, belonging to the eigenvalue −C. Once this is achieved, the desired function G is obtained from (5.15), and hence the intertwining operator is D = d/dr + G. In the applications, it is also convenient to use Eq. (5.12) to express Eq. (5.14) in a form which leads to a second-order differential operator acting on y of the form (5.7) or (5.8).
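Several of the displayed relations (5.12)-(5.17) did not survive extraction; the standard Darboux/intertwining construction the text describes can be sketched in conventional SUSY-QM notation as follows (a hedged reconstruction whose signs and labels may differ from the authors' conventions):

```latex
% Intertwining operator built from an eigenfunction \varphi of H_0:
D = \frac{d}{dr} + G(r), \qquad
G(r) = -\frac{\varphi'(r)}{\varphi(r)}, \qquad
-\varphi'' + V_0\,\varphi = -C\,\varphi .
% Riccati equation for G and the partner potential:
G^2 - G' = V_0 + C, \qquad V_1 = V_0 + 2\,G' ,
% so that, with H_i = -d^2/dr^2 + V_i, one has the intertwining relation
D\, H_0 = H_1\, D .
```

With these conventions, an eigenfunction ψ of H_0 at any eigenvalue E is mapped by D to an eigenfunction Dψ of H_1 at the same eigenvalue, which is the sense in which the two Hamiltonians are isospectral.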
However, the choice of a suitable intertwining operator, aimed at relating operators H_A and H_B whose potential terms differ in a somehow substantial way, is a non-trivial task. For example, for a particular choice of V_1, Eq. (5.18) may then be satisfied (cf (5.9)) provided that A = k², B = −2k and C = 0. However, the resulting potential V_0(r) is found, from Eq. (5.12), to be such that the intertwining operator ends up relating operators H_A and H_B whose potential terms have precisely the same functional form. A scheme of broader validity, however, is obtained by looking for V_1(r) and G(r) in the form of (Laurent) series, i.e. V_1(r) = Σ_{n=−∞}^{∞} a_n r^n (5.23) and G(r) = Σ_{n=−∞}^{∞} b_n r^n (5.24). The insertion of (5.23) and (5.24) into Eq. (5.18) leads to an infinite system that should be solved, in principle, for b_n, for all n. One then finds, from Eq. (5.12), a Laurent series for V_0 as well. For example, if one takes V_1(r) ≡ g²r⁻⁴, one has a_n = g² δ_{n,−4} (5.28), and hence one deals with an infinite system in which n = −1 leads to a trivial identity. Now equations (5.30) and (5.31) imply that b₋₃ = b₋₂ = 0, and hence g² = 0 from (5.32), which is incompatible with our assumptions. The remaining equations (5.33)-(5.35) allow for b₋₁ = 1, further to b₋₁ = 0. However, the implications remain of high interest: to find non-trivial solutions with g² ≠ 0 and C ≠ 0, one needs a large number of b_p coefficients (maybe infinitely many), including those with p > 0. This still means that one has the opportunity to solve the Schrödinger equation with a complicated singular potential, starting from what one knows when the potential equals g²r⁻⁴ [8]. For this purpose, on denoting again by ϕ the eigenfunction of H_0 belonging to the eigenvalue −C, and by χ ≡ Dϕ the eigenfunction of H_1 belonging to the same eigenvalue, we notice that the desired ϕ can be written in an integral form, where K(r, r′) denotes the Green kernel of the intertwining operator D ≡ d/dr + G(r).
We need such an integral formula because we have chosen, in our particular example, the form of the potential term V_1(r) in the Hamiltonian H_1, for which the scattering states are already known in the literature [8]. The unknowns are instead the scattering states resulting from the Hamiltonian operator with potential term equal to V_0(r) (see (5.26)).

Concluding remarks

Our paper has studied some aspects of scattering from singular potentials in quantum mechanics. Its contributions are as follows.
(i) The technique of Fubini and Stroffolini [6,8], with emphasis on the polydromy properties of the wave function, has been applied to an arbitrary number of spatial dimensions, say q, when the potential admits a Laurent series expansion. The equation obeyed by the polydromy parameter, γ, involves a function which is an even periodic function of γ + (q−2)/2. Interestingly, one can rely entirely on the analysis performed in [6,8], provided that one considers the parameters defined in (3.9) and (3.12) (strictly, the authors of [6,8] start from three dimensions, but use a transformation [9] leading to an equation formally analogous to the radial part of the stationary Schrödinger equation in two dimensions). Ultimately, one might want to use these properties to study quantum field theories which are not perturbatively renormalizable, according to the original motivations for this research field [1,2,9]. For this purpose, it seems crucial, to us, to consider the quantum gravity problem, focusing (at least) on the following questions.
(i) What is the counterpart, in quantum gravity, of the Bethe-Salpeter equation containing effective potentials of the singular type? As is well known, this equation arises in the course of studying the quantum theory of relativistic bound states, and unfortunately a simple extension of the Schrödinger equation is not available [25].
Even on neglecting curvature effects due to gravitational fields, one then faces retardation effects which lead to an extra relative time variable in the problem [25]. An alternative description uses a mediating field, whose quantum properties, however, cannot be ignored [25]. When a quantum theory of gravity is considered in a space-time approach [26], one may expect to use the (formal) theory of the effective action, with a corresponding set of integro-differential equations. These should be solved, in principle, by using the functional calculus. But even if one were able to achieve so much, and hence derive an effective potential which is a gravitational counterpart of the potential normally used to reduce the number of degrees of freedom of relativistic bound-state problems, there would remain the problem of giving a proper interpretation of such potentials, since they seriously affect the exact theory and may introduce fictitious singularities (see page 493 of [25]).
(ii) What kind of results can then be "imported" on mapping quantum gravity into a scattering problem from singular potentials (e.g. the asymptotic behaviour of the phase shift [9,13], the exact or approximate solutions derived with some particular choices of singular potentials [1-8, 10, 14], the existence theorem for the wave operators [12])?
(iii) How fundamental is the Darboux method of intertwining operators [19-22] proposed in our section 5? Since the variable phase approach to potential scattering also relies on an equation of Riccati type [7], such a question appears to be non-trivial.
The above issues seem to point out that new perspectives are in sight in the analysis of potential scattering for a wide class of singular potentials, with possible implications for a long-standing problem, i.e. the key features of a quantum theory of the gravitational field (see [16] and references therein).
Hence we hope that the present work, although devoted to some technical issues, may lead to a thorough investigation of quantum gravity from a point of view well grounded in the general framework of modern high energy physics (cf [27]).

Appendix

To study the point at infinity, one defines the new independent variable t ≡ 1/r and studies the transformed equation at t = 0. When all singular points are Fuchsian, the corresponding differential equation is said to be totally Fuchsian. The non-Fuchsian singularities are, by contrast, singular points of (A1) for which the above conditions on poles and zeros of the functions p₁ and p₂ are not fulfilled. In our paper, the word polydromy refers to the well known property of some functions of a complex variable of being multi-valued functions of the independent variable. For example, if z is complex, its logarithm is given by the formula log(z) = log |z| + i arg(z). (A6)
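As a hedged reconstruction of the missing displayed formulas of the appendix (Eq. (A1) and the pole conditions), the standard Fuchsian criterion for an equation in the functions p₁ and p₂ mentioned above reads:

```latex
% Second-order ODE in the complex r-plane (cf. (A1)):
\frac{d^2\psi}{dr^2} + p_1(r)\,\frac{d\psi}{dr} + p_2(r)\,\psi = 0 .
% A singular point r_0 is Fuchsian (i.e. a regular singular point) if
% (r - r_0)\, p_1(r) \text{ and } (r - r_0)^2\, p_2(r)
% are analytic at r_0; equivalently, p_1 has at most a first-order pole
% and p_2 at most a second-order pole there.
```

A non-Fuchsian (irregular) singular point is one where at least one of these pole bounds fails, which is precisely the situation as r → 0 and r → ∞ for the singular potentials considered in the paper.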
Cases of cow reproductive disorders at The Northern Bandung Dairy Farmer Cooperative by using Geographic Information System (GIS) approach

Objective: This study aimed to determine the percentage of reproductive disorder cases in dairy cows based on seasonal risk factors and body condition score (BCS), classified by area.

INTRODUCTION

Dairy cows are one of the most common farm animals in Indonesia. Based on data from the Central Statistics Agency or Badan Pusat Statistik (BPS) for 2021, Indonesia had 578,579 dairy cows, with East Java as the region with the highest number of dairy cows at 301,780 cows. West Java was in third place with a population of 119,915 dairy cows, with West Bandung Regency as the area with the highest population [1]. The high population is due to several dairy cooperatives in West Java, one of which is the North Bandung Cattle Breeders Cooperative (KPSBU) Lembang. In 2021, KPSBU Lembang had 19,712 dairy cows.

The large cattle population in the KPSBU Lembang work area often encounters animal health problems, one of which is reproductive disorders. In 2021 there were more than 6,000 cases of reproductive disorders in dairy cattle at KPSBU Lembang, with the four most frequent being retained placenta, dystocia, endometritis, and ovarian function disorder (follicular cysts and corpus luteum cysts). Reproductive disorders increase farmers' costs because of the extended calving interval and the budget for cow treatment [2]. According to Rukayah [3], a farmer can lose up to ten million over seven years of cattle production due to an extended calving interval.
This research utilized geographic information system applications to produce distribution maps containing information on the relationship between the percentage of cases and disease risk factors, presented on a map of the area. Geographic information systems can sort and analyze data from a database in the form of a map [4]. This study aimed to determine the percentage of reproductive disorder cases based on seasonal risk factors and body condition score (BCS) of cows, classified by area.

Time and location

This research was conducted from June 20 to July 30, 2022 in Lembang District, West Bandung Regency, West Java (Figure 1).

Research design

This research was a descriptive study describing the types and proportions of reproductive disorder cases in dairy cattle at KPSBU Lembang traditional farms from 2019 to 2021 based on seasonal factors and cow BCS. Furthermore, this study described the distribution of reproductive disorder cases in dairy cows at the KPSBU Lembang traditional farms in 2021 using a geographic information system. The data were secondary data obtained from KPSBU Lembang and were processed using the Microsoft Excel application. The area-based distribution maps were then made from the processed data using a geographic information system application.
Research procedure

The research procedure began with collecting secondary data on dairy cattle and cases of reproductive disorders from the KPSBU Lembang database. Furthermore, geospatial data for Lembang District were collected from the Indonesia Geospatial Portal website. Secondary data were processed using the Microsoft Excel application to find the percentage of reproductive disorder cases in cows, while geospatial data were processed using the Quantum Geographic Information System (QGIS) application to divide the KPSBU work area based on village boundaries. After that, tables and diagrams were made to present the average BCS and the percentage of reproductive disorder cases. Areas on the distribution maps were divided into groups based on the rate of reproductive disorder cases in cows. The data processing results were shown in tables, diagrams, and distribution maps, then analyzed descriptively based on seasonal factors and BCS.

RESULTS

Based on secondary data from KPSBU Lembang, data were obtained regarding reproductive disorders from 2019 to 2021. The reproductive disorders discussed in this study were retained placenta, dystocia, ovarian function disorder (follicular cyst and corpus luteum cyst), and endometritis. Cases of reproductive disorders in dairy cattle at KPSBU Lembang in 2019-2021 are shown in Table 1. The number of cases of reproductive disorders in dairy cattle at KPSBU Lembang per season in 2019-2021 is shown in Figure 2.
Mapping the distribution of reproductive disorder cases at KPSBU Lembang was carried out using the QGIS application. The mapping was based on a combination of TPK, the village areas in Lembang District, and reproductive disorder cases in 2021. The mapping was divided into four distribution maps based on the type of reproductive disorder, shown in Figure 4, Figure 5, Figure 6, and Figure 7. The distribution maps were divided into five groups: group 1 with a ratio of 0 cases per 1000 cows, group 2 with a ratio of 1-100 cases per 1000 cows, group 3 with a ratio of 101-250 cases per 1000 cows, group 4 with a ratio of 251-500 cases per 1000 cows, and group 5 with a ratio of more than 500 cases per 1000 cows.

Several TPK had non-ideal BCS values for dairy cows. Non-ideal BCS values in dairy cows increase the risk of retained placenta because they can reflect mineral and vitamin deficiencies [6]. According to Beagley et al. [7], deficiency of vitamin E and selenium increases the risk of retained placenta. Vitamin E and selenium act as antioxidants in the body to maintain the immune system. In poor condition, the cow's body will experience immunosuppression, particularly in the chemotaxis ability and the number of lymphocyte cells that play a role in releasing the placenta [7].

DISCUSSION

The highest percentages of retained placenta cases were found at TPK Suntenjaya in 2019 (185 cases) and 2020 (136 cases), and the second highest in 2021 (118 cases). Meanwhile, the average BCS at TPK Suntenjaya was 2.5, as presented in Figure 3. These data show that dairy cows at TPK Suntenjaya had a low average BCS. According to Phillips [8], the ideal BCS in dairy cows is 3.5-4.0 on a scale of 1-5. Cattle with a BCS of 1.0-2.0 or more than 4.0 tend to have poor reproductive performance [8].
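The five-group case-ratio classification described above for the distribution maps can be expressed as a small helper function (a minimal sketch; the function name and the inclusive bin edges are assumptions, not part of the paper's actual QGIS workflow):

```python
def case_group(cases: int, cows: int) -> int:
    """Assign a region to one of the five distribution-map groups
    based on its reproductive-disorder case ratio per 1000 cows."""
    ratio = 1000.0 * cases / cows
    if ratio == 0:
        return 1   # 0 cases per 1000 cows
    if ratio <= 100:
        return 2   # 1-100 cases per 1000 cows
    if ratio <= 250:
        return 3   # 101-250 cases per 1000 cows
    if ratio <= 500:
        return 4   # 251-500 cases per 1000 cows
    return 5       # more than 500 cases per 1000 cows
```

In QGIS itself this corresponds to a graduated symbology with manually defined class breaks at 0, 100, 250, and 500 cases per 1000 cows.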
Based on Figure 2, cases of retained placenta in 2019-2021 were higher during the dry period. In the dry season, animals are more at risk of heat stress [9]. Divers and Peek [6] state that heat stress is a predisposing factor for retained placenta. Cattle that experience heat stress during the dry period tend to experience immunosuppression, such as decreased cell phagocytosis, a weakened cell proliferation system, and a reduced immune response [10]. In addition, another factor indirectly related to the dry season is that it is more difficult for farmers to meet cows' nutritional needs due to the difficulty of finding good quality feed [9].

The lowest percentages of retained placenta cases were at TPK Nyampai in 2019 (3 cases) and 2020 (3 cases), while in 2021 the lowest was at TPK Cikawari (8 cases). In Figure 3, TPK Nyampai had an average BCS of 3.5 and TPK Cikawari an average BCS of 3. These results indicate that the cattle were getting good feed intake [8]. In addition to BCS and season, good housing management can also prevent animals from being exposed to heat stress, which is a predisposing factor for retained placenta [6].

The distribution of retained placenta cases is presented in Figure 4. The majority of areas, including Cikahuripan-Gudang Kahuripan Village, had case percentages in group 3, with a ratio of 101-250 cases per 1000 cows.

The high percentage of dystocia cases in dairy cattle at TPK Barunagri and TPK Suntenjaya may be due to the low BCS of the cattle. In Figure 3, the average BCS of dairy cows at TPK Suntenjaya was 2.5, and at TPK Barunagri 2.75. These data show that the two TPKs had cattle BCS below the ideal value of 3.5-4.0 on a scale of 1-5 [8], while TPK Cikawari and TPK Cibolang had the lowest percentages of cases, with average cattle BCS of 3.0 and 3.5.
According to Phillips [8], a cow's BCS can range from 2.0 to 3.0 after giving birth but will return to the ideal value at the end of the lactation period. If the dam does not reach the ideal BCS at the time of delivery, the risk of dystocia increases. Cows should be neither overfed nor underfed when entering the last trimester of pregnancy. A cow with below-ideal BCS will experience a lack of nutrition in the muscles, so muscle tone decreases [7]. This condition can cause dystocia in the dam due to uterine inertia or inadequate pelvic relaxation [11]. Conversely, a cow with a BCS above the ideal value, or with excess feeding, also risks dystocia due to a large fetus and a narrower birth canal caused by fat deposits in the pelvic area [11].

There was an exceptional case at TPK Cilumber, which had a BCS value of 3.5 but was the fourth TPK with the most dystocia cases in 2020. The cases at TPK Cilumber may be caused by other factors such as an oversized fetus, abnormalities in fetal position such as bent front legs, changes in fetal shape, or a narrow birth canal [11].
Dystocia cases at KPSBU Lembang decreased in 2020 but increased again in 2021. In 2019, more dystocia cases occurred during the dry season. These data show that season was not a risk factor for dystocia cases at KPSBU Lembang. Another possible factor influencing dystocia cases is housing management. Cattle at KPSBU Lembang are generally reared intensively. Intensively reared cattle tend to experience more dystocia than extensively reared cattle because the cows are less mobile [11]. According to Mekonnen and Nibret [12], exercise can reduce the risk of developing dystocia. Muscles that are often exercised will contract more strongly during calving [11].

Four villages, Cikahuripan-Gudang Kahuripan Village, Lembang-Wangunsari Village, Pagerwangi-Kayuambon-Sukajaya Village, and Langensari-Sukajaya Village, were included in group 2 with a ratio of 1-100 cases per 1000 cows. Only Suntenjaya Village, with the highest percentage of dystocia cases, fell into group 4. Based on the distribution map, no areas belonged to group 1 or group 5.

The average BCS of cows at TPK Suntenjaya, below the ideal value, increases the risk of endometritis. According to Phillips [8], low BCS can cause a negative energy balance which increases the risk of infection. This happens because deficiency of nutrients such as vitamin E and selenium reduces the ability of lymphocytes and neutrophils in phagocytosis, cell proliferation, and the immune response [10]. The study of Ohtsuka et al. [13] showed that, under low BCS conditions, cows have a higher number of lymphocytes circulating in the body compared to cows with ideal BCS.

Cases of endometritis were higher during the rainy season, as shown in Figure 2. However, no studies have stated that season is the leading risk factor for endometritis cases. Research conducted by Judi et al.
[14] stated that there was no relationship between the increase in endometritis cases and the season factor at KPSBU Lembang. Other studies also support the statement that season is not a risk factor for endometritis cases [15,16].

The high number of endometritis cases during the rainy season may be related to poor sanitation of the cow sheds in this period. According to Judi et al. [14], wet environmental conditions and standing water increase the risk of bacterial infection. This condition is exacerbated when cows experience retained placenta, injuries during calving, wounds during mating, or contamination during treatment of the reproductive tract [17]. This may also be the reason for the high percentage of endometritis cases at TPK Cilumber, even though its average BCS of 3.5 was considered ideal according to Phillips [8].

The distribution of endometritis cases is shown in Figure 6. Most villages had cases in group 2, with a ratio of 1-100 cases per 1000 cows. Only Suntenjaya Village, Cibodas Village, Sukajaya Village, and Cibogo Village were included in group 3, with a ratio of 101-250 cases per 1000 cows. Based on the distribution map, no areas fell into group 1, group 4, or group 5.

Based on Figure 3, the average BCS of cows at TPK Cibodas and TPK Pojok was below the ideal value (3.5-4.0) [8]. On the other hand, TPKs with an ideal average BCS of 3.5, such as TPK Kampung Baru, TPK Cijanggle, and TPK Nagrak, had a lower percentage of cases. Poor nutritional intake, characterized by low BCS, can reduce the stimulation of gonadotropin-releasing hormone (GnRH).
The ability to stimulate GnRH is influenced by the acetic acid content in the body [18]. Acetic acid is one of the volatile fatty acid substrates produced from the fermentation of organic matter, for example concentrate, in the rumen [19]. Decreased GnRH stimulation will reduce FSH and LH levels in the blood [18]. Decreased levels of FSH and LH result in a reduced ability of the ovaries to form follicles and the corpus luteum [20].

Figure 2 shows that cases of ovarian function disorder were more common in dairy cows during the dry season than in the rainy season. This condition may be related to heat stress, which reduces the cows' appetite [11], meaning that seasonal factors also affect cow BCS. According to Long et al. [21], stress can trigger an increase in cortisol, adrenocorticotropic hormone, or progesterone above normal levels, which can delay or inhibit the secretion of GnRH and LH, thereby inhibiting follicular development.

Another factor that may affect ovarian function disorder is the rearing system. Dairy cows at KPSBU Lembang are kept in an intensive system with a rope tied to the cow's neck. According to Hermadi et al. [20], intensively reared cattle have an increased risk of decreased ovarian function. Research by Long et al.
[21] showed that keeping cows with their necks tied can decrease their fertility rate.

The distribution of cases of ovarian function disorder (follicular cysts and corpus luteum cysts) is shown in Figure 7. The majority of areas had case percentages in group 3, with a ratio of 101-250 cases per 1000 cows. Cikahuripan-Gudang Kahuripan Village, Kayuambon-Pagerwangi-Mekarwangi Village, and Suntenjaya Village belonged to group 2, with a ratio of 1-100 cases per 1000 cows. Meanwhile, Cibodas Village had the highest percentage of cases of ovarian function disorder, falling into group 4 with a ratio of 251-500 cases per 1000 cows. Based on the distribution map, no areas belonged to group 1 or group 5.

Based on the four distribution maps of reproductive disorder cases, all regions experienced cases of reproductive disorders, and no area had a ratio of more than 500 cases per 1000 cows. This study did not examine the spatial relationship between the high number of reproductive disorder cases on the distribution maps and the distribution area. The housing system [6] and cow rearing [21] used by breeders may influence the magnitude and distribution of reproductive disorder cases.

CONCLUSION

The reproductive disorders in dairy cows at KPSBU Lembang are retained placenta, dystocia, ovarian function disorder (follicular cyst and corpus luteum cyst), and endometritis. Dystocia and retained placenta are the most frequent cases at KPSBU Lembang during the dry and rainy seasons. Areas with dairy cows of non-ideal BCS showed the highest percentages of each type of reproductive disorder.

The cow populations from 2019 to 2021 were 21,643, 19,676, and 19,712 dairy cows respectively, spread across 26 milk Cumulative Collection Points (Tempat Pengumpulan Kumulatif, TPK), which were used as the population in this study.

Figure 3. Diagram of the average BCS of dairy cows at KPSBU Lembang for 2019-2021.
Figure 7. Distribution map of ovarian function disorder at KPSBU Lembang in 2021.
Synthesis, molecular docking, and cytotoxicity of quinazolinone and dihydroquinazolinone derivatives as cytotoxic agents

Background

Cancer is a major cause of morbidity and mortality and a major public health problem worldwide. In this context, two series of quinazolinone 5a-e and dihydroquinazolinone 10a-f compounds were designed and synthesized as cytotoxic agents.

Methodology

All derivatives (5a-e and 10a-f) were synthesized via straightforward pathways and characterized by FTIR, 1H-NMR, CHNS elemental analysis, and melting point. All the compounds were evaluated for their in vitro cytotoxicity using the MTT assay against two human cancer cell lines (MCF-7 and HCT-116), with doxorubicin as the standard drug. The test derivatives were additionally docked into the PARP10 active site using Gold software.

Results and discussion

Most of the synthesized compounds, especially 5a and 10f, were found to be highly potent against both cell lines. The synthesized compounds demonstrated IC50 values in the range of 4.87-205.9 μM against the HCT-116 cell line and 14.70-98.45 μM against the MCF-7 cell line, compared with doxorubicin (IC50 values of 1.20 and 1.08 μM after 72 h, respectively), indicating promising activity of the synthesized compounds.

Conclusion

The quinazolinone 5a-e and dihydroquinazolinone 10a-f compounds showed potential activity against cancer cell lines, which can inform rational design of cytotoxic agents.

Supplementary Information

The online version contains supplementary material available at 10.1186/s13065-022-00825-x.

Introduction

Cancer is a complex disease resulting from perturbations in multiple intracellular regulatory systems, leading to a drastic increase in cell number and thus tumor formation [1-3]. Investigations reveal that cancer was the second major cause of mortality in 2015, with 8.7 million deaths among 17.5 million cases diagnosed with cancer globally [4].
Breast, lung, prostate, and colorectal cancers are recognized as widespread types of invasive cancer, which account for about 4 in 10 of all diagnosed cases [5]. Depending on the type and stage of cancer, the common treatments are radiotherapy, hormone therapy, surgery, and chemotherapy. However, the central problem of the latter is its failure to distinguish between healthy and cancerous cells, which results in inevitable adverse effects on healthy cells [6]. Along the same line, multidrug resistance (MDR) is another major source of conflict in the treatment of cancer, due to the resistance of cancerous cells against traditional chemotherapeutic agents [7]. Therefore, novel approaches to cancer treatment are still needed.

In 2016, we disclosed a novel multi-component strategy to assemble 1,2,3-triazole derivatives of 2,3-dihydroquinazolin-4(1H)-one via click reaction with in situ prepared organic azides [40]. Furthermore, we proposed an innovative approach to the synthesis of quinazolin-4(3H)-ones by employing CuBr and Et3N in 2016 [41]. With this information in hand, we focused on the synthesis of novel quinazolinones and dihydroquinazolinones to obtain more effective cytotoxic agents. All synthesized derivatives were evaluated against the MCF-7 and HCT-116 cancer cell lines (Fig. 1III).

Chemistry

Two straightforward synthetic pathways were adopted to synthesize the target compounds 5a-e and 10a-f, as shown in Scheme 1. The proposed reaction sequence was initiated by treating commercially available isatoic anhydride (1) with aromatic and aliphatic amines (2) in H2O at room temperature to obtain the corresponding 2-aminobenzamides (3) [42]. All compounds 3 were easily prepared and used without further purification. Next, we employed the reaction of compound 3 and phenyl isothiocyanates (4) in the presence of CuBr and Et3N in DMF to achieve the final product 5 (Scheme 1, Method A).
The second strategy concerns the synthesis of compounds 10a-f, in which the intermediate 7 was produced through the reaction between 2-aminobenzamides (3) and 4-(prop-2-yn-1-yloxy)benzaldehyde (6) in the presence of K2CO3 in ethanol at reflux. The triple bond in dihydroquinazolinone (7) allows a click reaction to form the 1,2,3-triazole ring. Accordingly, compound 7 was reacted with in situ prepared (azidomethyl)benzene (9) under Sharpless-type click reaction conditions [43]. Performing the reaction in the presence of CuI (7 mol%) as the catalyst in H2O/t-BuOH (1:1) at room temperature within 24 h led to the formation of the corresponding products 10a-f in acceptable yields (Scheme 1, Method B), according to previously reported procedures [44,45]. The structures of the final products were verified by FT-IR, 1H-NMR, melting point, and CHNS elemental analysis.

Biological activity
Cytotoxic evaluation
The selected compounds 5a-e and 10a-f were evaluated as possible cytotoxic agents against the human colon cancer HCT-116 cell line and the MCF-7 breast cancer cell line by the MTT assay, using doxorubicin as the standard drug. As shown in Table 1, the induced cellular toxicity in the cell lines was studied at 48 and 72 h, and the IC50 value was calculated from the inhibition rates at these durations. The analysis of variance for the transformed response indicated that the cytotoxic effects of the compounds depend on time, for both the MCF-7 and the HCT-116 cell lines.

The first structure-activity relationship (SAR) explorations focused on MCF-7 cells. Assessment of the 5a-e derivatives against MCF-7 demonstrated that 5d, possessing R1 = 4-OMe-C6H4 and R2 = Ph, afforded good potency with IC50 values of 28.84 μM and 24.99 μM after 48 and 72 h, followed by 5a bearing R1 = Ph and R2 = 2-Me-C6H4. It seems that increasing the bulkiness at R1 may improve the potency.
Cytotoxic screening of 10a-f revealed that 10a, as the unsubstituted derivative, exhibited IC50 values of 62.29 μM and 18.88 μM after 48 and 72 h. The incorporation of halogen groups at the R3 position showed different behavior: 4-F (10b) reduced the activity compared to 10a, while para-chlorine (10c) or para-bromine (10d) improved the cytotoxic potency compared to 10a. Noteworthy, the substitution of 4-F-benzyl at the R1 position of 10b produced the most potent derivative in this set, with IC50 values of 41.47 μM and 16.30 μM after 48 and 72 h.

With regard to the HCT-116 cancer cells, testing of compounds 5a-e showed that 5a was the most promising cytotoxic agent, with IC50 values of 7.15 μM and 4.87 μM after 48 and 72 h. Further investigation illustrated that the replacement of Ph with other moieties at R1, as well as the replacement of 2-Me-C6H4 with Ph at R2 (5b, 5c, 5d, 5e), significantly deteriorated the cytotoxic potential. From the screening data of 10a-d, it was revealed that electron-withdrawing substitutions at R3 (10b, R3 = 4-F; 10c, R3 = 4-Cl; and 10d, R3 = 4-Br) decrease the potency compared to 10a as the unsubstituted derivative. By way of illustration, 10b (R1 = benzyl; R3 = 4-F) recorded the least potency in this series, with IC50 values of 183.9 and 63.99 μM. Interestingly, the replacement of benzyl in 10b with the 4-F-benzyl moiety led to a noticeable increase in cytotoxicity in 10f, with IC50 values of 40.35 μM and 10.08 μM after 48 and 72 h.

Overall, concerning the cytotoxic evaluations of 5a-e, 5d was the most active derivative against MCF-7, while 5a, containing Ph at R1 and 2-Me-C6H4 at R2, was the most potent cytotoxic agent against HCT-116. Assessment of 10a-f revealed that compound 10f, bearing 4-F-benzyl at R1 and 4-F at R3, was the most active cytotoxic agent against both tested cell lines.
Next, to determine the safety of 5a, 5d, and 10f, as the most potent derivatives, toward a normal cell line relative to the cancer cell lines, these derivatives were examined on Hek-293 normal cells by the MTT reduction assay. Results are presented in Table 4. As can be seen, derivative 5a demonstrated high toxicity against the Hek-293 cell line, while 5d and 10f demonstrated low toxicity in this cell line.

Molecular docking
Poly(ADP-ribose) polymerases (PARPs) are a family of proteins involved in diverse cellular functions, especially DNA repair and the maintenance of chromatin stability via ADP-ribosylation. PARP10 (ARTD10) is a member of the PARP family that transfers mono-ADP-ribose from the donor nicotinamide adenine dinucleotide (NAD+) onto amino acids of protein substrates [46]. Recent studies have linked the activity of PARP10 to cancer cell survival and DNA damage repair [30]. Silencing of PARP10 in MCF7 and CaCo2 cells decreased the proliferation rate, consistent with a role in cancer [47]. Quinazolin-4-one derivatives (Compound F, Fig. 2) were first discovered by Oregon Health and Science University as effective inhibitors of PARPs involved in mono-ADP-ribosylation [48,49]. Further modification led to the discovery of novel compounds (Compounds G and H, Fig. 2) that inhibited PARP10 [50,51]. According to the literature, the amino acids His887, Gly888, Asn910, Ala911, Tyr914, Tyr919, Ala921, Leu926, Ser927, and Tyr932 are the most important ones in the PARP10 active site [52,53]. Given the similarity of the reported PARP10 inhibitors to the designed structures, molecular docking evaluations were performed to study the binding mode of the most potent compounds 5a, 5d, and 10f in the PARP10 active site. Docking studies of these compounds were carried out using the GOLD docking software.
Validation of the molecular docking method was performed by redocking the crystallographic ligand of the target enzyme into PARP10 (PDB ID: 5LX6), which confirmed the validity of the docking calculations. The ChemScore fitness values of 5a, 5d, and 10f, plus their interactions with residues in the PARP10 active site, are documented in Table 5. Alignment of the best pose of veliparib in the active site of PARP10 with the crystallographic ligand recorded an RMSD value of 0.63 Å. The docked structure of veliparib exhibited interactions with the Tyr919, Ala921, Leu926, Ser927, Tyr932, and Ile987 residues. Moreover, this compound showed three H-bond interactions with Gly888 and Ser927.

Figure 3 shows the docking interactions of compound 5a within PARP10. The docking evaluation depicted four pi-alkyl interactions between the amino-quinazolin-4(3H)-one ring and Ala921, Leu926, Tyr932, and Ile987, as well as one hydrogen-bond interaction between Ala911 and the NH of amino-quinazolin-4(3H)-one. The 2-methylphenyl moiety exhibited one pi-sigma interaction with Val913 and one pi-alkyl interaction with Ala911, plus pi-alkyl interactions with Val913, Tyr917, Tyr919, and Ile987. Also, pi-pi T-shaped and pi-alkyl interactions were recorded between the phenyl and Tyr919 and Ala911, respectively.

According to the results of the 5d docking studies (Fig. 4), the aromatic moiety of 4-methoxyphenyl presented a pi-sigma and a pi-pi T-shaped interaction with Ala911 and Tyr919, respectively. The phenyl pendant demonstrated a pi-pi-stacked interaction with His887 and a pi-alkyl interaction with Ala921. The amino-quinazolin-4(3H)-one also made a pi-alkyl interaction with Tyr919.

The 3D interaction pattern of compound 10f (Fig. 5) showed two pi-pi T-shaped and one pi-alkyl interactions with the 4-fluorobenzyl moiety. The dihydroquinazolin-4(1H)-one ring participated in pi-pi T-shaped and pi-alkyl interactions with Tyr932 and Ala911. Also, the phenoxy linker was fixed through pi-pi T-shaped interactions with His887 and Tyr932.
The triazole ring in the middle of the molecule exhibited hydrogen bonding with Tyr932, plus two pi-sigma interactions with Leu926 and Ile987. The terminal 4-fluorobenzyl of the triazole participated in van der Waals, pi-sigma, and pi-alkyl interactions with Tyr932, Val913, and Ala911, respectively. Overall, the findings of the docking study of the most active derivatives were in line with the results of the cytotoxicity assays.

Materials and methods
Melting points were measured on a Kofler hot-stage apparatus and are uncorrected. The 1H-NMR and IR spectra were recorded on a Bruker 400 NMR spectrometer and an ALPHA FT-IR spectrometer on KBr disks, respectively. The chemical reagents were obtained from Aldrich and Merck. The spectroscopic data of the final products, including 1H-NMR, are available in the supporting information and in our previous studies [41,42].

Syntheses of 3-substituted 2-(arylamino)quinazolin-4(3H)-ones 5 (Method A)
The corresponding 2-aminobenzamide derivatives (3) were synthesized via the reaction of equivalent amounts of isatoic anhydride (1) and an appropriate amine (2) in water at room temperature for 2-5 h [28]. After completion of the reaction, the precipitated products were filtered off, dried at 60 °C, and used in the subsequent reaction without further purification. Then, a mixture of 2-aminobenzamide (3) (2 mmol), the isothiocyanate derivative (4) (2 mmol), CuBr (1 mmol), and Et3N (1 mmol) in DMF (5 ml) was heated at 80 °C for 8-10 h. After the reaction was complete (monitored by TLC), the mixture was filtered through a bed of Celite and washed with AcOEt. Next, H2O (20 ml) was added to the filtrate, which was extracted with ethyl acetate (3 × 15 ml) and dried over Na2SO4. The solvent was then removed under reduced pressure, and the crude reaction mixture was purified by column chromatography on silica gel with petroleum ether (PE)/AcOEt (5:1) as eluent.
All products were recrystallized from PE/AcOEt (1:1) to give the pure products 5 [44,45].

Cytotoxic evaluation
Cell lines and cell culture
The human cancer cell lines MCF-7 and HCT-116, as well as Hek-293 normal cells, were purchased from the Pasteur Institute of Iran. The cells were maintained in RPMI 1640 medium supplemented with 10% heat-inactivated fetal bovine serum (DNAbiotec, Cat number: DB9723), streptomycin (100 mg/mL), and penicillin (100 U/ml) at 37 °C in a humidified atmosphere with 5% CO2 in air.

MTT assay
The cytotoxic activities of compounds 5a-e and 10a-f were evaluated against the cancer cell lines, and the most potent cytotoxic agents (5a, 5d, and 10f) were additionally examined against the normal cell line, using the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide) colorimetric assay as previously reported [32,33]. The absorbance was read at 570 nm against a reference wavelength of 690 nm, and data were analyzed using GraphPad Prism 8.2.1 software. The inhibition percentage of the compounds was calculated as: inhibition (%) = [(OD of wells treated with 1% DMSO − OD of wells treated with compounds) / OD of wells treated with 1% DMSO] × 100 (OD = absorbance). IC50 values were then calculated by nonlinear regression analysis.

Molecular docking
Docking assessments of 5a, 5d, and 10f were performed using the GOLD docking program according to a previously reported protocol [55,56]. The 3D crystal structure of the PARP10 binding site (PDB ID: 5LX6) was retrieved from the Protein Data Bank (http://www.rcsb.org). The protein structure was prepared using the Discovery Studio client: waters and ligands were removed from 5LX6 and all hydrogens were added. The binding site of the enzyme was defined based on the native ligand veliparib with an 8 Å radius. For validation of the docking, the ChemScore function was chosen for docking of veliparib inside 5LX6. All other options were set as default.
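The inhibition-percentage formula and the nonlinear IC50 regression described in the MTT assay section above can be sketched as follows; the absorbance readings, concentrations, and the four-parameter logistic model used here are illustrative assumptions, not data or methods taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical OD (570 nm) readings: vehicle control (1% DMSO) and wells
# treated with increasing compound concentrations (µM). Illustrative only.
od_control = 1.20
conc = np.array([1, 3, 10, 30, 100, 300], dtype=float)
od_treated = np.array([1.15, 1.05, 0.85, 0.55, 0.30, 0.18])

# Inhibition (%) = (OD_control - OD_treated) / OD_control * 100
inhibition = (od_control - od_treated) / od_control * 100

# Four-parameter logistic curve for the nonlinear IC50 regression
def logistic4(x, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1 + (ic50 / x) ** hill)

popt, _ = curve_fit(logistic4, conc, inhibition,
                    p0=[0, 100, 30, 1], maxfev=10000)
ic50 = popt[2]
print(f"IC50 ≈ {ic50:.1f} µM")
```

With realistic dose-response data the fitted `ic50` is the concentration at the curve's midpoint, which is what a nonlinear-regression package such as GraphPad Prism reports.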
After validation, compounds 5a, 5d, and 10f were sketched using HyperChem software and energy-minimized by the MM1 force field. The same docking procedure was applied for the docking analyses of these compounds with the GOLD docking program. The top-scoring binding poses were used for further analysis. Protein-ligand interactions were analyzed with Discovery Studio Visualizer.

Conclusion
In the quest for effective anticancer agents, the series of quinazolinones 5a-e and dihydroquinazolinones 10a-f were efficiently prepared and characterized. The synthesized compounds were evaluated for anticancer activity against two cell lines, MCF-7 and HCT-116. Most of the compounds, especially 5a, 5d, and 10f, were found to have very good activity against the tested cancer cell lines. Subsequent safety and selectivity assessments of these derivatives against a normal cell line revealed that 5d and 10f had low toxicity against the Hek-293 cell line. The molecular docking studies corroborated the results of the anticancer activity assays and signified the potential of these derivatives as PARP10 inhibitors. As a result, these compounds can be modified further for the development of new anticancer therapeutics.
The Impact of COVID-19 Pandemic and Monthly Income on Female Sexual Behavior

Bagcilar Med Bull. DOI: 10.4274/BMB.galenos.2021.03.036
Evrim Ebru Kovalak, Özlem Karabay Akgül, Tolga Karacan, Özlem Yüksel Aybek, Hakan Güraslan
University of Health Sciences Turkey, İstanbul Bağcılar Training and Research Hospital, Clinic of Obstetrics and Gynecology, İstanbul, Turkey
ORIGINAL RESEARCH
Received: 21.03.2021 Accepted: 18.07.2021

Introduction
COVID-19 is a contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a new type of coronavirus. The World Health Organization (WHO) first declared the presence of the virus on December 31, 2019, following the spread of viral pneumonia cases in Wuhan, China, and declared the COVID-19 outbreak a pandemic on March 11, 2020 (1). SARS-CoV-2 is an RNA virus that primarily affects the respiratory system and is transmitted by large respiratory droplets or direct contact. It can be fatal by causing pneumonia, bronchitis, and SARS (2,3). At the time of writing, 80,805,661 people had been infected and 1,766,726 people had died of the disease worldwide. Moreover, borders were closed, and travel restrictions and lockdowns were imposed. The first coronavirus case in our country was reported on March 10, 2020 (4). The health authorities also implemented strict restrictions, such as self-isolation, the use of masks, and home quarantine, to prevent the spread of the disease.
The disease-related isolation resulted in loneliness, fear of death, and depression. These restrictions also caused the separation of families and partners, and unemployment with loss of income, as observed in previous outbreaks (5,6). Rajkumar (7) reported increased anxiety and depression (experienced by 16-28% of the population) and self-reported stress (reported by 8% of the population) as the most common mental reactions during the COVID-19 pandemic. Sexuality is a significant part of a couple's life, which may affect mental health (8). During the pandemic, self-isolation and social distancing have negatively affected sexual life (9). However, there is still a lack of knowledge regarding the effect of the pandemic and decreased income on female sexual life. This study aimed to evaluate women's sexual behavior during the COVID-19 pandemic in our country.

Materials and Methods
This prospective, observational study was conducted in a tertiary referral hospital between June 27 and July 27, 2020. Approval for the study was obtained from the Ministry of Health (2020.05.25T22.02.17). In addition, ethical approval was obtained from the local ethics committee of our hospital (University of Health Sciences Turkey, İstanbul Bağcılar Training and Research Hospital Clinical Research Ethics Committee, approval number: 2020.07.2.08.109). The study was conducted in accordance with the Declaration of Helsinki and its later amendments. All participants were included after obtaining informed consent. A questionnaire consisting of 13 questions regarding sexual life during the COVID-19 pandemic was administered to 169 women aged 18-45 years who attended the gynecology outpatient clinic for routine check-ups. The questionnaires from the recent studies of Jacob et al. (10) from the UK and Li et al. (11) from China on the same topic were taken as examples. Women with a regular and active sexual life were included.
Questions about the participants' basic characteristics, such as age, education, marital status, monthly income, systemic disease, drug addiction, alcohol consumption, and smoking, were asked. Questions evaluating sexual behavior, such as the relationship with the partner, sexual desire, frequency of intercourse, sexual satisfaction, and fertility desire, were then recorded. The exclusion criteria were as follows: not having a regular partner, COVID-19 positivity, history of cancer, endometriosis, pelvic pain, incontinence, menopause, vaginal atrophy, severe systemic disease (diabetes, hypertension, and coronary artery disease), mental disorders, and pregnancy or lactation. Twenty-nine patients who did not meet the inclusion criteria were excluded, leaving 140 women in the study. Women were divided into two groups: those above and those below the monthly hunger limit. At the time of the study, the monthly hunger limit per person in our country was 2,500 Turkish Liras (TL). The participants in the increased-income group were excluded, since there were only three women in this group, which was insufficient for statistical comparison. The sample size was calculated with the G*Power program, version 3.1. The effect size to determine the sexual activity of the study group was set at 0.26; the analysis with a power (1-β) of 0.8 and an alpha error of 0.05 revealed that 117 patients were required.

Statistical Analysis
Statistical analyses were performed with the Statistical Package for the Social Sciences (SPSS) 20 program (IBM Corp, Armonk, NY, USA). In addition to descriptive statistical methods (mean, standard deviation), Pearson's chi-square test and Fisher's exact test were used to compare categorical variables. A p-value of <0.05 was considered statistically significant.

Results
The mean age of the participants was 32.9±7.74 years, and the mean age of their spouses was 36.73±8.54 years.
Of the participants, 93.6% were married and 52.1% were primary school graduates; 49.3% had a monthly income below 2,500 TL. Regarding monthly income during the pandemic, 84 (60%) participants had a decrease, 53 (37.9%) remained stable, and only 3 (2.1%) had an increase. The rate of living with their parents was 18.6%. Demographic features and the economic status of all participants are shown in Table 1.

During the COVID-19 pandemic, 16.4% of participants had a worsened relationship with their partner. Sexual desire decreased in 32.9% of the participants and remained the same in 58.6%. The frequency of sexual intercourse was unchanged in 55.7% of women, decreased in 33.6%, and increased in 10.7%. Sexual satisfaction decreased in 27.9% of participants, and 25.7% of respondents reported a reduced fertility desire (Table 2).

The study groups were divided into the stable-income and decreased-income groups after excluding the three patients with an increased income. No statistically significant difference was found between the groups regarding the relationship with the partner, frequency of sexual intercourse, or fertility desire (p>0.05). However, the rate of decreased sexual desire was significantly higher in the decreased-income group (40.5%) than in the stable-income group (22.7%) (p=0.03). A statistically significant difference was also found in sexual satisfaction between the income groups: the rate of increased sexual satisfaction was higher in those with a decreased income than in those with a stable income (10.7% vs. 0%, respectively; p=0.04) (Table 3).
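As a cross-check, the sample-size figure quoted in Materials and Methods (effect size 0.26, alpha 0.05, power 0.8, giving 117 required participants) can be reproduced with a noncentral chi-square power calculation; the choice of a 1-degree-of-freedom chi-square test is our assumption, made because it reproduces the reported number.

```python
from math import ceil
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

alpha, power, w = 0.05, 0.80, 0.26   # values reported in the paper
df = 1                               # assumed: 1-df chi-square test

crit = chi2.ppf(1 - alpha, df)       # critical chi-square value at alpha

# Achieved power at sample size n (noncentrality = n * w^2), minus target
def power_gap(n):
    return ncx2.sf(crit, df, n * w ** 2) - power

n_required = brentq(power_gap, 2, 10_000)  # solve power_gap(n) = 0
print(ceil(n_required))
```

The solver lands just above 116, so rounding up gives the 117 participants stated in the paper.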
Discussion
In this study, the relationship with the partner, sexual desire, frequency of intercourse, sexual satisfaction, and fertility desire remained stable in most participants during the COVID-19 pandemic. However, when sexual life was evaluated in relation to monthly income, sexual desire was lower in the decreased-income group. By contrast, there was an increase in sexual satisfaction in the decreased-income group, which may be associated with increased time spent together, less work stress, and fewer social or family obligations.

The social restrictions, decreased monthly income, fear of unemployment, loneliness, and fear of death increased stress, anxiety, and depression during the pandemic. Hamilton and Meston (12) reported that prolonged exposure to such stress reduces sexual desire. Several publications have addressed the effect of the COVID-19 pandemic on sexual function. Karsiyakali et al. (13) conducted a study comprising 1,356 men and women living in large and small cities. They reported decreased sexual desire and intercourse frequency but an increased rate of masturbation during the COVID-19 pandemic; the decreases were more prominent in couples living in large cities than in those living in small cities, which is supposed to be associated with population density and the high number of COVID-19 cases in large cities (13). Ibarra et al. (14) also emphasized that COVID-19 negatively affected sexual behavior. A study conducted with 868 sexually active people in the United Kingdom reported worsening sexual desire, especially in older and single people (10). In another study, conducted in China, 37% of the participants showed a decrease in the frequency of sexual intercourse during the pandemic; age, relationship status with the partner, and sexual desire were found to be closely related to the frequency of intercourse (11).
In a study evaluating sexual function before and after quarantine during the COVID-19 pandemic, in which 764 women from Poland participated, a statistically significant decrease was observed in Female Sexual Function Index (FSFI) scores; the reduction was greater in unemployed women (15). A study from Italy also found that the total FSFI scores of women of reproductive age decreased during the COVID-19 pandemic. In addition, being unable to work from home, having a postgraduate education, and multiparity were reported as independent risk factors for a low FSFI score (3).

Table 3. Comparison of the sexual behavior of the participants whose income level did not change with those whose income level decreased during the COVID-19 pandemic

On the contrary, Hall et al. (16) reported that women's sexual activity increased significantly during intense stress. In another study, Yuksel and Ozgor (17) reported that menstrual irregularities and desire for sexual intercourse increased during the pandemic and that total FSFI scores were higher. Micelli et al. (18) found that the frequency of sexual intercourse did not decrease in the vast majority of Italians (66.4%) before and during the pandemic; our results were also compatible with this study. However, more than a third of their participants decided to postpone having children, which could be associated with economic instability and fear of the impact of COVID-19 infection on pregnancy outcomes. Although the relationship between unemployment and sexual activity has not been clarified, unemployment is associated with an increased risk of depression (19). The unemployment and low income caused by the pandemic also support our results. Overall, controversial results have been reported regarding the effect of the COVID-19 pandemic on sexual function (15-19).
Although our results are consistent with a decrease in sexual desire in the decreased-income group, we may postulate that the increased sexual satisfaction could be related to couples spending more time together at home. The differing study results could be explained by the different reactions of the countries' populations to stress management.

Study Limitations
The present study has some limitations. The number of participants in the study group was relatively small. The strength of our study could be attributed to the face-to-face interview approach, which could lead to more reliable answers. Although the FSFI questionnaire seems to be an objective method to evaluate female sexual dysfunction, not all FSFI parameters were compatible with our study design.

Conclusion
Sexuality is a complex phenomenon affected by a variety of factors. The acute stress caused by the COVID-19 pandemic has negatively affected quality of life and sexuality. In our study, a decreased income level was associated with decreased sexual desire in women. The truth is that there are still many unknowns regarding both COVID-19 and sexuality; therefore, larger prospective studies are needed.

Informed Consent: All participants were included after obtaining informed consent.
Peer-review: Externally and internally peer-reviewed.
The effects of clinoptilolite on piglet and heavy pig production

Abstract
To evaluate the effects of clinoptilolite on piglet and heavy pig production, two separate trials were performed. In the first trial, 40 pigs with an initial body weight of 55 kg were used. Animals were homogeneously allocated to two groups: a control group, traditionally fed, and a clinoptilolite group whose feed contained the additive at 2%. Pigs were slaughtered at about 160 kg body weight. Blood samples were taken to determine blood urea nitrogen (BUN). In the second trial, a total of 116 piglets from 12 litters was used. Six litters were fed, from the 7th day of life, a diet containing clinoptilolite at 2%. According to the dietary treatment of the suckling period, 84 weaned piglets were homogeneously allocated to two groups fed, up to 33 kg body weight, a diet containing or not containing clinoptilolite at 2%. In both trials, daily weight gain, feed intake, and pig health were regularly recorded. The dietary inclusion of clinoptilolite at 2% did not result in any modification of either growth performance or uraemia. Piglets on the clinoptilolite diet showed a significant (P<0.05) improvement in faecal dry matter content. At slaughter, the dietary inclusion of clinoptilolite resulted in a trend towards an improvement in lean-cut yield and in a significant increase (P<0.05) of the ratio between lean and fat cuts. Our data suggest that clinoptilolite does not impair pig growth performance, increases the dry matter content of piglet faeces, and improves the carcass quality of heavy pigs, with particular regard to lean-cut yield and the lean-to-fat cuts ratio.

Introduction
"Zeolites" is a collective term for a group of inorganic crystalline compounds, either natural or synthetic, characterised by ion-exchange capacity at low temperature (<100 °C) and reversibility of the dehydration-hydration process below 250-300 °C (Mumpton and Fishman, 1977).
From a chemical standpoint, zeolites are hydrated alumino-silicates of alkali metals (K and Na) and alkaline earth metals (Ca and Mg) consisting of tetrahedra of SiO4 and AlO4. Owing to their structural properties, zeolites are used in the chemical and oil industries as well as in agriculture (sugar industries) and in animal production. The addition of zeolites either to litter or to animal waste results in a reduction of gaseous emissions from manure (Malagutti et al., 1995; Piersanti, 1995). Zeolites can also improve the technological properties of feedstuffs: they can make pellets more durable and reduce dust production (Melcion, 1995). Zeolites can bind undesirable contaminants of feedstuffs such as mycotoxins and metals, particularly caesium (Ramos and Hernandez, 1997). Owing to their ammonia-binding capacity, zeolites also act as detoxifying agents along the gastrointestinal tract, which can improve swine growth and health. According to Poulsen and Oksbjerg (1995) and to Kyriakis et al. (2000), the results attained by using zeolites are comparable with those obtained by adding certain growth promoters to feed. Intestinal ammonia binding, which corresponds to a reduction of ammonia emissions, also brings obvious environmental benefits (Kyriakis et al., 2000; Theophilou, 2000). Zeolites can also help control the parasites of stored cereals by damaging the arthropod tegument and causing death by dehydration (Contessi, 1995).

The properties of the different zeolites are not constant, depending on the type of mineral (or minerals) and on the presence of other compounds (i.e. feldspar, apatite, calcite). The wide variety of effects ascribable to the use of zeolites in animal feeding (particularly in monogastrics) stimulated the present research, which was aimed at investigating the effect of clinoptilolite, a natural zeolite that has recently been approved by the EU (Reg.
First trial: growing-finishing pigs
Forty Duroc x (Landrace x Large White) barrows, ranging in body weight (BW) from 55 to 160 kg, were used. Pigs were homogeneously allocated to two experimental groups, each containing 4 replications of 5 animals. Each group was made up of 20 pigs fed as follows:
- group A (control), in which pigs received a maize/soybean diet without clinoptilolite addition;
- group B, in which pigs received the same feed as group A but with clinoptilolite added at 2.04% (clinoptilolite thus replaced all the other ingredients at 2%).
The main physical-chemical properties of the zeolite used in the present trials are displayed in Table 1. To meet the pigs' requirements, two different feed formulations were used (from 55 to 100 kg BW and from 110 to 160 kg BW). The percentage composition and chemical analyses of these diets are shown in Tables 2 and 3, respectively. Chemical analyses were performed according to ASPA methods (Martillotti et al., 1987). Feed was offered in liquid form, with a meal-to-water ratio of 1:3. Pigs were fed at the rate of 9% of their metabolic live weight (LW^0.75), up to a maximum of 3.4 kg per head per day. To calculate average daily weight gain, animals were individually weighed at the beginning of the trial, after 89 days, and at the end of the experiment. Pigs were slaughtered at about 160 kg BW. At slaughter, the following data were collected: carcass weight, dressing-out percentage, lean meat yield (by Fat-o-Meater), pH of the thigh at 45 min and 24 h post-mortem, colour of the thigh according to the L*a*b* method (McLaren, 1980) using a Minolta CR-200 colorimeter (pH and colour of the thigh were determined following ASPA directions [1996] on the Semimembranosus muscle), percentages on the right side of the main commercial cuts (ham, shoulder and loin), and the lean-to-fat cuts ratio (right-side dissection was performed according to ASPA methods [1991]).
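The feeding rule stated above (9% of metabolic live weight LW^0.75, capped at 3.4 kg per head per day) can be sketched as a small helper function; the example weights are illustrative, not individual animals from the trial.

```python
def daily_feed_kg(live_weight_kg, rate=0.09, cap=3.4):
    """Daily feed allowance: 9% of metabolic live weight (LW^0.75),
    up to a maximum of 3.4 kg per head per day."""
    return min(rate * live_weight_kg ** 0.75, cap)

# Allowance over the trial's weight range (55 to 160 kg BW)
for lw in (55, 100, 160):
    print(f"{lw} kg -> {daily_feed_kg(lw):.2f} kg/d")
```

At 55 kg this gives about 1.82 kg/d; the cap only binds near the end of fattening, when 9% of LW^0.75 exceeds 3.4 kg.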
At the beginning of the trial, on day 89 of the trial, and at the end of the experiment, after an 18-hour fast, blood samples were taken from the jugular vein of 10 pigs per group to determine blood urea nitrogen (BUN) using a Boehringer Mannheim kit.

Second trial: piglets from birth to post-weaning
116 piglets from 12 litters were used. Sows were homogeneously chosen with respect to farrowing order (second-parity dams) and genetics (Duroc x Large White). Piglets were weighed at birth. Litters were equalised to assign to each sow the same number of piglets of the same sex. Litters were allotted to two experimental groups:
- group A (control), in which the 6 litters received a feed without clinoptilolite addition;
- group B, in which the 6 litters received, from the 7th day of life, the same feed with clinoptilolite added at 2% (Noblet et al., 1989).
After weaning, according to the dietary treatment of the previous phase, 84 piglets were placed in 14 post-weaning cages (7 per thesis), each containing 6 piglets. During this period a pelleted starter feed was offered. The chemical composition and analyses of the feeds are shown in Tables 4 and 5, respectively. Piglets were individually monitored for: weight at birth, weight at weaning, weight at the end of the trial, average daily gain during each phase, mortality (and causes) during the first 7 days of life, mortality at weaning, and mortality in the post-weaning period; diarrhoea incidence during the different phases and diarrhoea score (severe, medium and mild intensity; to perform this classification, the number of pigs in each cage and faeces consistency were used); feed intake and feed conversion rate. During the post-weaning period, 4 samples of faeces per thesis (24 samples in all) were collected every two weeks to determine the dry matter percentage by freeze-drying.

All the data obtained in both trials were submitted to analysis of variance using the model y_ij = μ + α_i + ε_ij, where: y_ij = dependent variable; α_i = effect of the diet (i = 2); ε_ij = error contribution.
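As a sketch, the one-way model above can be fitted with a standard one-way ANOVA; the daily-gain values below are simulated purely for illustration and are not the trial's data.

```python
import numpy as np
from scipy.stats import f_oneway

# Simulated daily weight gains (g/d) for the one-way model
# y_ij = mu + alpha_i + eps_ij, with i indexing the two diets
rng = np.random.default_rng(42)
control = rng.normal(720, 40, size=19)          # group A (no clinoptilolite)
clinoptilolite = rng.normal(722, 40, size=19)   # group B (2% clinoptilolite)

# One-way ANOVA on the diet factor
f_stat, p_value = f_oneway(control, clinoptilolite)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

With only two groups, the one-way ANOVA F-test is equivalent to a two-sample t-test (F = t^2), so either would support the "no significant diet effect" conclusion the authors draw.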
To compare data on mortality and diarrhoea score, the chi-square test was used.

First trial

Due to foot diseases that were not experiment-related, two pigs (one from each group) were removed from the trial. Consequently, data on breeding performances refer to 19 pigs per thesis (table 6). Growing parameters were similar between the two experimental groups. At the end of the growing-fattening period (data collection stopped when one half of the animals attained the required body weight for slaughtering), average body weight was, in fact, 149.4 kg for control pigs and 149.5 kg for clinoptilolite-fed animals. As a consequence of the similarity in growth rhythm, daily weight gains were also similar; feed intake and feed conversion did not differ significantly either. The dietary addition of clinoptilolite did not result in any modification of blood urea nitrogen. The incorporation of clinoptilolite in the diet at the rate of 2% resulted in a slight reduction of crude protein and energy intake (0.33 vs. 0.34 kg/d and 7.57 vs. 7.75 Mcal DE/d). Considering that growing parameters were similar between the two groups, an improvement (even if not significant) of protein and energy efficiencies (0.62 vs. 0.60 kg/kg and 14.05 vs. 13.76 Mcal DE/kg, respectively) could be supposed; this improvement may also correspond to the observed difference in meat yield of the carcasses (table 7). Despite similar dressing-out percentages, pigs receiving clinoptilolite showed, in fact, a tendency (P<0.1) towards improved lean cuts (ham, loin and shoulder) yield. This difference became significant (P<0.05) for the lean to fat cuts ratio (1.83 vs. 1.99). This result agrees with our previous findings (Parisini et al., 1993) on the use of another clay (sepiolite) in heavy pig feeding.
Data concerning meat pH and colour (expressed as lightness, hue and chroma) did not differ between the groups and fell within the normal range. Besides the positive effect on the lean to fat cuts ratio, it must be highlighted that the incorporation in the feed of a compound with no caloric value, such as clinoptilolite, did not reduce energy efficiency at all. This consideration allows the theoretical recovery and attribution to clinoptilolite of the same amount of digestible and net energy as that lost by adding an energy-less material. This result agrees with the findings of Monetti et al. (1996) and Malagutti et al. (1997) who, in balance trials using the same or double the quantity of zeolite, found a slight and not significant improvement in organic matter digestibility. The absence of a reduction of blood urea nitrogen in clinoptilolite-treated pigs could indicate that, when the dietary protein level is adequate (i.e. not excessive), the amount of ammonia produced and absorbed in the intestine does not contribute to raising BUN.

Second trial

Data concerning the productive performances of suckling piglets are shown in table 8: no significant differences were detectable between groups. The mortality rate of newborns fell, in both groups, within the standard (about 10%) and was mainly due to crushing of piglets. Regardless of dietary treatment, and probably due to the high hygienic conditions, diarrhoea occurrences were low in both groups. Growing parameters of piglets in the post-weaning phase (8-30 kg BW) are shown in table 9. As already observed in the previous phase, during this second period no significant differences were appreciable between groups either. On the whole, growing parameters, regarding either daily weight gain (547 vs. 549 g/d) or feed conversion rate (1.78 vs. 1.74), were highly satisfactory. These results agree with the findings obtained by Tassinari et al.
(1999) on piglets of the same weight as those used in the present trials and receiving zeolite at 2%. Besides the absence of mortality, diarrhoea incidence was also low in the post-weaning period; even if the data were not significantly different, piglets receiving clinoptilolite had fewer diarrhoea occurrences (13 vs. 21 cases on the whole, with a total absence of severe diarrhoea in the treated group). These results agree with the findings of Kyriakis, Poulsen and Oksbjerg (1995); this fact could be related to the intestinal ammonia-binding properties of clinoptilolite, which result in less damage to the mucosa surface. Lower mucosa damage could also correspond to a lower cellular turnover, resulting in a sparing of nutrients which become available for the growing process. In accordance with the previous considerations on piglets' health, faecal dry matter content was significantly higher (P<0.05) in clinoptilolite-fed animals (33.27% vs. 31.21%); this result agrees with the findings obtained in digestibility trials on growing pigs (70-100 kg BW) by Monetti et al. (1996) and by Malagutti et al. (1997).

Conclusions

The inclusion of clinoptilolite at 2% in pig diets did not alter feed intake. The addition of clinoptilolite did not influence growing parameters (daily weight gain and feed conversion rate) over the whole productive cycle of pigs, from birth to slaughter. At slaughtering, treated pigs showed a higher percentage of valuable cuts and an improvement in the lean to fat cuts ratio. The significantly higher dry matter content of faeces observed during the post-weaning period could indicate a positive role of this zeolite against the intestinal pathologies that characterize this delicate phase.
From an economic standpoint, therefore, the use of clinoptilolite can result both in a reduction of the cost of the feed (notably during the first phases of breeding, when ingredients are particularly expensive) and in an improvement of carcass quality (higher lean cuts yield), without inducing any modification of the qualitative traits of the meat (pH and colour). The authors acknowledge Silver & Baryte Ores Mining Co., Athens (Greece), for the kind supply of clinoptilolite.
A Study of Penalties Levied on Various Banks Operating in the Co-Operative Sector by the Reserve Bank of India

This study analyses the various reasons for penalties levied by the Reserve Bank of India on banks operating in the co-operative sector in India. The analysis is based on a unique database of 100 financial penalties imposed from January 2023 to August 2023 on almost 100 banks, including PSU banks, small finance banks, private sector banks and co-operative banks. It is evident from the data how co-operative banking institutions neglected the required due diligence on various statutory or regulatory guidelines and attracted penalties from RBI. We also demonstrate the various reasons behind the imposition of penalties on the co-operative banking sector.

Introduction:

RBI is the Central Bank of India. Among its other functions, it also works as the regulator of the banking industry, and as every regulator likes to have compliant entities (those it regulates), RBI likes to have a compliant banking industry in India. Every regulator draws its own lines of regulation on the basis not only of local conditions but also of global scenarios, and not only on the basis of actual instances that have occurred in India or globally but also by foreseeing different possibilities. After the instances of Yes Bank and PMC Bank, RBI initiated many steps to enhance the security of the banking industry by adding further regulation across the entire banking field. RBI publishes the details of the penalties it levies, as and when it penalizes a bank on the basis of its own or NABARD's observations from the bank's yearly, special or snap audits. This is an indicator for the entire banking fraternity of the reasons for which RBI has penalized banks, and it also gives the banking fraternity a chance to initiate appropriate measures to avoid the recurrence of such lapses in their own institutions. The whole banking fraternity (including those penalized) is thus able to institute
proper mechanisms to contain the risk of penalty in the areas in which RBI has penalized some bank. It is true here that "prevention is better than cure", though the banks already penalized have to go for the cure. However, despite the brighter points discussed above, there is a darker side: the co-operative banking sector feels that it is penalized by RBI disproportionately, as the number of co-operative banks appearing in the lists is high vis-à-vis other banks (nationalized, private sector, small finance and Indian arms of foreign banks).

Review of Literature:

Banking regulation and supervision differ between jurisdictions. Previous research such as Barth et al. (2013) has documented the difficulty in measuring and comparing these differences. In the process of banking regulation, the emphasis is on ensuring that banks comply with existing laws. Supervision, on the other hand, is the regular examination of these regulated banks, and its effectiveness is relative to the powers it receives through regulation. Bank regulation and supervision are important because they impact a bank's risk-taking appetite. A number of studies have examined the relationship between supervisory intervention and supervisory performance. Berger et al. (2000) showed that supervisory performance is improved in cases of intense bank supervision that encourages collecting more precise information about bank performance. Delis and Staikouras (2011) studied the impact of supervision on risk taking and concluded that there is an impact when the supervision is intense. However, Barth et al. (2008) examine a sample of banks operating in over 100 countries and conclude that there is no systematic relationship between a greater level of supervisory authority and bank stability and performance. This suggests that country-specific characteristics have an impact on the effectiveness of the supervisory authority.
Scope of the Study: The study has the following scope:
• The study could suggest measures for co-operative banks to avoid future penalties.
• The study may help the government in creating and implementing new strategies to control future penalties.

Objectives of the Study:
• To study the various reasons behind the penalties imposed by RBI on banks operating in the co-operative sector in India.
• To help co-operative banks and the government create new strategies or policies to control the penalties.

Research Methodology: The study was conducted not only to examine the reasons for penalties imposed on co-operative banks by RBI but also to find the root cause of the issues. Only secondary data has been used.

Conclusion & Recommendation

Though the data show a tilt against the co-operative banking sector, one should not forget that, since the nationalization of many banks and the establishment of RBI as regulator of the banking industry, nationalized/public sector banks have been under its control and have for years been practised in the various regulatory and statutory guidelines, with much better international and national exposure, due to which their non-compliance with regulatory or statutory guidelines is less frequent. Though private sector and small finance banks are new-generation entrants in the field, these banks have been under the control of RBI since their inception. Moreover, these banks are mostly technology-driven and staffed with banking expertise, well acquainted with the various regulatory stipulations, regulations and statutory guidelines applicable to their functional areas in banking.
Over the years, public sector nationalized banks have laid down their operational manuals using various SOPs (standard operating procedures); staff are not only guided by such SOP manuals but also expected to follow them, and any failure in following the SOP manual which creates an issue results in staff accountability. Private sector banks and small finance banks, the new-generation banks, copy the module followed by public sector/nationalized banks. As a result, these banks appear comparatively less often in the penalty list. Another factor is that these banks have the monetary power to acquire the best technology and infrastructure, as well as to hire sufficient and knowledgeable staff. In contrast, banks operating in the co-operative sector lag behind in introducing suitable technology, which is of course costly for them, and have their own restrictions in hiring sufficient and knowledgeable staff, as most co-operative banks work with very few branches and have less money power. There is no comparison of co-operative banks with banks like PSU banks, small finance banks, private sector banks or Indian arms of foreign banks. Due to their size, capital and reserves, those banks are less often penalized for exposure norms and the like. Also, the names of such banks do not appear in penalties for director-related loans, or loans to parties where directors stand as guarantors, because their boards of directors are not drawn from specific areas/groups/communities, a practice which is rampant in co-operative banks, and those directors are well versed with such norms. Though co-operative banks have also been in the market for many years, a few for a century or nearly so, the banks of the co-operative sector were brought under RBI's control only recently, by amending the Banking Regulation Act, 1949 on 29 Sept.
2020, an amendment made applicable retrospectively from 29 June 2020. As such, co-operative banks that are not well versed with RBI's controlling guidelines were suddenly burdened by this change. Until then, the expertise of co-operative bankers was oriented to local needs, to the co-operative acts, and to the rules and regulations brought in by NABARD; they are newer to RBI's guidelines, so changing the system abruptly is somewhat challenging, as the customer base of co-operative banking is dominated by semi-urban and rural areas, where changing customers' mind-set is itself a challenge. However, an even greater challenge is changing the mind-set of existing staff and boards of directors, while finding new experienced staff is yet another challenge. Changing customers' mind-set, and changing an overall way of working that has run for years together, creates in the minds of co-operative banks a fear of losing the customer base established on the basis of years of practice. This creates a gap between the co-operative banking fraternity's mind-set and RBI's expectations of the banking sector, due to which they now face issues in complying with RBI's rules and regulations. The difference arises because RBI, as the controlling bank of the banking sector, has knowledge not only of the national economy but also of international economies; RBI is thereby able to bring international best banking practices into the Indian banking sector. RBI has likewise learnt a lot from the failures of various banks in the past and has implemented various measures to prevent the recurrence of such instances. And as co-operative banks were brought under RBI control only recently, they are not well versed with RBI's rules, regulations and expectations, and the dual control of the co-operative acts and RBI regulations results in a dual path for co-operative banks.
Also, changes in every aspect of banking require retention of existing business, satisfying existing customers, adoption of new technologies and, overall, educating not only the board of directors and bank staff but also customers, all of which require adequate resources, funds, etc. The structure of many co-operative banks is a big stumbling block, as control lies in the hands of directors dominantly from one community or one group; in some cases, though the bank has a board of directors, it is one-person oriented or driven, and thus the big customers are also from a specific community or group, or many are indirectly related to the directors by virtue of the local ambience. Perusal of the RBI penalty list enumerates the main factors behind the penalties, set out below; it gives a picture of the extent to which RBI is penalizing banks and for what reasons, reflecting the spectrum of issues as well as a pan-India picture of the banking system and the penalties levied by RBI. By the Banking Regulation Act, 1949, RBI is vested with various controlling and supervising powers, and RBI is expected to utilize them judiciously. Before penalizing any bank, RBI issues a notice to the bank narrating the instances of non-compliance noticed by it, or by NABARD in its audit of that bank. After the notice, that particular bank has a chance to represent its case personally through its functionaries, and documentary submission is also permissible. Only if RBI is not satisfied by the bank's personal appearance and documentary submission does it levy a penalty.
The list shows different quanta of penalty for the same reason. It may be concluded that the difference is due to the number and gravity of the instances, which differ from bank to bank. It may also be noted that a bank's financial position cannot be judged by virtue of a penalty: whenever RBI feels that some bank's financial position is not good, it asks the bank to stop its operations or, in some extreme cases, cancels the licence. Now, what are the reasons for such penalties? True, the main reason is non-compliance with RBI's rules, regulations and guidelines. But if we try to locate the exact reason why banks fail to comply with RBI's rules, regulations or guidelines, we find that the major reason is nothing but an inability to determine one's own risk appetite, and compliance failure.

Risk: In general, when we look at the term risk in banking, it is nothing but "a potential loss to the bank due to the occurrence of a particular event". Such events are not only the probability of advances turning into NPAs, cash theft or burglary in a branch or ATM or of cash-in-transit, and frauds, but nowadays also include data theft, cyber-attack, suspicious transactions, utilization of bank channels for terror funding, etc. And as the years go by and technology is upgraded, newer and newer aspects come to light. Risk has to be determined by looking at the known risk-prone areas and the probability of potential loss to the bank due to the occurrence of any unwanted event. However, risk is not generated only in known areas but may come from new and still unknown areas too; the bank should look at probabilities and logic to determine such risk as well. One such risk, unexpected and perhaps unknown to some establishments, is the RBI penalty itself, as it has not only a financial effect but also a reputational effect, and in some cases the bank's very existence may be affected.
Compliance risk, operational risk, credit risk, market risk, country risk, systemic risk and reputational risk are a few of the types of risk appearing in banking. However, the main ingredients of risk are inherent risk and control risk: any function in banking carries inherent risk; when we put controls in place to mitigate it, some residual risk still remains, along with a control risk which may arise from failure of the controls applied. Not assessing one's own risk appetite comes into the picture when banks are penalized for exposure limits, credit lacunae or other simply avoidable reasons. RBI has issued preventive guidelines, for example on what fraction of capital/NDTL a bank can lend to an individual borrower or to a group firm. Here RBI is guiding the banks about their own risk appetite and has capped the exposure arithmetically on the basis of either capital or NDTL, so that, knowing its own risk appetite, a bank can derive its financial limit for an individual borrower or group concern. This is one example of risk mitigation. RBI has also restricted banks' exposure to unsecured loans, along with the permissible repayment period for such loans; again, this is a risk appetite defined by RBI. In case of failure in the various areas of lending exposure, the exposure limits must be considered first, then risk rating, availability of securities, etc. This is applicable to non-fund-based exposures like BGs/LCs too, where certain guidelines are also laid down to avoid unforeseen future liabilities, or in banking parlance contingent liabilities. Non-classification of NPAs (non-performing assets) also has a financial impact on any banking institution, as an NPA not only restricts the booking of accrued but unrecovered interest/income but also requires a provision out of earned profit. As such, correct NPA classification is a must.
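The arithmetic exposure ceiling described above, a prescribed fraction of capital or NDTL, can be sketched as follows. The fraction used here is purely illustrative; RBI prescribes the actual percentages for each exposure category:

```python
def single_borrower_ceiling(base_amount, limit_fraction):
    """Exposure ceiling as a fraction of a base figure (capital or NDTL).
    limit_fraction is hypothetical; RBI sets the real value."""
    return limit_fraction * base_amount

def within_limit(proposed_exposure, base_amount, limit_fraction):
    """True if a proposed exposure stays inside the derived ceiling."""
    return proposed_exposure <= single_borrower_ceiling(base_amount, limit_fraction)
```

A bank can thus mechanically screen each proposal against its derived limit before any judgement of risk rating or securities.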
Though the above examples are from the asset side, there are a few risks which any banker needs to understand that come from the liability side. For example, concentration risk of deposits involves not only the concentration of large-value deposits and of high-interest-rate deposits, but also a majority of deposits maturing on a certain date or in a certain period. Another example is offering a high rate of interest to attract deposits while at the same time allowing interest rate deviations to existing borrowers to sustain the business; both practices lead to risk. Another major risk is asset-liability mismatch, i.e. a mismatch between loan repayments and deposit maturity periods and amounts, for which RBI stipulates the maintenance of CRR (Cash Reserve Ratio) and SLR (Statutory Liquidity Ratio). From these few examples one can understand the risk involved in banking; with changing times, risk is dynamic and changing very fast.

Compliance: In simple language, compliance is nothing but complying with all prudential expectations and the various rules and regulations brought out by any applicable law of the country, the various regulators and statutory expectations. As such, compliance risk is "the risk of legal or regulatory sanction, material financial loss or loss of reputation a bank may suffer as a result of its failure to comply with laws, regulations, rules and codes of conduct, etc., applicable to its activity and to the banking industry as a whole". While compliance risk arises out of non-compliance with the applicable laws of the country and the various regulatory norms, statutory rules, regulations and expectations, a few other risks, such as credit risk, country risk and systemic risk, also arise from non-compliance with safeguards applicable to those specific areas.
As such, risk and compliance go hand in hand, and this may be summarised as: failure in any compliance can turn into a risk, and risk also arises from non-compliance. If there is a compliance failure in any bank, the result is an increase not only in risk in the areas where compliance failed but also in the risk of financial loss due to any penalty RBI may levy, which is not only a financial loss but also a business risk and a reputational risk, the latter being an inherent consequence of an RBI penalty. So compliance is a major function in containing RBI penalties. Compliance is the major issue with the majority of co-operative banks, which are lacking somewhere in the compliance function. There may be some reasons which have not come into the picture of late, but there is the possibility of such instances happening and being treated as non-compliance by RBI. For example, RBI penalized Nutan Nagarik Sahakari Bank Ltd., Ahmedabad, Rs 26 lakhs; among other reasons, one reason for the penalty was the issuance of debit cards to CC customers. Until then, that bank had not realized that issuing a debit card to a CC account holder is a non-compliance, and in the recent past no other bank had been penalized for this reason. Looking at this scenario, two different views come forward:
1. Other banks realized that issuing debit cards to CC account holders is a deviation from RBI guidelines.
2. The functionaries of Nutan Nagarik Sahakari Bank Ltd., Ahmedabad, may have been of the opinion that their issuance of debit cards to CC account holders was correct, as no bank had yet been penalized by RBI for this reason.
The overall opinion of the co-operative fraternity is that, until a few years back, they were complying with the various regulatory and statutory rules and regulations without having a separate compliance function in place. However, though this is true, it is a piecemeal approach, and for it to work every functionary should be aware of the regulatory as well as statutory rules and regulations in letter and spirit, not only theoretically. Banking is an area which can only be run on the basis of the various regulatory and statutory guidelines, rules and regulations together with knowledge of the industry, and not on the basis of logic mixed with paltry knowledge of the industry. Moreover, banks should not take a piecemeal approach but a dedicated approach to becoming compliant. It is an ongoing process, and no one should show laxity of approach when looking at compliance.

Interest Payment Overdue: Another reason which appears frequently in the list is failure to pay interest on overdue deposits, and non-payment of interest in individual proprietorship current accounts where the account holder/proprietor has died: banks failed to pay the applicable interest on balance amounts lying in the current accounts of deceased individual depositors/sole proprietorship concerns. Though these are reasons behind penalties to many banks, such lapses still occur, showing that bankers are not vigilant enough about such simple guidelines. There may be lacunae in the software systems used by banks, but this shows complacency of the staff in routing such transactions without rechecking them. Another major reason found is failure to maintain CRR and SLR; many banks have faced the heat of penalties for this simple reason. Again, it is human ignorance or human error. The secondary data has been taken from RBI notifications, various reports, working papers and government publications.
Among the entries in the penalty list were, for example:
- failure to ensure that customers are not contacted after 7 pm and before 7 am (Kotak Mahindra Bank Limited; penalty 395.00);
- failure to submit data to a CIC on a regular basis, or not submitting it at all, or data whose integrity or quality is not up to the mark;
- sanctioning/renewal of loans to directors or relatives of directors (Baran Nagrik Sahkari Bank Ltd., Baran, Rajasthan; penalty 2.00);
- levying penal charges in certain accounts for late payment of credit card dues even though the customers had paid the dues by the due date through third-party platforms.

References
1. Acharya, V.V., Engle, R., Richardson, M., 2012. Capital shortfall: A new approach to ranking and regulating systemic risks. The American Economic Review 102 (3), 59-64.
2. Acharya, V.V., Pedersen, L.H., Philippon, T., Richardson, M.P., 2010. Measuring systemic risk. Working Paper. New York University.
3. Bank for International Settlements, 2012. Core principles for effective banking supervision. ISBN 92-9131-146-4.
4. Bank for International Settlements, 2013. Global systemically important banks: Assessment methodology and the additional loss absorbency requirement. ISBN 92-9131-947-3.
Intervention effect of encouraging mental and programmed nursing of patients in interventional operating room on their compliance and bad moods

BACKGROUND
Patients' lack of correct understanding of cardiovascular disease and interventional therapy is often accompanied by varying degrees of fear, depression and anxiety. Negative emotion affects the hemodynamic fluctuations of patients undergoing interventional surgery, which is not conducive to the smooth and safe conduct of the operation. Therefore, it is very important to implement effective nursing intervention in the operating room.

AIM
To explore the intervention effect of motivational psychological nursing combined with programmed nursing on the compliance and bad moods of patients in the interventional operating room.

METHODS
A total of 98 patients in the interventional operating room of our hospital from October 2019 to March 2021 were randomly divided into a study group (n = 49) and a control group (n = 49). The control group received routine nursing, while the study group received motivational psychological nursing combined with programmed nursing on the basis of the control group. Rehabilitation compliance, Positive and Negative Affect Schedule scores for bad mood, Simplified Coping Style Questionnaire scores for coping style, and satisfaction with the intervention were recorded for the two groups before and after intervention.

RESULTS
The rehabilitation compliance of the study group (95.92%) was higher than that of the control group (81.63%) (P < 0.05). After intervention, the scores of upset, fear, irritability and tension in the study group were lower than those in the control group (P < 0.05). After intervention, the positive coping score in the study group was higher than that in the control group, while the negative coping score in the study group was lower than that in the control group (P < 0.05).
The intervention satisfaction of the study group (93.88%) was higher than that of the control group (79.59%) (P < 0.05).

CONCLUSION
Motivational psychological nursing combined with programmed nursing can improve rehabilitation compliance and alleviate bad moods. In addition, it can change patients' coping style towards the disease, and the patients are more satisfied with the nursing work.

INTRODUCTION
The interventional operating room is an important department for the clinical treatment of cardiovascular and other diseases. These patients' conditions are serious and progress rapidly; some patients even have a sense of near-death, and patients lack a correct understanding of their own diseases and of interventional therapy. As a consequence, their condition is often accompanied by varying degrees of fear, depression and anxiety [1-3]. Negative emotion can affect the hemodynamic fluctuations of patients undergoing interventional surgery, which is not conducive to the smooth and safe conduct of the operation [4,5]. It is therefore important that effective nursing interventions be implemented in the operating room. Motivational psychological nursing mainly stimulates the internal potential and strength of patients through positive incentives; adopted for disease nursing intervention, it helps patients alleviate negative emotions through self-regulation and accept interventional therapy with a relaxed and positive attitude [6]. Programmed nursing formulates a series of nursing programs according to the characteristics of the interventional operating room; it is intended to ensure the standardization, organization and rationality of nursing measures, to avoid blindness and arbitrariness in nursing intervention, and to place more emphasis on nursing responsibility so as to provide quality nursing services for patients [7].
However, there are at present few systematic studies on the value of programmed nursing combined with motivational psychological nursing intervention. As a consequence, this study selected 98 patients in the operating room of our hospital to explore the application value of the above combined intervention program.

General data
A total of 98 patients in the interventional operating room of our hospital from October 2019 to March 2021 were selected. The inclusion criteria were as follows: (1) All patients were in the department of cardiology; (2) Patients were to be treated with PCI (percutaneous coronary intervention); (3) Patients gave informed consent to this study; and (4) Patients had good compliance and could cooperate to complete the investigation. The exclusion criteria were: (1) Patients with speech communication disorders, hearing impairment or mental system disorders; (2) Patients with malignant tumors; (3) Patients with severe systemic diseases; (4) Patients with an allergic constitution; and (5) Patients with a history of alcohol or drug dependence. According to a simple random number table, the patients were divided into a study group (n = 49) and a control group (n = 49). In the study group there were 26 males and 23 females, aged 44-79 years, with an average of (63.54 ± 10.37) years; education level: primary school and below, 13 cases; junior middle school and senior high school, 26 cases; college and above, 10 cases. In the control group there were 29 males and 20 females, aged 41 to 79 years, with an average of (65.04 ± 11.32) years; education level: primary school and below, 16 cases; junior middle school and senior high school, 25 cases; junior college and above, 8 cases. The clinical data of the two groups were balanced and comparable (P > 0.05), and this study was approved by the Ethics Committee of our hospital.
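With two groups of 49 patients each, a categorical outcome such as rehabilitation compliance (95.92% vs 81.63%, i.e. 47 vs 40 compliant patients) reduces to a 2 x 2 contingency comparison. A sketch of the standard Pearson chi-square statistic (without continuity correction; not necessarily the authors' exact computation) is:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Study group: 47 compliant, 2 not; control group: 40 compliant, 9 not.
# The statistic (about 5.02) exceeds the 3.84 critical value at the
# 0.05 level, consistent with a reported P < 0.05.
```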
Methods The control group received routine nursing care, including assisting patients with relevant examinations before the operation, preparing surgical materials, providing health education, and closely monitoring vital signs during the operation. On the basis of the control group's care, the study group received motivational psychological nursing combined with programmed nursing. Programmed nursing: (1) preoperative intervention: grasping the patient's condition and other information through communication, and explaining in detail the safety and necessity of interventional therapy; (2) preparing the materials needed for interventional therapy and stocking sodium, nitroglycerin, dopamine, etc., according to the risks of interventional therapy, with the drug names and doses specified; (3) carrying out psychological intervention for conscious patients to alleviate their tension and fear, arranging for a nurse to accompany them when entering the operating room, and avoiding leaving patients to wait alone in the interventional operating room; (4) properly adjusting the humidity and temperature of the interventional operating room, and playing soothing, soft music according to patients' preferences; (5) intraoperative intervention: completing preoperative preparation, including double vena cava needle placement and skin preparation, connecting the monitor, closely monitoring the patient's ECG and blood pressure, and informing the doctor to take corresponding measures in case of sweating or convulsions; and (6) postoperative intervention: promptly informing the patient that the operation was successful so that they do not worry excessively about the treatment effect, closely observing the electrocardiogram and vital signs and keeping detailed records, keeping the limb on the operative side straight and immobilized for 4-6 h after the operation, and instructing patients to stay in bed for 24 h. 
Nurses also checked the puncture site for bleeding, reported any bleeding to the doctor in time, and implemented the corresponding nursing plan. They monitored the limb temperature and arterial pulsation on the operative side and told patients to drink more water, more than 1500 mL within one day, ideally without feeling abdominal distension after drinking, so as to promote excretion of the contrast medium as soon as possible. Motivational psychological nursing: (1) Preoperative motivational psychological nursing: patiently listening to patients' cognitive and psychological needs regarding their own disease and interventional therapy, responding positively through physical gestures such as nodding while listening, and avoiding excessive summarizing and questioning. Encouraging language was used to respond to patients during listening, to make them feel cared for and respected and to establish a harmonious, trusting relationship; (2) adopting easy-to-understand language with a moderate speaking speed and a calm tone, explaining the interventional operation method, the anesthesia scheme and its necessity to patients, and patiently answering their questions; (3) intraoperative motivational psychological nursing: because patients are unfamiliar with the interventional treatment plan and the interventional operating room, they are prone to strong psychological stress reactions after entering it, such as trembling speech, rapid heartbeat and pale complexion. When nursing staff noticed such manifestations, they appeased and encouraged patients through body movements, language and eye contact. 
Appropriate positive language cues were adopted to help patients build confidence in treatment [8]; and (4) postoperative motivational psychological nursing: patiently and carefully listening to patients' experience of the operation and anesthesia and their postoperative rehabilitation nursing needs. In addition, nurses worked to alleviate patients' negative emotions, build their confidence in rehabilitation, and urge them to cooperate actively with postoperative rehabilitation nursing by describing past cases with ideal outcomes. Observation index Rehabilitation compliance of the two groups was evaluated by a self-rated scale covering standardized drug use, healthy diet, and regular work and rest, for a total of 100 points: 90-100 points indicated complete compliance, 70-89 points basic compliance, and less than 70 points non-compliance. Rehabilitation compliance = (complete compliance + basic compliance)/total number of cases × 100%. The negative emotions of the two groups before and after the intervention were evaluated according to the Positive and Negative Affect Schedule (PANAS) negative psychological questionnaire. Five dimensions, including upset, irritability, nervousness and fear, were selected; each dimension was scored from 1 to 5 points, with lower scores being better. The coping styles of the two groups before and after the intervention were evaluated with the Simplified Coping Style Questionnaire (SCSQ), which covers positive coping and negative coping; a four-level scoring system was adopted, divided into "often adopted", "sometimes adopted", "occasionally adopted" and "not adopted" [9]. 
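The compliance scoring rules above are simple threshold arithmetic; a minimal sketch (the function names are ours, not from the paper):

```python
def classify_compliance(score):
    """Map a 0-100 self-rated rehabilitation score to the categories defined in the text."""
    if score >= 90:
        return "complete"
    if score >= 70:
        return "basic"
    return "non-compliant"

def compliance_rate(scores):
    """Rehabilitation compliance = (complete + basic) / total number of cases x 100%."""
    compliant = sum(classify_compliance(s) != "non-compliant" for s in scores)
    return 100.0 * compliant / len(scores)

# illustrative scores for four hypothetical patients
rate = compliance_rate([95, 75, 60, 85])  # 3 of 4 compliant -> 75.0
```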
There were 19 items evaluated by the Newcastle Nursing Satisfaction Scale, with a full score of 95: > 85 indicated very satisfied, 67-85 generally satisfied, and < 67 dissatisfied; satisfaction = (generally satisfied + very satisfied)/total number of cases in the group × 100% [10]. Statistical analysis The data were analyzed with SPSS 22.0. Measurement data were expressed as mean ± SD and compared with the t test; count data were expressed as n (%) and compared with the χ2 test. P < 0.05 indicated a statistically significant difference. RESULTS Rehabilitation compliance The rehabilitation compliance of the study group (95.92%) was higher than that of the control group (81.63%) (P < 0.05, Table 1). SCSQ score Before the intervention, there was no significant difference between the positive and negative coping scores of the study group (1.35 ± 1.04 and 2.76 ± 0.73) and the control group (1.29 ± 0.97 and 2.80 ± 0.76) (P > 0.05). After the intervention, the positive coping score of the study group (3.03 ± 0.96) was higher than that of the control group (2.21 ± 0.88), whereas the negative coping score of the study group (0.74 ± 0.46) was lower than that of the control group (1.10 ± 0.59) (P < 0.05, Table 3). Intervention satisfaction The intervention satisfaction of the study group (93.88%) was higher than that of the control group (79.59%) (P < 0.05, Table 4). DISCUSSION Patients in the interventional operating room are seriously ill. Because they worry about their disease and therapeutic effect and lack a correct understanding of interventional therapy, they have serious negative emotions, which can affect the efficacy and safety of interventional surgery [11][12][13]. Effective nursing intervention during treatment in the interventional operating room is therefore very important for ensuring the therapeutic effect and promoting a positive outcome of the disease [14,15]. 
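The between-group comparison of compliance rates can be reproduced with a χ2 test; a sketch assuming 47/49 and 40/49 compliant patients, which match the reported 95.92% and 81.63% (SciPy used here for illustration in place of SPSS):

```python
from scipy.stats import chi2_contingency

# rows: study group, control group; columns: compliant, non-compliant
# counts reconstructed from the reported rates (47/49 = 95.92%, 40/49 = 81.63%)
table = [[47, 2], [40, 9]]

# correction=False gives the uncorrected Pearson chi-square statistic
chi2, p, dof, expected = chi2_contingency(table, correction=False)
```

With these counts the uncorrected test gives P < 0.05, consistent with the reported result; note that the Yates continuity-corrected 2 × 2 test would be more conservative.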
Routine nursing focuses only on the treatment of the disease; it pays insufficient attention to the psychological state of patients, and its content is not systematic enough, which reduces the benefit to patients. Compared with routine nursing, programmed nursing provides patients with systematic, professional and standardized nursing measures, helping to avoid nursing errors, ensure patient safety to the maximum extent, and deliver high-quality nursing services [16,17]. Motivational psychological nursing is an important clinical psychological support technique that mainly stimulates the behavioral goals and motivation of the intervention subjects to keep them highly motivated, mobilizing their enthusiasm and giving full play to their inherent potential [18,19]. In motivational psychological nursing, mental motivation enables patients to establish correct health beliefs and alleviate a negative psychological state; goal motivation enhances patients' experience of success and their treatment initiative and enthusiasm; example incentives help patients establish confidence in treatment; and trust incentives reduce patients' psychological pressure and improve compliance [20]. After patients in the interventional operating room received motivational psychological nursing combined with programmed nursing, the rehabilitation compliance of the study group (95.92%) was higher than that of the control group (81.63%), PANAS scores were lower than those of the control group, and the improvement in each dimension of the SCSQ scale was significantly better than in the control group (P < 0.05). This shows that motivational psychological nursing combined with programmed nursing has high application value for interventional operating room patients, helping to alleviate their negative emotions, encourage them to face the disease actively and improve their compliance. 
The main reasons are as follows: (1) Programmed nursing takes nursing and treatment procedures as its basis, comprehensively considers the patient's condition, and formulates a standardized, programmed intervention plan to provide efficient, high-quality nursing services. It can prevent nursing omissions and shorten rescue time, ensuring the effect of treatment. Additionally, preoperative programmed nursing can deepen patients' understanding of interventional therapy, guide them to face their disease and treatment correctly and positively, establish confidence in rehabilitation, and improve nurse-patient communication so that patients feel respected and cared for, accept interventional surgery with a good mentality, and experience less stress from the invasive operation. Intraoperative nursing can ensure the safe and smooth progress of interventional surgery, shorten the treatment time and reduce the risk of complications, while postoperative nursing can provide security for patients and convey the nurses' rigorous and serious attitude; and (2) in motivational psychological nursing, listening to patients' cognition of and attitude toward their disease and treatment plan before the operation allows nurses to understand their emotional state, introduce the interventional operation, its necessity and its purpose in detail, and answer patients' questions. These steps can eliminate patients' confusion and alleviate the fear caused by a lack of correct understanding of the disease and interventional surgery. Positive suggestion can directly alleviate patients' negative psychology, stimulate positive emotion and bodily potential, effectively mobilize self-control and subjective initiative, and help patients face the disease, treatment and rehabilitation in the best psychological state. 
In addition, the intervention satisfaction of the study group (93.88%) was higher than that of the control group (79.59%) (P < 0.05). This further confirms that the combined program of motivational psychological nursing and programmed nursing has high application value for interventional operating room patients, mainly because the nursing intervention program can effectively regulate patients' physical and mental state from a subjective point of view and urge them to face the disease and treatment with a correct attitude, which is conducive to shortening the rehabilitation process and increasing patient satisfaction. CONCLUSION Generally, adopting motivational psychological nursing combined with programmed nursing for patients in the interventional operating room can improve rehabilitation compliance and alleviate negative emotions. In addition, it can improve their coping style toward the disease and increase patient satisfaction with nursing work. Research background Patients' lack of a correct understanding of cardiovascular disease and interventional therapy is often accompanied by varying degrees of fear, depression and anxiety, which increase hemodynamic fluctuation during interventional surgery and are not conducive to its smooth and safe performance.
CDK4/6 inhibitors target SMARCA4-determined cyclin D1 deficiency in hypercalcemic small cell carcinoma of the ovary Inactivating mutations in SMARCA4 (BRG1), a key SWI/SNF chromatin remodelling gene, underlie small cell carcinoma of the ovary, hypercalcemic type (SCCOHT). To reveal its druggable vulnerabilities, we perform kinase-focused RNAi screens and uncover that SMARCA4-deficient SCCOHT cells are highly sensitive to the inhibition of cyclin-dependent kinase 4/6 (CDK4/6). SMARCA4 loss causes profound downregulation of cyclin D1, which limits CDK4/6 kinase activity in SCCOHT cells and leads to in vitro and in vivo susceptibility to CDK4/6 inhibitors. SCCOHT patient tumors are deficient in cyclin D1 yet retain the retinoblastoma-proficient/p16INK4a-deficient profile associated with positive responses to CDK4/6 inhibitors. Thus, our findings indicate that CDK4/6 inhibitors, approved for a breast cancer subtype addicted to CDK4/6 activation, could be repurposed to treat SCCOHT. Moreover, our study suggests a novel paradigm whereby critically low oncogene levels, caused by loss of a driver tumor suppressor, may also be exploited therapeutically. Cancer therapy is shifting towards genotype-based strategies, where signaling pathways activated by oncogenic mutations are targeted by highly selective inhibitors. However, a majority of cancers lack a known druggable driver oncogene or have lost tumor suppressors that are not directly actionable, and thus remain a major clinical challenge. Subunits of SWI/SNF chromatin remodeling complexes are mutated in >20% of human cancers, across a broad range of cancer types, highlighting their important roles in tumorigenesis [1][2][3]. SMARCA4, encoding a SWI/SNF catalytic ATPase subunit, is inactivated by mutations or other mechanisms in different cancers, including non-small cell lung cancer (NSCLC), breast cancer, glioblastoma, and others [4][5][6][7]. 
However, the underlying mechanisms of SMARCA4 loss in driving tumorigenesis are currently unclear. Thus SMARCA4-deficient cancers still lack rationalized and targeted treatment options. We and others uncovered that small cell carcinoma of the ovary, hypercalcemic type (SCCOHT), a rare and often lethal cancer of young women, is almost always caused by biallelic deleterious mutations in SMARCA4, leading to loss of SMARCA4 protein expression [8][9][10][11]. SCCOHT is an aggressive cancer, with long-term survival rates at early stage diagnosis of 33% with current surgical and chemotherapy/radiation treatments 12. In contrast to other ovarian cancer subtypes, SCCOHT has a remarkably simple genome that harbors few mutations or chromosomal alterations 13,14. While SMARCA4 loss is not directly targetable, this monogenic nature of SCCOHT presents an ideal opportunity to uncover druggable targets that are synthetic lethal with SMARCA4 loss through functional genetic screens. Results SCCOHT cells are dependent on CDK4/6 kinase activities. We set out to uncover vulnerabilities in SMARCA4-deficient SCCOHT using synthetic lethal screens, which are powerful tools to identify drug targets and help derive cancer-specific therapies that have minimal side effects in normal tissue 20,21. Appropriate controls for the synthetic lethal screen would be isogenic SCCOHT cell lines engineered to express SMARCA4 or non-transformed cells of the same origin that are SMARCA4-proficient. In line with a previous report 22, we found that forced SMARCA4 expression in the two established SCCOHT cell lines, BIN-67 23 and SCCOHT-1 24, resulted in strong growth inhibition; this was not seen in the non-transformed ovarian epithelial IOSE80 cells that are SMARCA4-proficient (Supplementary Fig. 1). Similar SMARCA4-induced growth inhibition was also observed in COV434 25 (Supplementary Fig. 
1), which was initially designated as an ovarian juvenile granulosa cell tumor line but recently redefined as a SCCOHT cell line 26 and has been verified in an independent study 27 . Therefore, isogenic controls may not be feasible for synthetic lethal screens in settings where SCCOHT cells require SMARCA4 loss for normal proliferation. Although the exact origin of SCCOHT is unknown, the tumors clearly arise from the ovary; thus IOSE80 was chosen as a SMARCA4-proficient screening control. OVCAR4, an ovarian carcinoma-derived cell line which is SMARCA4-proficient, was also included as an additional control. Using our validated short hairpin RNA (shRNA) library targeting the human kinome [28][29][30] , we performed pooled screens to identify kinases whose inhibition is selectively lethal to BIN-67 but not to IOSE80 and OVCAR4 (Fig. 1a). This focused library was chosen because pharmacological inhibitors targeting the kinases identified from our screens, if available, would have the highest chance of clinical implementation. In addition, incomplete gene suppression by RNA interference (RNAi) could provide a more realistic mimic of drug inhibition. Upon screen completion, we analyzed the data using the MAGeCK statistical software package 31 and identified CDK6 as the first ranked gene that was negatively selected in BIN-67 ( Fig. 1b and Supplementary Data 1). In contrast, CDK6 was not significantly selected in IOSE80 and OVCAR4 control cells ( Fig. 1b and Supplementary Data 1). Validating this, CDK6 knockdown caused a marked inhibition of proliferation in all three SCCOHT cell lines BIN-67, SCCOHT-1, and COV434 but did not significantly impact SMARCA4-proficient control cell lines IOSE80 and OVCAR4 (Fig. 1c, d). CDK6 and the closely related CDK4 are activated by forming complexes with D cyclins to phosphorylate and inhibit retinoblastoma (RB) protein, allowing cell cycle progression 16,18 . 
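The MAGeCK ranking used in the screen analysis is built on a robust rank aggregation (α-RRA) score over each gene's shRNA ranks. A minimal sketch of the underlying order-statistic computation (our simplification, omitting MAGeCK's α-cutoff and permutation-based p values), where each gene's shRNAs contribute their ranks normalized by the library size:

```python
from scipy.stats import beta

def rra_score(normalized_ranks):
    """Rho score: min over k of P(k-th order statistic of n uniforms <= r_k).

    Small values mean the gene's shRNAs rank more strongly depleted
    than expected under the uniform null.
    """
    r = sorted(normalized_ranks)
    n = len(r)
    # the k-th order statistic of n uniform variables follows Beta(k, n - k + 1)
    return min(beta.cdf(r[k - 1], k, n - k + 1) for k in range(1, n + 1))

# hypothetical example: 4 shRNAs of a strongly selected gene vs. 4 of a neutral gene
hit = rra_score([0.005, 0.01, 0.02, 0.60])
neutral = rra_score([0.20, 0.40, 0.60, 0.80])
```

A gene like CDK6 in the BIN-67 screen would behave like `hit`: most of its shRNAs sit near the top of the depletion ranking, driving the rho score far below that of a neutral gene.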
Consistent with this, CDK6 knockdown suppressed RB phosphorylation in SCCOHT cells but not in SMARCA4-proficient cells (Fig. 1d), supporting the decrease in proliferation observed. From the same screen analysis, we noted that CDK4 was the second ranked lethal gene in BIN-67 and was also significantly selected in the control cells (Fig. 1b and Supplementary Data 1). In line with this, suppression of CDK4 expression using two independent shRNAs inhibited growth of all cell lines (Fig. 1c). However, RB phosphorylation was suppressed only in SCCOHT cells but not in SMARCA4-proficient controls upon CDK4 knockdown (Fig. 1d). These observations suggest that growth inhibition induced by CDK4 knockdown in SMARCA4-proficient controls is mediated by a kinase-independent activity of CDK4; in contrast, inhibition of CDK4/6 kinase activities in SCCOHT cells is likely to underlie the suppression of proliferation upon CDK4/6 knockdown. Supporting this, reconstitution of wild-type CDK6 but not the kinase-inactive mutant CDK6 D163N rescued the growth inhibition induced by CDK6 knockdown in SCCOHT cells (Fig. 1e, f). Similar results using wild-type CDK4 and the kinase-inactive mutant CDK4 D158N were also obtained in SCCOHT cells (Fig. 1g, h). In contrast, both CDK4 constructs rescued growth inhibition induced by CDK4 knockdown in SMARCA4-proficient cells (Fig. 1i, j). Taken together, these findings indicate that SCCOHT cells are more vulnerable to inhibition of CDK4/6 kinase activities, compared to SMARCA4-proficient control cells. SCCOHT cells are highly sensitive to CDK4/6 inhibitors. Three highly selective CDK4/6 inhibitors, palbociclib (PD-0332991), ribociclib (LEE001), and abemaciclib (LY2835219), have been recently approved by the FDA for treating ER+/HER2− advanced breast cancers, which are often characterized by dysregulated CDK4/6 activation [15][16][17][18][19]. 
In keeping with our above findings that SCCOHT cells are more susceptible to inhibition of CDK4/6 kinase activities compared to SMARCA4-proficient controls, we found that SCCOHT cells but not SMARCA4-proficient controls, including IOSE80, OVCAR4, and OVCAR8 (an additional ovarian carcinoma line), are highly sensitive to palbociclib in both colony-formation (Fig. 2a) and cell viability (Fig. 2b) assays. Furthermore, SCCOHT cells have similar or lower half maximal inhibitory concentrations (IC50) compared to the control ER+ breast cancer cells MCF7 and CAMA-1 (Fig. 2a, b), the latter among the most palbociclib-sensitive lines in a panel of ~50 breast cancer cell lines 32. Consistent with the growth response, palbociclib suppressed RB phosphorylation in both SCCOHT and breast cancer cells but not in IOSE80 and OVCAR4 (Fig. 2c). 
Fig. 1 SMARCA4-deficient SCCOHT cells are vulnerable to inhibition of CDK4/6 kinase activities. a Schematic outline of the shRNA screens for kinases whose inhibition is selectively lethal to SMARCA4-deficient SCCOHT cells (BIN-67) but not to SMARCA4-proficient control cells (IOSE80, OVCAR4). Cells were infected with the lentiviral shRNA library (T0) and cultured for selection for 14 days (T1). The relative abundance of shRNAs in the cell populations was determined by next-generation sequencing. b Analysis of the shRNA screens using the MAGeCK statistical software package 31. CDK6 (magenta) and CDK4 (blue) are the first two ranked genes that were negatively selected in BIN-67 cells. All genes were ranked based on their RRA (robust rank aggregation, top) or raw p values (bottom) generated from the MAGeCK analysis. c, d Validation of CDK6 and CDK4 in SCCOHT cells (BIN-67, SCCOHT-1, COV434) and SMARCA4-proficient controls (IOSE80, OVCAR4). c Colony-formation assay of the indicated cell lines expressing pLKO control or shRNAs targeting CDK6 or CDK4 after 10-15 days of culturing. For each cell line, all dishes were fixed at the same time, stained, and photographed. d Western blot analysis of CDK6 and CDK4 and phosphorylated RB at serine 795 (pRB-S795) in the cells described in c. HSP90 was used as a loading control. e-j SCCOHT cells are more vulnerable to inhibition of CDK4/6 kinase activities, compared to SMARCA4-proficient control cells. e BIN-67 cells stably expressing pLX304-GFP, pLX304-CDK6, or pLX304-CDK6 D163N were infected with viruses containing pLKO control or a shRNA targeting the 3'UTR of CDK6, selected for integration, and cultured for 14 days. All dishes were fixed at the same time. f Western blot analysis for CDK6, pRB-S795, and HSP90 in the cells described above. g, i BIN-67 (g) and OVCAR-4 (i) cells expressing pLX317-GFP, pLX317-CDK4, or pLX317-CDK4 D158N were infected with viruses containing pLKO control or a shRNA vector targeting the 3'UTR of CDK4, selected for integration, and cultured for 14 days. For each cell line, all dishes were fixed at the same time. h, j Western blot analysis for CDK4, pRB-S795, and HSP90 in the cells described above. 
Fig. 2 (legend, in part): Common genes that significantly changed (p < 0.05) in both cell lines were analyzed using Gene Set Enrichment Analysis (GSEA). Top ten cellular processes by Gene Ontology (GO) term (d) and GSEA plots for the top three cellular processes (e) are shown. NES, normalized enrichment score; FDR, false discovery rate. f, g Transcriptome profiling in SCCOHT cells shows that the top ranked pathways affected upon CDK6 knockdown are related to cell cycle regulation. RNA-Seq was performed in BIN-67 and SCCOHT-1 cells expressing pLKO control or two independent shRNAs targeting CDK6. Common genes that significantly changed (p < 0.05) in both shRNAs and both cell lines were analyzed using GSEA. Top ten cellular processes by GO term (f) and GSEA plots for the top three cellular processes (g) are shown. NES, normalized enrichment score; FDR, false discovery rate. 
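The IC50 values discussed above are typically obtained by fitting a Hill-type dose-response model to viability data; a minimal sketch on synthetic, noiseless data (hypothetical values for illustration, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, ic50, slope):
    """Fraction of viable cells relative to untreated control (2-parameter Hill model)."""
    return 1.0 / (1.0 + (dose / ic50) ** slope)

# synthetic viability data generated from a known IC50 of 0.05 uM (illustration only)
doses = np.array([0.001, 0.005, 0.02, 0.05, 0.2, 1.0, 5.0])
viability = hill(doses, 0.05, 1.2)

# bounds keep both parameters positive during optimization
(ic50_fit, slope_fit), _ = curve_fit(hill, doses, viability,
                                     p0=[0.1, 1.0], bounds=(1e-6, 10.0))
```

In practice one would fit measured viability fractions per cell line and compare the fitted `ic50_fit` values, e.g. SCCOHT lines versus the MCF7/CAMA-1 controls.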
Similar results were also obtained using abemaciclib and ribociclib (Supplementary Fig. 2). Next, we performed transcriptome analysis using RNA-Seq in BIN-67 and SCCOHT-1 cells treated with palbociclib or expressing shRNAs targeting CDK6. Gene set enrichment analysis (GSEA) shows that the top ranked pathways affected in these cells upon palbociclib treatment or CDK6 knockdown are all cell cycle process-related and closely mirror each other (Fig. 2d-g). Together, these data demonstrate that CDK4/6 inhibitors are highly effective in inhibiting proliferation of SCCOHT cells predominantly through cell cycle suppression. Cyclin D1 deficiency in SCCOHT drives the drug sensitivity. To address the molecular mechanism underlying the susceptibility of SCCOHT cells to inhibition of CDK4/6 kinase activities, we examined the expression of key regulators of G1- to S-phase cell cycle progression in our cell line panel. In contrast to SMARCA4-proficient ovarian controls, SCCOHT and ER+ breast cancer cell lines retain RB and express lower levels of the CDK4/6 inhibitor p16 INK4a (Fig. 3a), a profile that has been associated with positive responses to palbociclib [15][16][17][18][19][33][34][35]. In addition, both SCCOHT and breast cancer cell lines express lower levels of CDK4 (Fig. 3a). Neither protein nor mRNA expression of CDK4/6 and other relevant cell cycle regulators such as cyclin E, p27, and p21 associate with SMARCA4 status (Fig. 3a, Supplementary Fig. 3a-f). Surprisingly, and in contrast with SMARCA4-proficient cells, all three SCCOHT cell lines express very low levels of cyclin D1 protein (Fig. 3a), which is associated with mRNA expression of CCND1, coding for cyclin D1 (Fig. 3b). We also examined other D cyclins in the cell line panel as they also complex with CDK4/6. SCCOHT-1 is the only cell line expressing cyclin D2 (Supplementary Fig. 3a, g). 
BIN-67 and SCCOHT-1, but not COV434, express lower levels of cyclin D3 compared to SMARCA4-proficient cells ( Supplementary Fig. 3a, h). Therefore, cyclin D1 deficiency is the shared characteristic among the three SCCOHT cell lines and associates with SMARCA4 status. We hypothesized that this cyclin D1 deficiency may limit the kinase activities of CDK4/6 in SCCOHT cells. To test this, we first performed immunoprecipitation to analyze the D cyclins associated with CDK4, whose expression is less variable among the cell lines compared to CDK6 (Fig. 3a). As expected, cyclin D1 was found to be in a complex with CDK4 in all SCCOHT cell lines and IOSE80 control (Supplementary Fig. 4a-d). Cyclin D3 was also found to complex with CDK4 in IOSE80 and COV434 cells but not in BIN-67 and SCCOHT-1 cells, likely due to its lower expression in the latter two cell lines ( Supplementary Fig. 3a). Cyclin D2 was detected in the CDK4 immunoprecipitate of SCCOHT-1 ( Supplementary Fig. 4c), suggesting that it may partially compensate for the low cyclin D1 expression in this cell line. However, our in vitro kinase assays, using the normalized amount of immunoprecipitated CDK4 complexes, show that phosphorylation of recombinant RB substrate was more efficient in IOSE80 cells compared to all SCCOHT cell lines, indicating lower total CDK4 kinase activity in SCCOHT cells (Fig. 3c). These data suggest that the common cyclin D1 deficiency of SCCOHT cells constrains their CDK4/6 activity, which could result in vulnerability to CDK4/6 inhibition. Cyclin D1 deficiency in SCCOHT is induced by SMARCA4 loss. In parallel to the above described biochemical analysis, we performed RNA-Seq transcriptome profiling in BIN-67 and SCCOHT-1 cells before and after SMARCA4 restoration to unbiasedly uncover the genes and pathways dysregulated owing to SMARCA4 loss. 
Consistent with the previously established role of SMARCA4 as a transcriptional activator 36, we found that SMARCA4 restoration predominantly upregulates gene expression, with 662 common genes upregulated. SMARCA4 has been linked to cyclin D1 regulation in other cancer types with opposing effects: SMARCA4 suppresses CCND1 in MCF7 ER+ breast cancer cells 37, whereas SMARCA4 knockdown leads to downregulation of cyclin D1 protein in triple-negative breast cancer and glioma cells 38,39. We found that SMARCA4 restoration in all three SCCOHT cell lines strongly elevated cyclin D1 mRNA and protein expression (Fig. 4d, e). In contrast, SMARCA4 restoration did not alter CDK6 expression in SCCOHT cells (Supplementary Fig. 7a). Conversely, SMARCA4 knockdown in IOSE80 and OVCAR4 cells resulted in a strong downregulation of cyclin D1 mRNA and protein (Fig. 4f, g), further supporting the role of SMARCA4 in activating cyclin D1 expression. As anticipated from our RNA-Seq analysis (Fig. 4c and Supplementary Data 2), this regulation of cyclin D1 by SMARCA4 in these cell line pairs was not observed for other relevant cell cycle genes including CCND3, CCNE1, CDKN1B, and CDKN1A (Supplementary Fig. 8), although CCNE1 and CDKN1A have been shown to be regulated by SMARCA4 in other cancer types, likely due to context dependency 40,41. The substantial elevation of CCND1 mRNA and the chromatin remodeling role of SMARCA4 suggest that SMARCA4 can directly regulate CCND1 transcription. Indeed, chromatin immunoprecipitations (ChIPs) in SMARCA4-restored SCCOHT-1 and BIN-67 cells showed a significant SMARCA4 occupancy at the CCND1 promoter but not at the control upstream regions of the CCND1 locus or the promoters of CCND3 and CCNE1 (Fig. 4h). Conversely, SMARCA4 occupancy at the CCND1 promoter, but not at the same control regions, was significantly reduced in IOSE80 and OVCAR4 cells upon SMARCA4 knockdown (Fig. 4i). 
Supporting our findings, publicly available SMARCA4 ChIP-Seq data sets of eight cell lines of different tissue origins also show consistent SMARCA4 occupancy at the CCND1 promoter region [42][43][44][45][46][47][48][49] (Supplementary Fig. 9). Together, these results are consistent with the model that SMARCA4 is an activator of CCND1 transcription in the ovarian context. To further support the role of SMARCA4 in cyclin D1 regulation and the drug response to CDK4/6 inhibition, we introduced low levels of exogenous SMARCA4 expression using a doxycycline-controlled expression system (Supplementary Fig. 10). We found that minimal levels of leaky SMARCA4 expression in BIN-67 and SCCOHT-1 cells (compared to the IOSE80 control) were sufficient to upregulate cyclin D1 expression (Supplementary Fig. 10a, b, e, f), demonstrating the robust regulation by SMARCA4. Even though full expression of SMARCA4 in SCCOHT cells strongly inhibits growth in long-term cultures (Supplementary Fig. 1), such low levels of SMARCA4 restoration in BIN-67 and SCCOHT-1 cells were tolerable (Supplementary Fig. 10c, g) and partially rescued these cells from the growth inhibition of palbociclib in a short-term growth assay (Supplementary Fig. 10d, h). These data support the model that SMARCA4 loss in SCCOHT cells results in profound downregulation of cyclin D1 and is synthetic lethal with CDK4/6 inhibition. 
Fig. 3 (legend, in part): c SCCOHT cells have lower total CDK4 kinase activities compared to SMARCA4-proficient control cells. Normalized amounts of immunoprecipitated CDK4 kinase complexes from IOSE80 and SCCOHT cell lines were subjected to in vitro kinase assays using recombinant RB. Upper, western blot analysis of immunoprecipitated CDK4 input; lower, kinase assay radiography. d-i Ectopic cyclin D1 expression increases RB phosphorylation and confers resistance to palbociclib in SCCOHT cells. d, e Western blot analysis of immunoprecipitations using an antibody against CDK4 or IgG in SCCOHT-1 (d) and BIN-67 (e) cells stably expressing GFP or CCND1. f, g Western blot analysis of SCCOHT-1 (f) and BIN-67 (g) cells with stable ectopic expression of GFP, CDK4, or CCND1. h, i Colony-formation assay of SCCOHT-1 (h) and BIN-67 (i) cells (described in f, g) treated with palbociclib. j, k Spontaneously palbociclib-resistant clones of SCCOHT-1 expressed elevated cyclin D1 and RB phosphorylation. j Cell viability assay of SCCOHT-1 parental cells and resistant clones (R1 and R2) treated with palbociclib for 9 days. Error bars: mean ± standard deviation (s.d.) of biological replicates (n = 4). k Western blot analysis for the indicated proteins in the cells described above. l, m Cyclin D1 knockdown in palbociclib-resistant SCCOHT-1 cells resensitizes them to palbociclib. l Cell viability assay of SCCOHT-1 R1 cells expressing pLKO control or two independent shRNAs targeting cyclin D1 treated with palbociclib for 9 days. Error bars: mean ± standard deviation (s.d.) of biological replicates (n = 4). m Western blot analysis for the indicated proteins in the cells described above. 
Palbociclib is effective against SCCOHT tumor growth in vivo. Our data suggest that SMARCA4-dependent cyclin D1 deficiency constrains CDK4/6 activity and leads to the selective sensitivity of SCCOHT cells to CDK4/6 inhibition. To establish this potential treatment strategy in vivo, we examined the responses of SCCOHT tumors to the FDA-approved palbociclib in mouse xenograft settings. As shown in Fig. 5a, b, palbociclib treatment elicited a potent growth inhibition of BIN-67 tumors. Similar findings were obtained in a second xenograft model of SCCOHT-1 cells during the treatment course of 42 days (Fig. 5c, d). Furthermore, the immunohistochemical (IHC) analysis of tumors isolated at the treatment end points showed that RB phosphorylation, Ki67 expression, and mitotic index were significantly suppressed in palbociclib-treated cohorts (Fig. 
5e-h), confirming SMARCA4_αSMARCA4 SMARCA4_lgG Ctrl_lgG Ctrl_αSMARCA4 shSMARCA4_αSMARCA4 SMARCA4_lgG Up2 Up3 CCND1p Fig. 5i, palbociclib treatment also significantly suppressed the growth of SCCOHT PDXs. Together, these results establish that palbociclib is effective in treating SCCOHT tumors in vivo. SCCOHT patient tumors are deficient in cyclin D1 expression. The reduced cyclin D1 expression in SCCOHT cells result in their sensitivity to CDK4/6 inhibitors (Fig. 3, Supplementary Fig. 5). Therefore, we analyzed the expression of cyclin D1 and other relevant cell cycle regulators in multiple patient tumor collections to evaluate the potential clinical implications of our findings. We first examined CCND1 mRNA expression obtained from a NanoString gene expression study in 17 SCCOHTs 8,50 and 6 ovarian high-grade serous carcinomas (HGSCs). Consistent with the above-described cell line data (Fig. 3b), SCCOHT tumor samples expressed significantly lower CCND1 levels compared to HGSCs (Fig. 6a, b). One of the three SCCOHT cell lines expressed CCND2 (Supplementary Fig. 3a, g). In line with this, SCCOHT tumor samples expressed variable levels of CCND2, in contrast to HGSCs, which all exhibited low CCND2 expression (Supplementary Fig. 10a, b). The NanoString data set also shows that these SCCOHT samples expressed higher CDK4 levels than HGSCs ( Supplementary Fig. 11a, b). While elevated CDK4 mRNA expression was observed in two of the three SCCOHT cell lines (~1.5-fold compared to controls, Supplementary Fig. 3b), CDK4 regulation by SMARCA4 requires further investigations as SMARCA4 perturbation in SCCOHT and ovarian control cell lines yielded inconsistent results ( Supplementary Fig. 7a-d). To validate the above-mentioned gene expression study, we performed quantitative PCR (qPCR) using available fresh frozen patient material from an independent series of HGSCs (n = 7) and SCCOHTs (n = 5). 
These SCCOHTs also expressed significantly lower levels of CCND1, but not CCND2 and CDK4, compared to HGSCs (Fig. 6c and Supplementary Fig. 11c). Given that CCND1 is an ER target gene 51, we evaluated the potential contribution of ER to the differential CCND1 expression observed. We found that four of the seven HGSCs expressed similar levels of ESR1 as SCCOHTs (Supplementary Fig. 10d) and that CCND1 but not CDK4 expression in these HGSCs was still significantly higher than in SCCOHTs (Supplementary Fig. 10e), suggesting that ESR1 is not a confounder of our analysis. Together, these tumor gene expression data support that SMARCA4 loss results in reduced CCND1 expression in SCCOHT. Using IHC coupled with unbiased automated quantification 52, we next analyzed the protein expression of these key cell cycle regulators in patient tumor samples of SCCOHT or HGSC embedded in tissue microarrays (TMAs). Figure 6d, e show that SCCOHT tumors (n = 32) expressed significantly lower levels of cyclin D1 compared to HGSC controls (n = 52) (~14-fold median difference, Supplementary Data 3). Similarly, SCCOHTs also expressed significantly lower levels of CDK4 compared to HGSCs, with a ~29-fold median difference (Supplementary Fig. 11f, g and Supplementary Data 3). SCCOHT tumors showed heterogeneous CDK6 expression, similar to HGSCs (Supplementary Fig. 11f, g), which is in line with our previous observation that SMARCA4 restoration did not alter CDK6 expression in SCCOHT cells (Supplementary Fig. 7a). Additional IHC analysis also showed that SCCOHTs were generally RB-proficient and p16-deficient, which is the reverse of what is observed in HGSCs (Fig. 6d, e). These patient tumor IHC data support our in vitro findings and confirm that cyclin D1 deficiency and lower CDK4 expression are a unique feature of SCCOHT; these samples also retained the known RB/p16 profile associated with positive responses to palbociclib 15-19,33-35.
Collectively, these results support the notion that CDK4/6 inhibitors could be effective in treating SCCOHT patients.

Discussion

Our study highlights the power of functional genetic screens in revealing unexpected cancer vulnerabilities to devise potential effective treatment strategies, especially in the context of tumors with quiescent genomes. Using this unbiased approach, we uncovered that cyclin D1 deficiency in SCCOHT results in exquisite susceptibility to CDK4/6 inhibitors. First, our kinome-focused RNAi screens identified that SCCOHT cells are selectively sensitive to CDK4/6 knockdown. Subsequent rescue experiments using wild-type and kinase-inactive mutants of CDK4/6 show that SCCOHT cells are more dependent on CDK4/6 kinase activity than are SMARCA4-proficient controls. In line with this, SCCOHT cells are highly sensitive to CDK4/6 inhibitors both in vitro and in vivo. Mechanistically, SMARCA4 loss causes cyclin D1 deficiency, which limits CDK4/6 kinase activity in SCCOHT cells and results in less buffering against CDK4/6 inhibition. This is supported by multiple lines of evidence: immunoprecipitations and in vitro kinase assays indicate that cyclin D1 deficiency of SCCOHT cells constrains their CDK4/6 activity; ectopic expression of cyclin D1 in SCCOHT cells led to elevated RB phosphorylation and resistance to CDK4/6 inhibition; spontaneous palbociclib-resistant clones of SCCOHT cells expressed elevated cyclin D1 and RB phosphorylation, while cyclin D1 knockdown re-sensitized these cells to palbociclib. Finally, we show that SCCOHT patient tumors, similar to cell lines, also expressed reduced CCND1 mRNA and cyclin D1 protein yet retained the RB-proficient/p16-deficient profile, indicating that SCCOHT patients may benefit from CDK4/6 inhibitor treatment.

Fig. 4 Cyclin D1 deficiency in SCCOHT cells is caused by SMARCA4 loss.
a-c RNA-Seq analysis in BIN-67 and SCCOHT-1 cells stably expressing pReceiver control or pReceiver-SMARCA4 identified CCND1 as the top-ranked cell cycle regulator upregulated upon SMARCA4 restoration (n = 3). a, b Venn diagrams showing the genes upregulated (a) or downregulated (b) upon SMARCA4 restoration (fold change >3, adjusted p < 0.05). c Heatmap of the top 50 genes upregulated upon SMARCA4 restoration in both BIN-67 and SCCOHT-1 cells. Red arrow points to CCND1. d, e SMARCA4 restoration in SCCOHT cells upregulated CCND1 mRNA (d) and cyclin D1 protein (e) levels. d Relative expression levels of CCND1 mRNA (normalized to GAPDH) in SCCOHT cells were measured by qRT-PCR. Error bars: mean ± s.d. of biological replicates (n = 3; two-tailed t test, **p < 0.01). e Western blot analysis for the indicated proteins in the cells described above. f, g SMARCA4 knockdown in SMARCA4-proficient cells suppressed CCND1 mRNA (f) and cyclin D1 protein (g) levels. f Relative expression levels of CCND1 mRNA (normalized to GAPDH) in IOSE80 and OVCAR4 cells were measured by qRT-PCR. Error bars: mean ± s.d. of biological replicates (n = 3; two-tailed t test, ****p < 0.0001, **p < 0.01). g Western blot analysis for the indicated proteins in the cells described above. h, i SMARCA4 occupancy in the CCND1 promoter region. Chromatin immunoprecipitation experiments were performed in SCCOHT cells (SCCOHT-1, BIN-67) expressing pReceiver or pReceiver-SMARCA4 (h) and in SMARCA4-proficient cells (IOSE80, OVCAR4) expressing pLKO or shRNA targeting SMARCA4 (i), using an antibody against SMARCA4 or IgG. qPCR was used to analyze SMARCA4 occupancy using the primer sets for the CCND1, CCND3, and CCNE1 loci as indicated. p promoter, TSS transcription start site, error bars: mean ± s.d.
of measurement replicates of a representative experiment (n = 3; two-tailed t test, *p < 0.05, **p < 0.01).

It is possible that cyclin D1 deficiency in SCCOHT may be compensated by other regulators of cell cycle progression. For example, we observed elevated cyclin D2 expression in one of the three SCCOHT cell lines (SCCOHT-1) as well as in some SCCOHT tumors. However, the in vitro kinase assays indicate that SCCOHT-1 cells, despite elevated cyclin D2 expression, still have lower total CDK4 kinase activity compared to SMARCA4-proficient cells. This suggests that cyclin D2 elevation cannot fully compensate for cyclin D1 deficiency. Although we have not identified alterations in other key cell cycle regulators, our data do not rule out other potential dysregulations of cell cycle progression in SCCOHT. In addition to cyclin D1, other factors associated with SMARCA4 loss may also contribute to the drug sensitivity of SCCOHT cells to CDK4/6 inhibition. Furthermore, activation of other oncogenic pathways caused by SMARCA4 loss can drive malignant transformation 2,3, which remains to be investigated in SCCOHT. Our unexpected findings contrast with the initial application of CDK4/6 inhibitors in treating ER+ breast cancers that are often characterized by dysregulated CDK4/6 activation 15-19,33-35, where the oncogenic addiction to cyclin D1 is being targeted. However, the role of cyclin D1 in modulating response to CDK4/6 inhibitors is more complex than previously appreciated. A recent in vitro study across >500 cell lines suggests that cancer cells with activating CCND1 genetic alterations (e.g., amplification, translocation) are sensitive to abemaciclib; paradoxically, CCND1 mRNA is positively correlated with the IC50 of abemaciclib, and high cyclin D1 protein expression is also weakly associated with abemaciclib resistance 53.
The latter finding of this study is consistent with our data and provides an explanation for how low cyclin D1 expression can drive sensitivity to CDK4/6 inhibitors in certain contexts. Independently corroborating our findings in SCCOHT, we demonstrate that SMARCA4 loss in NSCLC also results in cyclin D1 deficiency and is synthetic lethal with CDK4/6 inhibition 54. Our data demonstrate that cyclin D1 deficiency is a druggable vulnerability of SCCOHT, providing a rationale for a clinical trial using the available CDK4/6 inhibitors to treat this often-fatal cancer of young women. SCCOHT patients may also benefit from the antitumor immunity triggered by CDK4/6 inhibition, as recently shown by others 55,56. Finally, our study reveals a novel paradigm whereby a critically low level of an oncogene such as cyclin D1, caused by loss of a driver tumor suppressor, may also be a cancer vulnerability that can be targeted therapeutically.

Methods

Cell culture and viral transduction. The BIN-67 cell line was obtained from Dr. S. R. Goldring (Hospital for Special Surgery, New York). OVCAR4 was from Dr. E. Wang (University of Calgary, Calgary). BIN-67, SCCOHT-1 24, OVCAR4, and OVCAR8 were all cultured in RPMI with 7% fetal bovine serum (FBS), 1% penicillin/streptomycin, and 2 mM L-glutamine. The IOSE80 cell line was obtained from Dr. Nelly Auersperg and was cultured in Medium 199/MCDB 105 medium (Sigma) with 5% FBS. COV434 was purchased from Sigma; OVCAR8, MCF7, and CAMA-1 cells were from ATCC; MCF7 and CAMA-1 were chosen as representative breast cancer cells to directly compare drug sensitivities with the other cell lines used in our study. These cell lines were cultured in Dulbecco's modified Eagle's medium with 7% FBS, 1% penicillin/streptomycin, and 2 mM L-glutamine. All cell lines were free of mycoplasma and were maintained at 37°C and 5% CO2.
All cell lines have been validated by Short-Tandem Repeat profiling and regularly tested for mycoplasma using the MycoAlert Detection Kit (Lonza). Lentiviral transduction was performed using the protocol described at http://www.broadinstitute.org/rnai/public/resources/protocols. Infected cells (30 h post-infection) were selected in puromycin or blasticidin for 2-4 days (2 days for RNA-Seq) and harvested immediately for the experiments.

Pooled shRNA screen. Synthetic lethal screens using an shRNA library targeting human kinases and additional kinase-related genes, constructed from the TRC human genome-wide shRNA collection (TRC-Hs1.0), were performed as described 28-30. The screen data were processed using the MAGeCK statistical software package (version 0.5.4) 31.

Colony-formation assays. Single-cell suspensions of all cell lines were seeded into 6-well plates (2-8 × 10^4 cells per well, depending on proliferation rate and cell size) and cultured both in the absence and presence of drugs as indicated. At the end points of colony-formation assays, cells were fixed with 3.75% formaldehyde, stained with crystal violet (0.1% w/v), and photographed. All relevant assays were performed independently at least three times.

Cell viability assays. Cultured cells were seeded into 96-well plates (1000-4000 cells per well). Twenty-four hours after seeding, serial dilutions of palbociclib were added to cells to final drug concentrations ranging from 0.0026 to 4 µM. Cells were then incubated for 5-10 days, and cell viability was measured using the CellTiter-Blue viability assay (Promega). Relative survival in the presence of palbociclib was normalized to the untreated controls after background subtraction.

Protein lysate preparation and immunoblots. Cells were first seeded in normal medium without inhibitors. After 24 h, the medium was replaced with fresh medium containing the inhibitors as indicated in the text.
After the drug stimulation, cells were washed with cold phosphate-buffered saline (PBS), lysed with protein sample buffer, and processed with Novex® NuPAGE® Gel Electrophoresis Systems (Invitrogen).

Immunoprecipitation and kinase assay. Cells were resuspended in ice-cold lysis buffer (50 mM Tris pH 7.5, 150 mM NaCl, 1% NP40, 1 mM dithiothreitol (DTT), and protease/phosphatase inhibitors) and broken by passing through 20-gauge needles 20 times. After a 30 min incubation on ice, lysates were clarified by centrifugation at 14,000 × g for 15 min at 4°C. The supernatant was collected as cell extract, and protein concentrations were determined using the Bradford Protein Assay (Bio-Rad). Three micrograms of HA (F-7, Santa Cruz) or CDK4 (DCS-35, Santa Cruz) antibodies were added to 2 mg of pre-cleared cell lysate in 500 μl of lysis buffer and incubated overnight at 4°C with continuous rocking. Protein immunocomplexes were then incubated with 40 μl protein G sepharose beads (Protein G Sepharose 4 Fast Flow, GE Healthcare) at 4°C for 2 h. Precipitated proteins were washed three times with lysis buffer, eluted with sodium dodecyl sulfate (SDS)-loading buffer at 95°C for 10 min, and analyzed by western blot using Novex® NuPAGE® Gel Electrophoresis Systems (Invitrogen).

Cell line RNA-Seq. Total RNA from cell lines was extracted with the RNeasy Mini Kit (Qiagen, Hilden, Germany), quality-controlled, and subjected to RNA-Seq at Genome Quebec and the Institute for Research in Immunology and Cancer at Université de Montréal. Sequencing reads were mapped to the reference human genome sequence (hg19) downloaded from Illumina iGenomes using STAR 58 (version 2.4.2). The number of fragments was counted with HTSeq 59 (version 0.6.1p1) based on known genes from the RefSeq database on the UCSC Genome Browser. Differential expression of genes was analyzed with the Bioconductor package DESeq2 60 (version 1.19.38). Differentially expressed genes were visualized in a heatmap using the R package gplots (version 3.0.1).
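As an illustrative aside, the thresholding applied downstream of the differential expression analysis (fold change >3, adjusted p < 0.05, intersected across the two cell lines for the Venn diagrams in Fig. 4a, b) can be sketched as follows. This Python sketch operates on DESeq2-style result rows (gene, log2 fold change, adjusted p value); it is not the study's actual code, which used DESeq2 in R, and the gene values shown are invented.

```python
# Sketch: select genes commonly up- or downregulated upon SMARCA4
# restoration in two cell lines, mirroring the fold change >3 and
# adjusted p < 0.05 cutoffs. Input rows mimic DESeq2 result tables;
# the example values are illustrative only.

import math

FC_CUTOFF = math.log2(3.0)   # "fold change > 3" on the log2 scale
PADJ_CUTOFF = 0.05

def regulated(results, direction):
    """Return the genes passing both cutoffs; direction is 'up' or 'down'."""
    sign = 1 if direction == "up" else -1
    return {
        gene for gene, l2fc, padj in results
        if padj < PADJ_CUTOFF and sign * l2fc > FC_CUTOFF
    }

# Toy DESeq2-style results: (gene, log2FoldChange, padj)
bin67 = [("CCND1", 3.2, 1e-6), ("GENE_A", 1.9, 0.01), ("GENE_B", -2.1, 0.002)]
sccoht1 = [("CCND1", 2.8, 1e-4), ("GENE_B", -1.8, 0.03), ("GENE_C", 2.0, 0.2)]

# Intersection across cell lines, as in the Venn analysis
common_up = regulated(bin67, "up") & regulated(sccoht1, "up")
common_down = regulated(bin67, "down") & regulated(sccoht1, "down")
print(common_up)    # only CCND1 passes in both lines
print(common_down)  # only GENE_B passes in both lines
```

Note that a "fold change > 3" cutoff corresponds to |log2FoldChange| > log2(3) ≈ 1.585 on the scale DESeq2 reports.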
Gene ontology biological process analysis was performed using GSEA 61 provided by the Broad Institute. Among the enriched gene signatures, the top ten signatures were presented as bar plots according to p value, with GSEA plots for the top three signatures.

RNA isolation and quantitative reverse transcriptase-PCR (qRT-PCR). For cell line samples, cells were first seeded and then harvested for RNA isolation using Trizol (Invitrogen) the next day. Synthesis of cDNAs and qRT-PCR assays were carried out to measure the mRNA levels of genes as described 29. Relative mRNA levels of each gene shown were normalized to the expression of the housekeeping gene GAPDH. The sequences of the primers for assays using SYBR® Green master mix (Roche) are as follows: GAPDH_Forward

For both BIN-67 and SCCOHT-1 models, mice were housed in groups of 3-5, with each group consisting of both vehicle control and treatment animals matched for tumor size on Day 1 of treatment. All gavage treatments were carried out using sterile straight 20- or 22-gauge, 38.1 mm stainless steel feeding tubes (Harvard Apparatus, QC). Tumor progression was monitored, and measurements using digital calipers (VWR International) were recorded twice weekly. The persons who performed all the tumor measurements and the IHC analysis for the end point tumor samples were blinded to the treatment information. The SCCOHT PDX was established and viably preserved at Memorial Sloan Kettering Cancer Center (MSKCC). The PDX establishment was performed in accordance with the Institutional Animal Care and Use Committee at MSKCC. The patient tumor sample was received from the University of North Carolina with Institutional Review Board approval and informed consent from the patient.
The PDX work described in this manuscript was undertaken at NYU Langone Health; the use of animals was overseen by the Division of Comparative Medicine under the direction and approval of the Institutional Animal Care and Use Committee at that institution and was conducted in accordance with all pertinent Federal regulations and policies. The experimental protocol used in these studies was approved by the Institutional Animal Care and Use Committee at NYU Langone Health. We used 5-6-week-old female mice: strain NOD.Cg-Prkdc scid IL2rg tm1Wjl /SzJ (Jackson Laboratory; ref. #005557). The viably frozen PDX was thawed and initially implanted in six mice for each treatment arm in the animal facility at NYU Langone Health as follows. The frozen PDX was thawed in PBS and washed twice to remove residual dimethyl sulfoxide. Tumor cells (300 µL) were then injected intraperitoneally using a 16-gauge needle. Once the tumors developed (approximately 3 months after injection), they were further propagated in multiple mice by subcutaneous injection. Single-cell tumor suspension was mixed with matrigel in a 1:1 ratio (50:50 µL) and injected in the flanks. Once tumors reached approximately 100 mm³ in volume, we randomized animals into two groups: vehicle-treated and palbociclib-treated. As the vehicle, we used 50 mM sodium L-lactate (pH 4). Palbociclib was solubilized in vehicle to a final concentration of 15 mg mL−1 and stored at −80 °C. Treatments were performed daily by oral gavage with 200 µL (3 mg) of palbociclib solution. Mice were treated with an initial dose of 150 mg kg−1 of palbociclib. At the first signs of any animal discomfort, we reduced the dose to 100 mg kg−1. Mice that showed weight loss were not included in the analysis. Tumor volumes (TVs) and weights were measured twice a week. To calculate TVs, we used calipers to measure the height (H), length (L), and width (W) and applied the formula TV = (H × L × W)/2.

Patient tumor samples.
Tumor samples of 54 different SCCOHT patients were used in this study (Fig. 6). The SMARCA4 mutation status of 51 cases was confirmed by DNA analysis. Three cases that did not have DNA mutation information were confirmed for SMARCA4 protein loss by IHC. Studies on SCCOHT patient tumors (n = 33) were approved by the Institutional Review Board (IRB) at McGill University, McGill IRB # A08-M61-09B. Of these 33 SCCOHTs, 5 cases had fresh frozen samples for qPCR analysis (Fig. 6c) and 32 cases were used in making the TMA (Fig. 6d, e). Studies on 59 pathologist-confirmed (B.A.C.) ovarian HGSC samples (n = 7, Montreal, Fig. 6c and n = 52, Toronto, Fig. 6d, e) were approved by the ethics boards at the University Hospitals Network and the Jewish General Hospital, respectively. Informed consent was obtained from all participants in accordance with the relevant IRB approvals. Hematoxylin and eosin (H&E)-stained sections of the 32 SCCOHTs (confirmed by DNA mutation analysis and/or SMARCA4 IHC) and 52 HGSC cases were reviewed, and areas containing tumor only were demarcated and cored to construct TMAs using duplicate 0.6-mm cores from the demarcated areas. The NanoString gene expression studies using 17 SCCOHT 8,50 and 6 HGSC patient samples were approved by the IRB at MSKCC (Fig. 6a, b). Specialty gynecologic pathologists reviewed the cases to confirm diagnosis using previously described guidelines 8. RNA was extracted from formalin-fixed paraffin-embedded (FFPE) tumor sections with at least 50% tumor cell content. For the RNA extraction, we used Ambion's RecoverAll™ Total Nucleic Acid Isolation Kit for FFPE (Cat# AM1975) and performed the extraction according to the manufacturer's suggestions. Subsequently, the RNA quality was confirmed using an Agilent Bioanalyzer, and the RNA amounts applied to the platform were adjusted according to the Quality Control score.

Gene expression analysis.
For the NanoString gene expression (nCounter Pan-Cancer Pathways Panel) study, the raw reporter code count data were normalized using the NanoStringNorm (version 1.1.21) R package. A VSN affine transformation was applied to stabilize variance and normalize the data for visualization. Heatmap visualization was performed with ComplexHeatmap 62 (version 1.10.2).

Immunohistochemistry. For mouse xenografts, 4-micron-thick sections from FFPE tissue were cut, deparaffinized, and stained using an IntelliPath automated immunostainer (Biocare Medical). The protocol included an antigen retrieval treatment in Diva Decloaker RTU (Biocare Medical) for 10 min followed by incubation with the primary antibody (phospho-RB, Cell Signaling, 9308, 1/200 dilution; Ki-67, Abcam, 16667, 1/100 dilution) for 1 h at room temperature. Incubation was followed by detection using goat anti-rabbit horseradish peroxidase (Dako) and 3,3′-diaminobenzidine (DAB; Dako). The slides were digitized using an Aperio scanner. The mitotic index was measured by counting the mitotically active cells in 10 high-power fields (×400) of the H&E-stained tumor slides. Patient TMAs were stained with the following primary IHC antibodies: cyclin D1 (rabbit polyclonal, Abcam, 1:100), CDK4 (rabbit polyclonal, Abcam, 1:100), CDK6 (rabbit polyclonal, Abcam, 1:500), RB (mouse monoclonal, Cell Signaling 9309, 1:300), and p16 (mouse monoclonal, MTM Laboratories, prediluted). Sections were treated with xylene (EMD Chemicals, Gibbstown, NJ) for 30 min, rinsed in xylene for 5 min each, and then rehydrated in a series of descending concentrations of ethanol (Fisher Scientific, Fair Lawn, NJ). Slides were then treated with 0.3% H2O2/methanol (AppliChem, Darmstadt, Germany/Fisher Scientific, Fair Lawn, NJ) for 30 min to block endogenous peroxide. Heat-induced epitope retrieval was achieved by treating slides in a pressure cooker with 0.01 M citrate buffer (Vector, Burlingame, CA) (pH 7.6).
Slides were then rinsed in 0.1 M Tris buffer (Dako, Carpinteria, CA) with Tween 20 (Dako, Carpinteria, CA), then blocked with 2% FBS (Sigma-Aldrich, St. Louis, MO) for 5 min. Slides were subsequently incubated with the specified primary antibodies for 1 h at room temperature or overnight at 4°C. Slides were then rinsed in Tris (Dako, Carpinteria, CA) and incubated with biotinylated anti-mouse IgG or anti-goat IgG (Vector Laboratories; U0625) for 30 min at room temperature. After rinsing in Tris, slides were incubated with the avidin-biotin complex (Vector Laboratories; PK-6100) for 30 min at room temperature, followed by incubation with DAB (DAKO Cytomation; K3468) for 10 min at room temperature. Slides were then rinsed, dehydrated through a series of ascending concentrations of ethanol and xylene, and coverslipped.

Automated quantification. Both SCCOHT and HGSC TMAs were scanned using an Aperio ScanScope scanner (Aperio, Vista, CA) and viewed through the Aperio ImageScope software program. An individual blinded to the experimental design captured JPEG images from each core (circular area of 315 cm² corresponding to the entire core) at ×10 magnification in the Aperio ImageScope viewing program. The same blinded individual performed quantification of immunostaining on each JPEG using an automated analysis program built with MATLAB's Image Processing Toolbox, based on previously described methodology 52. Cores with low tumor cellularity and artifacts were not included in the analysis. The algorithm used color segmentation with RGB color differentiation, K-means clustering, and background-foreground separation with Otsu's thresholding. To arrive at a score for each core (represented in pixel units), the cases were unblinded and the number of extracted pixels was multiplied by their average intensity. The final score for a given case and marker was calculated by averaging the scores of the two cores for that case.
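The scoring arithmetic described above reduces to a small calculation: each core's score is the number of positively stained pixels multiplied by their average intensity, and the case score is the mean of its two core scores. A minimal Python sketch follows; the function and variable names are ours and the example numbers are invented, not taken from the study's MATLAB pipeline.

```python
# Sketch of the per-core IHC scoring and per-case averaging described above:
# core score = positive pixel count x mean intensity (in pixel units),
# case score = mean of the scores of the case's cores. Values illustrative.

def core_score(positive_pixels, mean_intensity):
    """Score for one TMA core, in pixel units."""
    return positive_pixels * mean_intensity

def case_score(cores):
    """Average the scores of the (typically two) cores for one case."""
    scores = [core_score(px, intensity) for px, intensity in cores]
    return sum(scores) / len(scores)

# Hypothetical case with two cores: (positive pixel count, mean intensity)
example_case = [(12000, 180.0), (9000, 150.0)]
print(case_score(example_case))  # (12000*180 + 9000*150) / 2 = 1755000.0
```

Cores excluded for low cellularity or artifacts would simply be dropped from the list before averaging.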
A case of Kikuchi-Fujimoto disease with autoimmune thyroiditis

Kikuchi-Fujimoto disease (KFD) is a benign self-limiting disease characterized by fever and lymphadenitis. The etiology and pathogenesis of KFD are unclear; however, two hypotheses have been suggested: a viral infection hypothesis and an autoimmune hypothesis. Several KFD patients with various types of autoimmune diseases have been reported, and these reports support the hypothesis of an autoimmune pathogenesis of KFD. Here, we report the case of a 17-year-old female patient diagnosed with KFD and autoimmune thyroiditis. This case serves as additional evidence that KFD has an autoimmune origin.

Introduction

Kikuchi-Fujimoto disease (KFD) is a benign self-limiting disease characterized by fever and lymphadenitis, especially of the neck 1,2). The exact cause and pathogenesis of KFD have not yet been defined. Previously, it was thought that some viral infections such as Epstein-Barr virus (EBV), human herpes virus (HHV), parvovirus B19, and human T-lymphotropic virus-1 (HTLV-1) might cause lymphadenitis in KFD 2). On the other hand, reports of KFD patients with autoimmune diseases suggest that the pathogenesis of KFD is autoimmune 1-3). Several KFD patients with systemic lupus erythematosus (SLE) and hemophagocytic lymphohistiocytosis (HLH) have been reported in Korea, but a KFD patient with autoimmune thyroiditis has not yet been reported 4,5). Here, we report the case of a 17-year-old female patient diagnosed with KFD and autoimmune thyroiditis. Our findings could serve as additional evidence of the autoimmune origin of KFD.

Case report

A 17-year-old girl was admitted to a university hospital with lymphadenopathy on the right side of the neck lasting for a week, and she was treated with antibiotics. However, she complained of fever, sore throat, and otalgia beginning on the fourth day of hospitalization, and she was transferred to Seoul St.
Mary's Hospital at her request on the seventh day of hospitalization. Three years prior, she had experienced fever with lymphadenopathy on the left side of the neck. She was admitted to the same hospital, treated with antibiotics, and recovered. At that time, she was investigated for a nonfunctioning goiter. Thyroid function tests were normal, and the levels of antithyroid antibodies were close to the upper limits of normal. The thyroid scan showed diffuse distribution of the radioisotope. Her mother and maternal grandmother have hypothyroidism. She was conscious at the time of transfer to our hospital. Her blood pressure was 100/70 mmHg, heart rate was 78 beats/min, respiratory rate was 20 breaths/min, and body temperature was 38.4℃. She had multiple tender lymph nodes on the right lateral side of the neck and in the right supraclavicular area, and the largest lymph node was 3×2 cm in size. She also had a tender goiter. Her laboratory tests showed anemia (hemoglobin 7.6 g/dL), leucopenia (white blood cell count 2,700/μL), and elevated levels of erythrocyte sedimentation rate (ESR, 70 mm/hr), C-reactive protein (0.93 mg/dL), and lactate dehydrogenase (LDH, 688 U/L). Laboratory tests for anemia revealed iron deficiency. Tests for EBV infection, the tuberculin skin test, and blood cultures were negative. She was negative for rheumatoid factor, and antinuclear antibodies were detected (titer = 1:100). Thyroid function tests were normal, but antithyroid peroxidase antibodies and antithyroglobulin antibodies were elevated (Table 1). Computed tomography of the neck revealed multiple enlarged lymph nodes at levels II, III, IV, and V on both sides of the neck and in the right supraclavicular area of the neck (Fig. 1). On the second day of hospitalization, she complained of pruritic skin rashes on her lower extremities.
Despite antibiotic and analgesic treatment, the fever persisted, the skin rashes spread to her trunk and upper extremities, her cervical lymph nodes continued to enlarge, and the lymphadenopathy spread to the occipital area. On the sixth day of hospitalization, an excisional biopsy of an enlarged cervical lymph node was performed, and the histopathologic findings were consistent with KFD (Fig. 2). Her fever persisted after the excisional biopsy, so we started the administration of oral prednisolone (0.5 mg/kg/day) on the seventh day of hospitalization. On the ninth day of hospitalization, the fever disappeared and the skin rashes began to subside. Ultrasonography of her thyroid showed heterogeneous echogenicity of the thyroid gland and a solitary nodule. We diagnosed her with autoimmune thyroiditis on the basis of her family history and the laboratory results. Thus, she was ultimately diagnosed with KFD and autoimmune thyroiditis. Her hemoglobin was 9.2 g/dL after one month of iron supplementation.

Discussion

KFD was first described in 1972 separately by Kikuchi 6) and Fujimoto et al. 7). It has a worldwide distribution with a higher prevalence in Southeast Asian women 2). KFD usually occurs in young women between the ages of 20 and 40 years 1). The female-to-male ratio is 4:1 1), but some reports from East Asia show relatively lower female-to-male ratios, from 1:1 to 2.28:1 8-10). KFD commonly manifests as cervical lymphadenitis with fever, but mediastinal, mesenteric, inguinal, or axillary lymph nodes may be involved 2,10). Other uncommon symptoms include upper respiratory symptoms, sore throat, weight loss, fatigue, headache, nausea, vomiting, arthralgia, skin rash, and hepatosplenomegaly 1,2,9,10). Leucopenia, anemia, elevated ESR, elevated serum LDH, or elevated liver enzyme levels have been reported in many cases, although there is no specific laboratory finding indicative of KFD 1,2,8-10).
KFD can be diagnosed definitively through histological analysis of lymph node biopsy tissue 1). Fine-needle aspiration cytology (FNAC) can be used to diagnose KFD 11). However, the diagnostic accuracy of FNAC for KFD is about 56%; thus, excisional biopsy of the involved lymph node is necessary for the accurate diagnosis of KFD 1). The involved lymph nodes characteristically demonstrate apoptotic necrosis with abundant karyorrhectic debris in the cortical and paracortical areas 1,12). The necrotic lesions are surrounded by many different types of histiocytes including plasmacytoid dendritic cells, and the number of transformed lymphocytes with immunoblast morphology is increased in some cases 1,12). However, neutrophils and eosinophils are characteristically absent, and plasma cells are few or absent 1,12). Lymphocytes are predominantly T cells rather than B cells, and the T cells are predominantly CD8+ T cells rather than CD4+ T cells 1,12). The cause and pathogenesis of KFD remain unclear, and two hypotheses have been suggested: a viral infection hypothesis and an autoimmune hypothesis 1). Prodromal upper respiratory symptoms, the lack of response to antibiotic therapy, atypical lymphocytes in some cases, T cell zone expansion with immunoblast proliferation, and elevated interferon-α support the hypothesis of a viral etiology 1). Many viruses including EBV, cytomegalovirus, HHV, parvovirus B19, and HTLV-1 were suggested as the causative organism, but a causal relationship has not been confirmed 2). In contrast to the pathologic findings of KFD, viral lymphadenitis has less prominent histiocyte infiltration, more neutrophils and plasma cell proliferation, and CD4+ T cell predominance 1). Based on the clinical similarity (fever, lymphadenitis, skin rash, arthralgia) and histologic similarity (paracortical necrosis with karyorrhectic debris and inflammatory cell responses) of KFD and SLE, an autoimmune etiology was suggested 3,12).
In addition, continuing reports of autoimmune diseases in KFD patients and the therapeutic response to steroids and intravenous immunoglobulins in KFD patients with autoimmune diseases also support the autoimmune origin of KFD 3). KFD cases with SLE, HLH, Sjögren syndrome, adult onset Still's disease, and polymyositis have been reported from several areas of the world, and the autoimmune diseases developed before, after, or simultaneously with KFD [2][3][4][5]10,13). However, only a few KFD cases with autoimmune thyroiditis have been reported 3,14). Until now, a case of KFD with autoimmune thyroiditis has not been reported in Korea. We diagnosed this 17-year-old girl with KFD on the basis of the histopathologic findings of the excised lymph node, and autoimmune thyroiditis was diagnosed on the basis of the presence of antithyroid antibodies. This case is an additional example of the relationship between KFD and autoimmune pathogenesis. KFD is a self-limiting condition and usually resolves in one to six months, with recurrence rates of 3 to 4% 1,9). Usually, only symptomatic treatment with analgesics and antipyretics is needed 1). Steroids can be used to treat patients with complicated or systemic KFD, recurrent KFD, prolonged fever, and symptoms lasting more than two weeks despite treatment with nonsteroidal anti-inflammatory drugs 14). In our case, lymphadenopathy spread to the occipital region and the fever remained after the excisional biopsy, so we decided to treat her with oral steroids, and there was a dramatic clinical effect. To date, several KFD cases associated with various types of autoimmune diseases have been reported 4,5,13,15). These reports support the hypothesis that KFD is caused by an autoimmune mechanism. Clinicians should consider KFD when lymphadenopathy arises in patients with autoimmune disease, and should also keep in mind the possibility of autoimmune disease in KFD patients.
However, it is not clear whether KFD and autoimmune diseases have a direct association or occur concomitantly by chance 2) . Additional evaluations of the relationship between KFD and autoimmune diseases are necessary.
Improvement of the Methodology for the Property Registry Formation as a Tool Preventing the Development of Hidden Monopolies

The paper considers the problem of creating hidden monopolies as a result of the concentration of ownership. The object of research is the transformation of ownership and the mechanisms for evaluating and monitoring these changes in order to avoid the formation of hidden monopolies. The transformation of ownership is considered as a change in any component of the right of ownership (possession, use and disposal), since a change in any of these conditions leads to a change in the economic effect of ownership. The features of the functioning of the institution of trust management are considered: it complicates the information base on ultimate beneficial owners, since existing property registers focus on the right of possession and on changes of owner. This approach provokes an additional concentration of ownership, is not monitored by state authorities and entails negative socio-economic consequences. The Unified Property Register proposed in the work provides for the presentation of information on property relations in their dynamics, taking into account not only possession but also use and disposal, and covers information about ultimate beneficiaries that was previously not always available. A matrix approach is proposed for the formation of the information base of the registry itself, which combines the dynamics of property relations of both individuals and legal entities into a single database. It is assumed that this Register should be formed by the Ministry of Justice of Ukraine. The chronology of property transformation can be recorded from the moment of registration of legal entities and the assignment of the code of the Unified State Register of Enterprises and Organizations of Ukraine, individuals-entrepreneurs and public organizations, and the receipt by individuals of an identification code.
Information in the registry will be available to public authorities that are involved in the planning, disposal, monitoring and evaluation of changes in all forms of ownership from the point of view of public administration. Positive results are expected from the use of the Unified Property Register.

Introduction

One of the key economic categories at all levels of economic relations is property. A feature of this category is that the range of its scientific study and practical application is very wide: from the philosophical sciences to economics and jurisprudence. Property is valuable to its owner for its capabilities and prospects of obtaining economic benefits in the current period or in the future. However, this category is quite dynamic, since the final economic result depends on the transformation of the property itself, namely changes of owner and of the terms of use and disposal. The transformation of ownership is a continuous and cyclical phenomenon [1]. It contributes to the development of new forms of production and innovation, and hence to economic development [2]. However, the experience of many countries of the world shows that the process of changing the conditions of ownership requires strict regulation and control by state authorities. The revitalization of the property transformation process provokes an increase in the concentration of ownership, limits the competitive rights of all market participants and complicates the entry of new participants [3]. Monopolies are becoming more common in the world, despite developed antitrust laws. Some countries, such as China, are seeking a balance between state antitrust regulation and state monopolies as key market participants [4]. However, the biggest problem that many countries of the world have faced in recent years is the formation and use of the necessary databases, which are becoming an increasingly important factor in the economy [5].
Due to the fact that concentration assessment is carried out only on indicators of ownership, often without data on the use and disposal of property, data on the ultimate beneficiary of the property are not taken into account. This creates an environment conducive to the creation of hidden monopolies. Digitalization processes require new forms of antitrust regulation, which, according to forecasts [6], will change from centralized closed platforms to decentralized open innovation networks, including those based on blockchain technologies. In this regard, the study of control procedures for the processes of property transformation, and their improvement, is relevant and timely. Thus, the object of research is the transformation of ownership and the mechanisms for assessing and monitoring these changes in order to avoid the formation of hidden monopolies. The aim of research is to improve the methodology for the formation of the Unified Property Register, which forms a matrix of data on changes in the possession, use and disposal of property and is a tool to prevent the development of hidden monopolies.

Methods of research

The main regulatory authority that exercises state control over concentration and the protection of economic competition [7] is the Antimonopoly Committee of Ukraine. The Antimonopoly Committee of Ukraine or its administrative board grants permission for a concentration if it does not lead to monopolization or a significant restriction of competition in the entire market or in a significant part of it. However, it uses a methodology for assessing the possible concentration that is based on the analysis of assets, the volume of sales of goods, works and services, as well as the calculation of total shares in commodity markets [8].
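The paper describes the Committee's methodology only in general terms (assets, sales volumes, total market shares). As a hedged illustration of what a share-based concentration assessment looks like, the sketch below computes the Herfindahl-Hirschman Index, a standard concentration measure; it is not the Committee's actual calculation, and the threshold shown is the conventional one, not a Ukrainian legal norm:

```python
def hhi(market_shares):
    """Herfindahl-Hirschman Index: the sum of squared percentage shares.

    Ranges from near 0 (an atomistic market) to 10000 (a pure monopoly);
    values above 2500 are conventionally treated as highly concentrated.
    """
    if abs(sum(market_shares) - 100) > 1e-6:
        raise ValueError("shares must sum to 100 percent")
    return sum(s * s for s in market_shares)

# Four firms with 40/30/20/10 percent shares of a commodity market:
# 1600 + 900 + 400 + 100 = 3000, i.e. a highly concentrated market.
print(hhi([40, 30, 20, 10]))
```

The key limitation the paper points to is visible here: such an index sees only the nominal shareholders, so two "independent" firms controlled by the same ultimate beneficiary are counted separately, understating the true concentration.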
Using the method of analysis and synthesis, as well as the dialectical method, in the study of alternative ways to avoid the prohibition of concentration and the role of trust management in it, a number of shortcomings of the current legislation are identified. They relate to the additional possibility of obtaining economic benefits from the transformation of property in terms of the concentration of assets.

Research results and discussion

The concentration of capital on a permitted scale makes it possible to scale up economic activity and develop it. However, it can lead to negative consequences, especially when it comes to the unauthorized concentration of hidden monopolies, which are most often formed as a result of abuse of trust in property and of difficulties in identifying ultimate beneficiaries. The ultimate beneficial owner (controller) is an individual who, regardless of formal ownership, has the ability to exercise a decisive influence on the management or economic activity of a legal entity, directly or through other persons. This is done, in particular, by exercising the right to own or use all assets or a significant share of them, or the right to decisively influence the formation of the composition [9]. Despite the requirement of the Antimonopoly Committee to indicate, in statements on the concentration of economic activity, all the ultimate beneficial owners, the annual reports of the State Financial Monitoring Service show that this requirement is very often violated [10]. Trust property has a number of features:
- anonymity, when the ultimate owner of the property remains unknown;
- the possibility of joint ownership of property (for example, real estate) by several owners;
- the ability of the same person to be both founder and beneficiary, thereby receiving all the benefits from the property;
- the distinction between owners, beneficiaries and managers, which makes trust relations a convenient mechanism for tax evasion.
In some countries, beneficiaries (property users) are not required to report the income of the trust on which the beneficiaries live;
- the use of trust for various economic and legal purposes: the inaccessibility of property to creditors, the ability to declare the absence or insufficiency of available own assets and to claim, for example, a lower tax rate or assistance from the state;
- the possibility of registering/re-registering enterprises in the names of dummies (students, pensioners, socially vulnerable segments of the population, persons registered in the territory not controlled by Ukraine) for a monetary reward [11];
- the complication of the ownership structure.

This confirms the need to consider property and its transformation from the perspective of three components: possession, use and disposal. Although, at first glance, these belong to the categories of law, abandoning them loses the understanding of the economic result of further transformations. The right of ownership presupposes the rights of possession, use and disposal. Hence, if the owner possesses, uses and disposes of a certain object, then this process can be characterized by the corresponding three components of ownership, which can change and acquire economic significance:
1) possession reflects the market value of the property;
2) use reflects the potential profits that the owner can obtain as a result of using the object;
3) disposal reflects the potential income that the owner may receive as a result of any operations with the object related to its disposal (sale of a part, leasing, etc.).

A change in any of these indicators leads to a change in the economic result from the property itself. So, it is possible to possess, use and dispose of property; but it is also possible, for example, to rent it out and retain only possession and disposal, or to transfer it to trust management and retain only possession without use and disposal.
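The three-component view above can be sketched as a data structure. This is purely an illustration of the proposed matrix approach, not the Register's actual design; all names, fields and the example records below are assumptions made for the sketch:

```python
from dataclasses import dataclass
from datetime import date

# Each register entry records which of the three components of ownership
# (possession, use, disposal) a holder has over an asset, and since when.
# This lets the register capture transformations such as leasing (use is
# transferred) or trust management (use and disposal are transferred)
# that a possession-only register would miss.
@dataclass
class RegisterEntry:
    asset_id: str
    holder_id: str        # individual or legal-entity identifier
    possession: bool
    use: bool
    disposal: bool
    effective_from: date

def rights_matrix(entries, asset_id):
    """Return {holder_id: (possession, use, disposal)} for one asset."""
    return {
        e.holder_id: (e.possession, e.use, e.disposal)
        for e in entries
        if e.asset_id == asset_id
    }

# Hypothetical example: the owner leases an asset out, retaining
# possession and disposal, while the lessee acquires the right of use.
entries = [
    RegisterEntry("plant-1", "owner-A", True, False, True, date(2020, 1, 1)),
    RegisterEntry("plant-1", "lessee-B", False, True, False, date(2020, 1, 1)),
]
```

A register built this way records who actually benefits from use and disposal over time, which is exactly the information the paper argues is missing from possession-only registries.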
The components of property transformation are constantly changing, because the owner will always look for the optimal solution for optimizing property relations. That is why we believe that a change in any component of the right of ownership (possession, use or disposal) leads to the transformation of the property itself, which we consider from the perspective of not only ownership [12]. This approach makes it possible to form entire matrices of property transformation, which allow tracking not only changes of owner, as in existing registries, but also other operations related to use and disposal. An important aspect of the activities of state bodies administering state corporate rights is the creation of the prerequisites for an adequate valuation of property, which requires the formation of a single assessment base; this is why the creation of a Unified Property Register is proposed (Fig. 1). The formation and use of a common unified register of property should bring certain results (Table 1). Each of these results of using the proposed tool, a unified register of property, has its own explanation. In addition to the Antimonopoly Committee and the State Financial Monitoring Service, the Register data would also be extremely useful for the State Property Fund of Ukraine. First of all, it is responsible for the effective implementation of privatization and the disposal of state property. On the other hand, it should monitor privatized property to ensure that investors fulfill their obligations and that privatization yields its socio-economic effect. So, there are certain contradictions between the extent of privatization of state property [13] and the economic results of property transformation. The transformation of ownership took place through a large-scale and fleeting change in the legal status of state-owned enterprises.
Moreover, the development and implementation of the economic mechanism for the sale of enterprises of non-state forms of ownership were pushed into the background [14]. The Ministry of Economic Development and Trade of Ukraine evaluates the effectiveness of property transformation by the reduction of state corporate rights. There is a large list of successful enterprises in which the state owns a significant share. It is for such enterprises that it is important to avoid cases of concentration of ownership through unknown beneficiaries in one structure, which violates the principles of transparency and open competition.

Conclusions

The study shows that the creation of a single common register of property, as the basis for tools for assessing, planning and monitoring the transformation processes of various forms of ownership, would allow property transformations to be adequately monitored not only in terms of possession, but also of use and disposal. Adequacy of the assessment of property transformations means uniformity in the assessment of the same parameters of property and of the conditions for its use in different regions of the country. Ukrainian competition law provides only for the prior form of control over the economic concentration of business entities. The registry proposed in this work will help formalize the process of property transformation in dynamics, helping to avoid the creation of hidden monopolies. Such an instrument for the assessment, state planning and control
Use of the van Hiele Theory in Investigating Teaching Strategies used by College of Education Geometry Tutors

The main purpose of this study is to explore the extent to which 11 selected mathematics tutors facilitate the teaching and learning of geometry at van Hiele Levels 1, 2, 3 and 4 at the college of education level in Ghana. The van Hiele theory of geometric thinking was used to explore the type of teaching strategies employed by the mathematics tutors. The theory served as a guideline from which a classroom observation protocol was developed. Results indicated that the tutors exhibit a good conceptual understanding in facilitating the teaching and learning of geometry that is consistent with van Hiele Levels 1 and 2. However, much of the geometry teaching and learning strategies of the mathematics tutors were not structured in a way that supports the development of geometric thinking as described in van Hiele Levels 3 and 4. Implications for the involvement of college mathematics tutors in utilizing the van Hiele framework are discussed.

INTRODUCTION

The mathematics curriculum in Ghana (Ministry of Education, Science and Sports (MOESS), 2007) reverberates the belief of researchers (Al-ebous, 2016; Alex & Mammen, 2016; Furner & Marinas, 2011; Jones, 2002) that geometry is an essential subject which deserves significant attention in the teaching of mathematics. This is because geometry, considered as a tool for understanding and interacting with the natural world, is perhaps the most intuitive, concrete and reality-linked part of mathematics (Güven & Kosa, 2008). It is in the language of geometry that physical and also imagined spatial environments are conceptualized and analyzed, thus helping students to develop the skills of critical thinking, deductive reasoning and problem-solving (Alex & Mammen, 2012).
In contemporary mathematics and technology, relevant concepts such as symmetry are most simply introduced in a geometric context. Moreover, real life activities such as designing an electronic circuit board, a building, a dress, an airport, a bookshelf, or a newspaper page necessitate an understanding of geometric principles (Yegambaram & Naidoo, 2009). As a result of the importance of geometry in both education and real life, many countries and educational jurisdictions are concerned about how teachers teach and how learners learn various aspects of the subject. In Ghana, geometry forms a substantial part of the college of education mathematics curriculum, as it is put into two major aspects: a content aspect, studied in the first year, and a methodology aspect, studied in the second year (Institute of Education, 2005). Geometry at the primary school level is treated as shape and space, occupying approximately 17% of the 6 major content areas covered in the mathematics teaching syllabus. The main aim of teaching shape and space is to enable learners to "organize and use spatial relationships in two or three dimensions, particularly in solving problems" (MOESS, 2007, p. ii) and also to help them possess a better understanding of other concept areas in mathematics. Thus, the teaching of geometry at the college of education level should be done in ways that promote geometric thinking, so that pre-service teachers can in turn prepare their basic school pupils to effectively master the subject.
Basic school geometry teaching has conventionally focused on the categorization of shapes (Baffoe & Mereku, 2010). As a result of living in a three-dimensional world, learners enter school with a significant amount of intuitive geometric experience. Yet, most mathematics teachers teach geometric concepts using just textbooks and chalkboards (De Villiers, 2012; Khalid & Azeem, 2012). In most geometry lessons, the main emphasis is put on two-dimensional geometry, while three-dimensional geometry rather stays in the background. Ordinary routine calculations dominate geometric activities, while spatial-visual skills are frequently and extensively avoided (Christou, Jones, Pitta, Pittalis, Mousoulides & Boytchev, 2007; Yegambaram & Naidoo, 2009). Currently, existing evidence indicates that traditional textbook-chalkboard teaching strategies do not provide significant spatial experience (Alex & Mammen, 2016; Erdogan & Durmus, 2009). Geometry learning is problematic for many students, as they fail to develop an appropriate understanding of geometric concepts and to acquire geometric problem-solving skills. Van Hiele (1986) attributed this to the fact that, traditionally, teachers presented the curriculum at a higher level than that of the students. In the view of Alex and Mammen (2012), students' weakness in geometric thinking may be due to teachers' failure to use effective and appropriate teaching methods that actively engage students and help them use their thinking skills effectively. Shulman (1987) made a distinction between three groups of knowledge that a teacher should hold in order to teach successfully. These are content knowledge, pedagogical content knowledge and curriculum knowledge. Nevertheless, emphasis is heavily placed on pedagogical content knowledge "because it identifies the distinctive bodies of knowledge for teaching" (Shulman, ibid.).
Studies (Armah, Cofie, & Okpoti, 2017; Gogoe, 2009) suggest that the majority of Ghanaian pre-service teachers' geometrical knowledge is abysmally low. These studies have therefore suggested the need for educators to adopt model-based teaching strategies that are supportive and involve more hands-on investigations that will actively involve the pre-service teachers. This is buttressed by the Ghanaian mathematics curriculum (MOESS, 2007) when it endorses the use of realia and model-based instruction. The van Hiele model is one such model-based instruction recommended by many authors in the literature for effective geometry instruction (Alex & Mammen, 2016; Howse & Howse, 2015; Mostafa, Javad & Reza, 2017). Research has shown that studies on the van Hiele theory and how students learn geometry are an important source for understanding teachers' pedagogical content knowledge on geometry teaching (Erdogan & Durmus, 2009; Halat, 2008). The van Hiele theory has been suggested to be the best and most well-defined theory of students' levels of thinking in the field of geometry (Alex & Mammen, 2016; Rizki, Frentika & Wijaya, 2018). The theory was created to provide for and develop geometric understanding. The van Hiele theory categorizes students' learning abilities in geometry into five distinct and hierarchical levels of geometric thinking, and also offers a model of teaching that teachers could apply in order to promote their learners' levels of understanding in geometry. This is because the teacher has to be aware of effective teaching methods that are aimed at raising students' understanding and giving students the opportunity to participate in the process of teaching.
The van Hiele Levels

The van Hiele theory originally consists of five sequential and hierarchical discrete levels of geometric thought, namely: Visualization, Analysis, Order (Informal Deduction), Deduction, and Rigor (van Hiele, 1986). Each of the five levels (see Table 1) describes the thinking processes used in geometric contexts. As one progresses from one level to the next, the object of one's geometric thinking changes. In primary school, students tend to move from Level 1 to Level 2. For example, at Level 1, learners recognize figures by appearance alone "and compare the figures with their prototypes or everyday things ("it looks like a door"), categorize them ("it is / it is not a…"). They use simple language" (Vojkuvkova, 2012, p. 72). At Level 2, learners start analyzing and naming properties of geometric figures, but they do not understand the interrelationships between different types of figures.

Contribution of this paper to the literature
• In Ghana, colleges of education are responsible for training basic school teachers. It is recognized that Ghanaian basic school learners have great difficulties in geometry.
• This study investigated the teaching strategies used by college of education geometry tutors in Ghana using the van Hiele theory as a lens.
• The study developed an observation schedule that could be used to investigate the tutors' teaching strategies. The results from the study showed that the tutors have good conceptual understanding in facilitating the teaching and learning of geometry that is consistent with van Hiele Levels 1 and 2 but not at Levels 3 and 4.

Then in junior high
school learners move to Level 3, where they see the interrelationships between different types of figures. They can create meaningful definitions and give informal arguments to justify their reasoning at this level. Logical implications and class inclusions, such as squares being a type of rectangle, are understood at this level (Halat, 2008). Learners at Level 4 can give deductive geometric proofs. They understand the role of definitions, theorems, axioms and proofs. It is the view of Schwartz (2008) that since the senior high school geometry curriculum is built on geometric proofs, this is the level of development that learners need to reach prior to the completion of senior high school. Learners at Level 5 "understand the formal aspects of deduction, such as establishing and comparing mathematical systems" (Mason, 1998, p. 5). Here, learners learn that geometry needs to be understood in the abstract; they see the "construction" of geometric systems. The basis of the van Hiele theory is the idea that a student's growth in geometry takes place in terms of distinguishable levels of thinking. In planning geometry lessons, it is important to have these levels in mind (Choi-koh, 1999, cited in Howse & Howse, 2015).
The van Hiele Teaching Phases

According to van Hiele (1986), cognitive progress in geometry can be accelerated by instruction, and there are five phases in the teaching process that promote students' progress from one van Hiele level to the next in geometry classroom instruction. The van Hiele phases (see Table 2) are Inquiry/information, Directed/guided orientation, Explicitation/explanation, Free orientation, and Integration. The approach used in these five phases provides a structured lesson by giving clear explanations of how the teacher should proceed to guide students through "introductory discussion, straightforward exercises, language development, multi-path solution exercises and topic overview in a sequence" (Pegg, 1995, p. 98) as they move from one level to the next. The current study used the van Hiele theory, which encompasses the levels of geometric thinking and the phases of instruction. The phases of instruction specifically consider teachers' mode of instruction; the main objective at this point was to explicate the pedagogical patterns of geometry instruction in colleges of education in Ghana using the van Hiele phases as a lens.
RESEARCH ON THE USE OF VAN HIELE THEORY IN INVESTIGATING GEOMETRY TEACHING STRATEGIES

A review of the literature on the van Hiele theory in geometry instruction has revealed a global advocacy for its adoption to improve the way geometry is taught and learnt (Alex & Mammen, 2016; Howse & Howse, 2015; Suwito, Yuwono, Parta & Irawati, 2017). This advocacy for the use of the van Hiele theory has implications for mathematics teacher education, since the teacher is the driving force in any instructional process. Teacher training institutions have to provide not only professional knowledge but also instructional design capacities to pre-service/prospective teachers to enable them to implement the van Hiele theory in their teaching (Armah et al., 2017). The aim of this provision is to afford pre-service teachers what is required to teach with strategies and approaches proved to be effective.

Table 2. The van Hiele teaching phases
- Information: Teachers guide students to develop vocabulary and concepts for a particular task. The teacher assesses students' interpretation/reasoning and determines how to move forward with future tasks.
- Directed orientation: Students actively engage in teacher-directed tasks. They work with the developments from the previous stage to gain an understanding of them as well as the connections among them.
- Explicitation: Students are given the opportunity to verbalize their understanding. The teacher leads the discussion.
- Free orientation: Students are challenged with tasks that are more complex and discover their own ways of completing each task.
- Integration: Students summarize what they have learned, creating an overview of the concept at hand.
In examining how geometrical concepts were taught to a group of pre-service teachers, van Putten (2008) employed the van Hiele theory of levels of teaching as the theoretical framework. Findings indicated that most pre-service teachers were taught geometry with rote learning methods, using textbooks to present theorems and proofs. Moreover, most proof exercises given to the pre-service teachers were not challenging enough to compel them to engage in effective reasoning. As a result, most of the pre-service teachers could not recall the geometric concepts which were taught and were also not able to apply the concepts logically. Similarly, Muyeghu (2008) investigated the extent to which mathematics teachers facilitate the teaching and learning of geometry at van Hiele Levels 1 and 2 at the Grade 10 level in selected schools in Namibia. The van Hiele theory of geometric thinking and the five Kilpatrick components of proficient teaching were used to investigate the type of teaching strategies employed by the mathematics teachers. These components include conceptual understanding, procedural fluency, strategic competence, adaptive reasoning, and productive disposition (Kilpatrick & Findell, 2001). Findings obtained from the classroom observations and interviews indicated that much of the teaching and learning of Grade 10 geometry in the selected schools is structured in such a way as to support the development of geometric thinking as described in the van Hiele theory, particularly at Levels 1 and 2.
However, the study found evidence indicating that gaps exist with regard to the facilitation of the teaching and learning of geometry in the schools studied. Ding and Jones (2007) also analyzed the teaching of geometrical proof at Grade 8 (the lower secondary school level) in Shanghai, China using the van Hiele theory. The analysis indicated that though the first three of the van Hiele teaching phases were found in the Chinese lessons, the instructional intention of the guided orientation was not exactly the same as that identified by van Hiele. The study suggested that teachers use classroom strategies which attempt to reinforce visual and deductive approaches in order to develop learners' thinking in the transition to deductive geometry instruction.

Using the van Hiele model of geometry instruction as a lens, Atebe (2008) explored geometry instructional practices that possibly contributed to the levels of geometric conceptualization exhibited by a group of high school learners in Nigerian and South African schools. Videotaped lessons were analyzed to determine their conformity with criteria on the checklist of van Hiele phase descriptors developed for the study. Observation findings indicated that instructional processes in the majority of the lessons videotaped did not conform to the van Hiele model of instruction in the geometry classroom. By comparing the van Hiele model of geometry instruction with the observed geometry instructional approaches, Atebe (2008) concludes that the geometry teaching approaches in Nigerian and South African classrooms give learners little opportunity to learn the subject. This, according to the researcher, resulted in the low levels of geometric conceptualization of the cohorts of learners from these areas.
However, in a similar study explaining why Japanese school learners usually outperformed many of their counterparts from other countries in TIMSS, Stigler and Hiebert (1999) concluded that schools in Japan use teaching methods which offer greater opportunities for learning mathematics. Atebe (2008) suggests that in order to teach geometric concepts well, knowledge, skill and judgement are required. Instruction based on the van Hiele model develops substantial geometrical knowledge and lifelong learning skills (Alex & Mammen, 2016). Given the findings of these studies, it would seem that learners whose instructional experiences are in line with the van Hiele phases of learning are most likely to exhibit a better understanding of geometric concepts. One of the strengths of using the van Hiele framework is that the advancement of learners from one geometric thinking level to the next depends more on teaching than on the chronological age of the learner. Teacher training institutions should take note of this as they have the sole mandate of preparing pre-service/prospective teachers for pedagogical activities in the classroom.

PROBLEM STATEMENT

A number of studies have been carried out to investigate pupils' understanding in mathematics in Ghana (Anamuah-Mensah & Mereku, 2005; Anamuah-Mensah, Mereku & Asabere-Ameyaw, 2008; Baffoe & Mereku, 2010). These studies have reported nothing but the abysmal performance of pupils, especially in the field of geometry. The study by Baffoe and Mereku (2010) specifically sought to find out the stages of the van Hiele levels of understanding Ghanaian students reach in the study of geometry before entering senior high school. Results from the study indicated that the van Hiele level of understanding reached by most (i.e., over 90%) Ghanaian students before entering senior high school is lower than that reached by most students at this stage in other countries.
An empirical study (Armah et al., 2017) corroborated the above evidence: the majority of Ghanaian pre-service teachers who took part in that study attained very low levels of van Hiele geometric thinking in a test conducted to assess their geometric thinking levels. In the study, 75.33% of pre-service basic school teachers' van Hiele levels of geometric thinking were below Level 3 (Order). This points to the fact that the pre-service teachers operate at a van Hiele level that is lower than that expected of their future learners. It is, therefore, necessary to look at the kind of training and preparation being given to pre-service teachers in geometry. This is because "the issue of providing quality education to pupils is directly related to the quality of teachers in the system" (Ampiah, 2010, p. 3).

The van Hiele theory has motivated a considerable amount of research in many developed countries (Howse & Howse, 2015; Ma, Lee, Lin, & Wu, 2015; Noh & Abdullah, 2016; Tieng & Eu, 2014) and this has resulted in changes in their geometry curricula and teachers' instructional approaches. However, in Ghana, the literature suggests that studies on the van Hiele theory are limited. For Ghana to realize significant development there is the need for a resilient education system which can compete in the global world of mathematics. This can materialize if there are good teachers who can continuously advance the teaching strategies in mathematics, especially in the geometry classroom. Teachers should, therefore, be trained to act as facilitators of learning, and their instructional approaches should be in agreement with what has been proved to be effective.
PURPOSE OF THE STUDY AND RESEARCH QUESTION

Geometry is one of the major areas of mathematics where students' performance has been abysmal. Over the years research has shown that many students have difficulties in learning geometry and make wrong associations in geometry classrooms (Alex & Mammen, 2012; Armah et al., 2017; Rizki, Frentika & Wijaya, 2018). It is for these reasons that the researchers found it expedient to explore more rigorously the way geometry is taught and, in particular, how college of education mathematics tutors facilitate the learning of geometry. In pursuance of this purpose, the following research question was formulated to guide the study: To what extent do college of education mathematics tutors facilitate geometry teaching and learning consistent with the van Hiele levels of geometric thinking?

Research Design

This research was conducted within an interpretive orientation. Theoretically, the interpretive paradigm gives researchers the opportunity to see and understand the world through the perceptions and experiences of the participants. Interpretivist researchers, therefore, discover reality through participants' views and experiences (Yanow & Schwartz-Shea, 2011). The researcher who employs the interpretive paradigm uses the experiences collected to build and interpret his or her understanding from the gathered data so as to find answers for the research (Thanh & Thanh, 2015).
The researchers adopted a case study methodology as it aligns consistently with an interpretive orientation. Willis (2007) emphasized that "interpretivists tend to favour qualitative methods such as case studies and ethnography" (p. 90). He argues that qualitative approaches such as case studies give exhaustive accounts that are essential for interpretivists to fully appreciate circumstances. Following the nature of the interpretive paradigm and case studies, the researchers conducted this study to elaborate and explore the extent to which college of education mathematics tutors facilitate geometry teaching and learning consistent with the van Hiele levels of geometric thinking.

Participants and Setting

The study involved eleven college of education mathematics tutors who were purposively selected from four colleges of education in the Ashanti, Central and Greater Accra regions of Ghana. Two of the tutors had a Master of Philosophy degree, seven had a Master of Education degree, while the other two had a Bachelor of Education degree in Mathematics. All the tutors attended the two major teacher training universities in Ghana, University A and B. Two of the tutors had ten years of teaching experience at the college of education level, three had eight years of teaching experience, while the rest had less than six years of teaching experience. There was one female tutor and ten male tutors. Three of the tutors were between 40 and 45 years old, another three were between 36 and 39 years, and the rest were between 30 and 35 years. The participants were assured of their right to confidentiality and anonymity and were also informed of their right to withdraw from the study at any stage.
Data Collection

In this study, data were gathered through classroom observation. The classroom observation schedule was adapted from Muyeghu (2008) based on the van Hiele theory. The schedule was constructed by taking into account the findings of researchers on the knowledge and skills pre-service teachers and tutors need to master and facilitate at the various van Hiele levels. Muyeghu (2008) considered the role of the teacher at the Visual Level as well as at the Analysis Level; thus, activities at the Visual Level (Level 1) and at the Analysis Level (Level 2) were shown in his schedule. These activities were evaluated against the teacher's practice to determine the teaching strategies that selected teachers used to facilitate the development of geometric thinking at van Hiele Levels 1 and 2 (Muyeghu, 2008). However, for the purpose of this study, the researchers adapted the observation schedule by extending the levels to include Levels 3 and 4. The schedule assessed the teaching skills of the tutors on a three-point scale: weak, moderate and strong. There was an additional column for any comment that the researchers wished to make.
Each tutor was observed twice in a college of education first year class teaching plane shapes such as triangles (including congruent and similar triangles), rectangles, parallelograms, rhombi, squares and their properties. At the time of the data collection, there was an average of 35 pre-service teachers in a class and each geometry lesson lasted an average of one hour. The classroom observation protocol that was designed integrated elements of the van Hiele levels. The researchers observed with particular interest the structure of lessons, introductions, presentations, evaluations and classroom management, with a particular focus on the measures that surround the van Hiele levels. During each tutor's lesson, keen observations were made against the various descriptions of criteria surrounding the van Hiele theory on the observation checklist developed. Thus, for each tutor's lesson, clear indications of the strength of the lesson were made in line with the van Hiele levels; that is, whether the lesson was weak, moderate or strong in developing geometric thinking at van Hiele Levels 1, 2, 3 and 4. In cases where there was the need for additional comments, the researchers provided them in the column given. The researchers also closely observed and noted the transition from various forms of direct instruction as a tutor moved through the teaching phases towards the pre-service teachers' independence from the tutor.

Analysis of Data

In analyzing the data, it was important to deeply understand the nature of the van Hiele theory. After a profound study of the van Hiele framework as well as van Hiele-based research on the teaching phases and thinking levels, the researchers formulated an operational characterization of the theory in the teaching of plane shapes and used it to analyse the data collected in the Ghanaian college of education classroom.
In order to explore more rigorously how college of education mathematics tutors facilitate the learning of geometry, the study adopted qualitative research methods. Data analysis thus involved contextualization, which involves the interpretation of research findings with reference to data obtained from interviews and observations (Mertler & Charles, 2005). In this study, the researchers recounted all events that originated from the study by describing and interpreting the outcomes. Hittleman and Simon (2002, p. 38) state that the basic qualitative research purposes are to "...describe, interpret, verify and evaluate" and further elaborate by saying that "... in interpretive analysis, the researcher explains or creates generalizations". Patterns that emerged from the classroom observation in this study were described in connection with the elements that characterize the van Hiele theory so that one can make meaning from the data.

RESULTS

The themes discussed in this section were derived from the classroom observations. They also represent some of the characteristic teaching elements of the van Hiele levels. The researchers report on the observations according to the following themes:

• Observations regarding displaying of shapes.
• Observations regarding the use of language to describe shapes.
• Observations regarding providing hands-on activities for pre-service teachers.
• Observations regarding guiding pre-service teachers to establish the properties of a shape.
• Observations regarding guiding pre-service teachers to analyze properties of geometric shapes and the interrelationship between different types of shapes.
• Observations regarding the development and usage of accurate terminology.
• Observations regarding guiding pre-service teachers to create, compare and contrast different proofs.
Observations regarding Displaying of Shapes

Among the eleven college of education mathematics tutors observed, six (five male and one female) had a variety of different geometric shapes to show to their classes, as should be done in line with van Hiele Level 1. These shapes were triangles and parallelograms (squares, rectangles and rhombi). It was also noted that apart from these six tutors, none of the other tutors provided examples of the properties of shapes. During the lessons, some of the tutors emphasized that the pre-service teachers had been taught shapes such as triangles and parallelograms at their previous levels of education (junior high school and senior high school); thus, they were already familiar with them and there was no need to display such shapes to them again. This strategy, from the researchers' observation, was weak in developing geometric thinking at van Hiele Level 1; the strategy did not help the pre-service teachers to visualize the shapes even though they were familiar with them.

Although the pre-service teachers had been taught such shapes during their mathematics lessons at the junior high school and senior high school levels, it was observed that during the six tutors' lessons, pre-service teachers were stimulated and motivated through a visualization approach. Pre-service teachers were able to visualize shapes and thereby considerably develop geometric thinking. However, the majority of the tutors provided some routine problems on shapes and asked the pre-service teachers to solve them using the area and perimeter formulae rather than referring to the properties in general. This is a strong strategy and is consistent with van Hiele Level 1, as pre-service teachers know the perimeter and area formulae from their previous levels of education.
Observations regarding the Use of Language to Describe Shapes

In line with van Hiele Level 1, observations were made to determine whether the tutors used informal as well as formal language to describe shapes and whether the tutors encouraged pre-service teachers to describe in their own words the properties of a typical shape. It was observed that the tutors frequently guided pre-service teachers to use both informal and formal language to describe shapes. The following are some examples:

Tutor A: The tutor asked pre-service teachers to describe a scalene triangle in their own words. One of the pre-service teachers answered, "in a scalene triangle all the parts have different measure".

Tutor E: The tutor asked pre-service teachers to describe a rectangle. One pre-service teacher answered, "a rectangle is a figure with opposite sides equal and also parallel".

Apart from these examples, it was also observed that the tutors themselves used informal language to describe shapes. These examples show that tutors encourage pre-service teachers to describe concepts in their own words using informal language, which is regarded as a strong strategy in developing Level 1 geometric thought within the van Hiele theory (van Hiele, 1986; Yazdani, 2007).
Observations regarding Providing Hands-on Activities for Pre-service Teachers

In line with van Hiele Level 2, observations were carried out to determine whether tutors provided hands-on activities requiring pre-service teachers to focus on the properties of shapes and to use vocabulary appropriately. Observations showed that only three of the tutors provided hands-on activities to their pre-service teachers. The activities designed for the pre-service teachers included the use of mathematical instruments and paper cut-outs in constructing, folding, reflecting, rotating and measuring plane shapes such as triangles, squares, rectangles and rhombi. Again, these tutors asked pre-service teachers to construct shapes according to their properties. These were described as strong strategies. However, the majority (eight) of the tutors taught concepts of shapes theoretically using the chalk or marker boards, and teaching was greatly dominated by the tutors. This was a weak strategy in developing van Hiele Levels 2 and 3 thinking. Once again, from the classroom observations, the assumption of these tutors was that the pre-service teachers had been taught concepts of shapes at their previous levels of education (junior high school and senior high school); thus, there was no need for any hands-on activities.
Observations regarding Guiding Pre-service Teachers to Establish the Properties of a Shape

Teaching learners how to empirically establish the properties of a typical shape is regarded as Level 2 geometric thought within the van Hiele theory. It is easier and more logical to establish the properties of a typical shape by first undertaking hands-on activities on those shapes (van Hiele, 1986; Vojkuvkova, 2012). Since most of the tutors did not provide any hands-on activities on geometrical shapes requiring pre-service teachers to focus on the properties of the shapes, it was observed that those tutors could not logically guide pre-service teachers to empirically establish the properties of a typical shape. However, it was observed that the three tutors who provided some well-designed hands-on activities for their learners were able to guide them in establishing properties of plane shapes.

Observations regarding Guiding Pre-service Teachers to Analyze Properties of Geometric Shapes and the Interrelationship between Different Types of Shapes

Van Hiele (1999) posited that at van Hiele Level 3: "Students use properties that they already know to formulate definitions, for example, for squares, rectangles, and equilateral triangles, and use them to justify relationships, such as explaining why all squares are rectangles or why the sum of the angle measures of the angles of any triangle must be 180" (p. 311).

In order to effectively help learners analyze properties of geometric shapes and see the interrelationship between different types of shapes, appropriate activities should be designed for them (Abu & Abidin, 2013; Kutluca, 2013; van Hiele, 1999). It was expected that tutors would provide hands-on activities to pre-service teachers; however, most of them did not. Therefore, guiding pre-service teachers to analyze properties of geometric shapes and the interrelationship between different types of shapes was difficult for the tutors, and most of them did not achieve this.
Observations regarding the Development and Usage of Accurate Terminology

Observations were made to determine whether tutors ensured that accurate and appropriate geometric terminology was developed and used. Classroom observations revealed that most of the tutors ensured the development and usage of accurate geometric terminology. Below are examples by some tutors:

Tutor A: The tutor drew an isosceles triangle on the board and asked pre-service teachers to name and describe it. One pre-service teacher answered, "it's a triangle with two equal parts and two equal angles". The tutor asked the pre-service teacher to use the appropriate terminology to describe the triangle well. Another pre-service teacher answered, "it's an isosceles triangle; it has two opposite sides and angles equal". The tutor then emphasized that "it's an isosceles triangle and the equal angles are called base angles. In a triangle, equal angles face equal sides".

Tutor C: The tutor asked pre-service teachers to define perpendicular lines. One pre-service teacher answered, "they are lines that form an angle of 90°". The tutor then emphasized that "yes, they are lines that meet at 90° or right angles".

These techniques were described as strong strategies; they are consistent with the van Hiele theory and operate at van Hiele Levels 2 and 3.
Observations regarding Guiding Pre-service Teachers to Create, Compare and Contrast Different Proofs

At van Hiele Level 4, learners should be able to construct proofs, understand the role of axioms and definitions, and also know the meaning of necessary and sufficient conditions (Yazdani, 2007). The mathematics course outline for college of education pre-service teachers indicates that they are to be taught "congruent and similar triangles", where they are to use criteria or postulates to prove whether triangles are congruent or not. Also, under the topic "circle theorems", the pre-service teachers are to do some simple proofs. It was observed that only two tutors carried out some deductive geometric proofs with pre-service teachers. However, the proofs were not presented in a rigorous manner, and most of the proof exercises given to the pre-service teachers were not challenging enough to invoke high-level thinking as in van Hiele Level 4. This strategy was described as moderate. Apart from these two tutors, the others only discussed the various types of triangles and their properties under the topic "congruent and similar triangles". No deductive geometric proofs were carried out to facilitate understanding at van Hiele Level 4. This was, on the other hand, described as a weak strategy in developing van Hiele Level 4 geometric thinking.
DISCUSSION AND CONCLUSION

Whereas the van Hiele theory is said to have tremendous pedagogical benefits in geometry education, research is still limited about its use and effect on the spatial ability and geometric knowledge for teaching among pre-service teachers in Ghana. The purpose of the study was to use the van Hiele theory in investigating teaching strategies used by college of education geometry tutors in Ghana and thus provide a rich and in-depth description of the geometry instructional practices at the college of education level in Ghana. In this study, each college tutor's lesson (on two-dimensional geometry) was observed twice with particular emphasis on the measures that surround the van Hiele levels.

Contextual analysis of the data from classroom observation revealed that the selected college of education mathematics tutors exhibit a good conceptual understanding of geometry in facilitating the teaching and learning of geometry that is consistent with van Hiele Levels 1 and 2. For instance, tutors displayed geometric shapes to pre-service teachers, tutors encouraged pre-service teachers to describe concepts in their own words using informal language, and tutors also ensured that accurate and appropriate geometric terminology was developed and used. These results confirm the findings (Ding & Jones, 2007; Muyeghu, 2008) that teachers provide opportunities for learners to develop the geometric thinking skills needed to operate at the basic van Hiele levels (i.e., Levels 1 and 2). As Armah et al. (2017) acknowledged, it is anticipated that involving pre-service teachers in more effective geometry lessons could make them feel competent and have absolute control in their future teaching practices at the basic school.
Key to this study is the finding that the majority (eight) of the tutors taught most geometric concepts theoretically using only textbooks, chalk and marker boards, with the lessons greatly dominated by the tutors themselves, as reported in the literature (Atebe, 2008; De Villiers, 2012; Khalid & Azeem, 2012; Yegambaram & Naidoo, 2009). Consequently, these tutors did not choose materials and tasks which targeted the key concepts and procedures under consideration. Teaching was therefore not inspiring, enthusiastic or challenging. The results also confirm the finding (van Putten, 2008) that many pre-service teachers are taught geometry with rote learning methods, using textbooks to present geometric concepts. This, according to Armah et al. (2017), may have resulted in the low geometric thinking levels of most pre-service teachers as currently reported in colleges of education in Ghana. However, the study found that three tutors did otherwise by providing some carefully designed hands-on activities for their pre-service teachers, which is significant to mention at this point. The activities included using such mathematical instruments as a pair of compasses, a protractor, pencils, a measuring ruler and paper cut-outs in constructing, folding, reflecting, rotating and measuring plane shapes such as triangles, squares, rectangles and rhombi. Even though the activities involved short tasks, from keen observation these activities gradually revealed to the pre-service teachers the structured characteristics of the geometric shapes under study. To a large extent, these activities succeeded in eliciting specific responses from the pre-service teachers as they observed things about the angles, sides, and diagonals of the plane shapes.
As van Hiele (1986) reiterated, carefully designed hands-on activities are imperative for guiding students to empirically establish the properties of a shape. It was therefore not surprising that most tutors could not guide their pre-service teachers to establish properties of plane shapes, as these tutors failed to assign hands-on activities to pre-service teachers. Furthermore, to analyze properties of geometric shapes and the interrelationship between different types of shapes, hands-on activities are a prerequisite (Abu & Abidin, 2013; Kutluca, 2013). Van Hiele (1999) also explained that at van Hiele Level 3 students use properties of shapes that they already know to formulate definitions and justify relationships, thus understanding the interrelationship between different types of shapes, such as squares being referred to as rectangles. Therefore, the finding that most tutors did not succeed in guiding pre-service teachers to effectively analyze properties of geometric shapes and see the interrelationship between different types of shapes was also not surprising. These findings may provide profound explanations for why studies (Armah et al., 2017; Baffoe & Mereku, 2010) found that class inclusion (which refers to seeing the interrelationship between different types of shapes) was sorely lacking among learners.
A major characteristic of learners operating at van Hiele Level 4 of geometric thinking is the ability to construct proofs, understand the role of axioms and definitions, and also know the meaning of necessary and sufficient conditions. The study found that only two tutors taught some deductive geometric proofs to pre-service teachers. However, this was done in an ineffective manner, with less stimulating proof exercises being administered to pre-service teachers. In a way, this did not encourage higher-level thinking. Considering the level of learners involved in this study, this was a disturbing phenomenon, as pre-service teachers are being trained to take charge of pedagogical activities at the basic schools and are thus required to exhibit geometric thinking levels higher than those of their expected audience.

Based on the findings, the researchers conclude that the selected college of education mathematics tutors exhibit a good conceptual understanding of geometry in facilitating the teaching and learning of geometry that is consistent with van Hiele Levels 1 and 2. However, much of the geometry teaching and learning strategies of the mathematics tutors are not structured in a way that supports the development of geometric thinking as described in van Hiele Levels 3 and 4.
In the field of mathematics teacher training and development, presenting to pre-service/prospective teachers how to go about pedagogical activities and preparing teaching and learning materials may not be anything new. However, the design of lessons with geometrically rich tasks by teachers is currently limited in mathematics teacher education and teaching practices, particularly in Ghana. This may be a contributing factor to the abysmally low geometric competencies of most Ghanaian pre-service teachers as reported in the literature (Armah et al., 2017; Gogoe, 2009). The findings in this study, therefore, support the advocacy in the literature (Alex & Mammen, 2016; Howse & Howse, 2015; Ma, Lee, Lin, & Wu, 2015; Noh & Abdullah, 2016; Suwito, Yuwono, Parta & Irawati, 2017; Tieng & Eu, 2014) for the use of the van Hiele framework by teachers in the construction and delivery of lessons.

RECOMMENDATIONS

It is therefore recommended that teacher education institutions integrate the van Hiele theory in the teaching of geometry concepts; college of education mathematics tutors should design appropriate hands-on activities for pre-service teachers to explore properties of geometric shapes. College tutors should also guide pre-service teachers in effectively exploring the interrelationship between shapes. Moreover, deductive geometric proofs should be taught in colleges of education and should be done in a thorough manner involving more challenging and thought-provoking exercises to enable pre-service teachers to operate at higher van Hiele levels of geometric conceptualization. The integration of the van Hiele model in teaching at teacher training institutions would not only improve pre-service teachers' geometric thinking levels as reported in Erdogan and Durmus (2009) but also help build their competencies and craft knowledge to enable them to plan geometrically rich lessons for instruction at the basic school
level later when they finish their teacher training programmes.

This study was limited to using the van Hiele theory in investigating the teaching of plane shapes. Hence, it is important to indicate that these conclusions are based purely on the indicators of the van Hiele theory. It is also worth bearing in mind that although the results of the classroom observations provided information on geometry teaching strategies in the participating colleges, the observed lessons can only be partial representations of the whole process of instruction in these colleges. The findings concerning teaching strategies presented in this study are thus tentative. Further studies could investigate the teaching of other areas of geometry, such as three-dimensional shapes, circles and coordinate geometry, using the van Hiele theory. Also, as the present study involved only a small number of college tutors in the geometry classroom with only two lessons observed per tutor, a study that explores the geometry practices of more college tutors is imperative to give a more general picture of college tutors' geometry teaching strategies in Ghana.

Table 2. Framework of the van Hiele phases of teaching: phase descriptions
Subgroup dairy products consumption on the risk of stroke and CHD: A systematic review and meta-analysis

Background: There is no global consensus about the relationship between dairy consumption and cardiovascular diseases (CVD). This study aimed at integrating the results of several studies to estimate the effects of dairy on CVD, e.g. stroke and CHD. Methods: In the present study, major databases such as Scopus, Science Direct, and PubMed were searched up to September 2014. All prospective cohort studies dealing with dairy products consumption and CVD were surveyed regardless of their publication date or language. The reference population included all individuals, without any restriction with regard to age, gender, or race. The quality of the studies was evaluated using the STROBE checklist. Study selection and data extraction were done by 2 independent researchers separately. The indices in this study were RR and HR. A random-effects model was used to combine the results. Results: Out of 6234 articles, 11 were included in the meta-analysis. No relationship was found between stroke and consumption of milk, cream, and butter; the results are as follows: RR = 0.91 (95%CI: 0.81-1.01) for milk, RR = 0.97 (95%CI: 0.88-1.06) for cream, and RR = 0.95 (95%CI: 0.85-1.07) for butter. However, cheese was found to decrease stroke risk: RR = 0.93 (95%CI: 0.88-0.99). The relationships of CHD with consumption of milk, cheese, cream, and butter are as follows, respectively: RR = 1.05 (95%CI: 0.96-1.15), RR = 0.90 (95%CI: 0.81-1.01), RR = 0.96 (95%CI: 0.87-1.06), and RR = 0.99 (95%CI: 0.89-1.11). In other words, no relationship existed between dairy products and CHD. Conclusion: No relationship was found between consumption of various dairy products and CHD or stroke, except for cheese, which decreased stroke risk by 7%.
Considering the small number of studies, the result of the present study is not generalizable and more studies need to be conducted.

Introduction

Cardiovascular diseases (CVD) are one of the 10 most important causes of death worldwide. According to World Health Organization (WHO) statistics, 17.5 million people died of CVD around the world in 2012, making up 31% of the total mortality worldwide. Of these death cases, 7.4 million were due to CHD (coronary heart disease) and 6.7 million were caused by stroke. It is predicted that mortality from CVD will increase to 23.6 million cases in 2030 (1). Dairy products are rich in minerals (calcium, potassium, and magnesium), proteins (casein and whey), and vitamins (riboflavin and Vitamin B12); thus, dairy products can have positive effects on CVD (2). On the other hand, intake of saturated fat increases low-density lipoprotein cholesterol (LDL-C) and, in turn, raises the incidence of CVD, e.g. stroke and coronary heart diseases (3). Because full fat dairy products contain saturated fat, it is recommended that they be replaced by low-fat dairy; for example, butter could be replaced by margarine to decrease the intake of saturated fat (4,5). Some evidence shows that low fat dairy products lower blood pressure (6,7). It has been shown in some studies that the Dietary Approaches to Stop Hypertension (DASH) dietary pattern, including a high amount of vegetables, fruit, nuts, fish, and low fat dairy, decreases blood pressure; this can be partly attributed to low fat dairy products, which play a role in reducing CVD (8). European handbooks also recommend the DASH diet and low fat dairy products to prevent CVD; yet, this has not been fully documented with appropriate evidence (9). Different studies have yielded different results as to the link between dairy products and stroke or CHD. Several prospective cohort studies indicated that milk, cheese, butter, and cream stand in an inverse relationship to stroke or CHD (10)(11)(12).
However, some other prospective studies revealed a direct relationship between dairy products and stroke or CHD (13)(14)(15). A meta-analysis of 17 cohort studies in 2011 showed that milk consumption has an inverse relationship with overall CVD, whereas it bears no relationship to CHD and stroke (16). As no meta-analysis had been conducted to find the relationship between consumption of various dairy products and stroke and CHD, and as each dairy product acts through a different mechanism in causing CVD, a meta-analysis on this subject is needed to determine the effect of each dairy product one by one.

Methods

Our search was done using the strategy of combining the following keywords: ("milk" OR "cheese" OR "butter" OR "cream") AND ("cerebrovascular disease" OR "cardiovascular diseases" OR "stroke" OR "cerebral infarction" OR "coronary heart disease" OR "myocardial infarction" OR "MI" OR "ischemic heart disease" OR "IHD"). The search was limited to the following databases: PubMed, April 1945 to September 2014; Science Direct, April 1823 to September 2014; and Scopus, April 1973 to September 2014. Systematic browsing studies were used to search the journal articles. The search strategy for Medline was developed first and was then adapted for the remaining databases.

Inclusion criteria

All cohort studies that examined the relationship between dairy products and stroke or CHD were included regardless of their publication date and language. Our population of interest included all individuals irrespective of their age, gender, or race. Our main focus was all dairy products, and the outcome was incidence or mortality (stroke or CHD). Based on the WHO International Classification of Diseases, ICD-10, stroke includes I60-I69, and CHD is defined as acute myocardial infarction, angina pectoris, and other ischemic heart diseases (www.who.int/classifications/icd/en).
Data collection and validity assessment

Two researchers (FG and MK) independently took responsibility for article selection to ensure that articles were selected properly, were relevant to the research topic, and matched the inclusion criteria. Authors' names, journals' names, and the results were available to the 2 researchers. Any disagreement between the 2 researchers was settled via consultation with a third researcher (YA). The 2 researchers extracted the data from the selected studies. Variables for data analysis consisted of the corresponding author's name, study title, publication date, place of the study, baseline age, population size, number of cases, follow-up period, gender, total dairy, low fat dairy, high fat dairy, the outcome of focus (incidence or mortality of CHD or stroke), RR (95% CI) or HR (95% CI) for the ratio of the highest group to the lowest group, and the variables adjusted for in the analysis. In case a study was published more than once, the report with the higher number of desired outcomes was selected. The STROBE checklist was used to assess the quality of the studies (17). Two researchers (FG & MK) assessed the quality of the studies independently. The assessment items were as follows: (a) accurate study plan (here, prospective), (b) clear and precise explanation of the measurement method of dairy products, (c) precise description of the measurement method of the incidence or mortality of CHD or stroke, (d) stating the data gathering time span, (e) follow-up period, (f) referring to inclusion and exclusion criteria, and (g) explaining how loss to follow-up was addressed (Fig. 1).

Measures of exposure effect and data analysis

The general index for the assessment of the strength of association, RR and HR, was determined with 95% confidence interval.
Relative risk or hazard ratio was defined as the risk of stroke or CHD in individuals with the highest rate of consumption of dairy products in proportion to the risk of stroke or CHD in those with the lowest rate of consumption of dairy products. Each study's weight was based on the inverse of its variance. The results for males and females were reported separately. The Stata software Version 12 was used for data analysis. The data were pooled via the random-effects model. A p value less than 0.05 was considered significant.

Heterogeneity, publication bias, and sensitivity analysis

Heterogeneity was quantified via I², for which the Higgins classification was used, with 25% showing low heterogeneity, 50% mid heterogeneity, and 75% high heterogeneity (18). A funnel plot was used for publication bias (19), which was statistically examined using the Egger test (20). Sensitivity analysis was used to identify any study whose exclusion would remarkably change the pooled results (21).

Results

In the present research, 6234 studies were found in the databases. After title-based assessment, 150 articles were selected, and 70 were chosen after the deletion of repeated studies. After abstract examination, 25 articles with complete texts were found, and 14 studies were deleted because they did not contain the relevant data on the relationship between dairy products and CHD and stroke. Finally, 11 studies were entered into the meta-analysis (Fig. 2). We found 9 studies with 10 separate results in which milk was included in the CHD meta-analysis of 212 767 individuals with 4866 CHD cases. Ten studies were included, with 11 separate results, in which milk was included in the stroke meta-analysis of 440 397 individuals, with 22 946 stroke cases. To estimate dairy products intake, an FFQ questionnaire was used in 10 studies, and a 7-day household inventory method was used in only 1 study (22).
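As a rough illustration of the pooling machinery described above (inverse-variance weights, I², and a random-effects model), the sketch below implements the standard DerSimonian-Laird estimator in Python. The log relative risks and standard errors are made-up numbers for illustration only; they are not the studies pooled in this paper.

```python
import numpy as np

# Made-up log relative risks and standard errors for k hypothetical cohort
# studies -- illustrative only, not the studies pooled in this paper.
log_rr = np.array([-0.10, 0.05, -0.08, 0.02, -0.12])
se = np.array([0.06, 0.08, 0.05, 0.10, 0.07])
k = len(log_rr)

# Fixed-effect (inverse-variance) weights and pooled estimate.
w = 1.0 / se ** 2
pooled_fixed = np.sum(w * log_rr) / np.sum(w)

# Cochran's Q and the I^2 heterogeneity statistic (Higgins classification).
Q = np.sum(w * (log_rr - pooled_fixed) ** 2)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird between-study variance, then random-effects pooling.
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1.0 / (se ** 2 + tau2)
pooled_re = np.sum(w_re * log_rr) / np.sum(w_re)
rr_re = float(np.exp(pooled_re))          # pooled relative risk
```

Pooling is done on the log scale because relative risks are ratios; exponentiating the pooled log-RR returns to the risk-ratio scale.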
The follow-up period of all studies was equal to or more than 10 years. Nine studies were adjusted for the main variables such as age, gender, cigarette smoking, alcohol, total energy intake, and body mass index (BMI); 2 studies were not adjusted for the mentioned variables (22,23). Seven studies were of high quality, 2 of mid quality (13,24), and 2 had low quality (11,25) (Table 1). It was not possible to execute a meta-regression to detect heterogeneity sources, because there were not enough studies in each subgroup. However, considering the results of the Egger test, there was no publication bias regarding the relationship of subgroup dairy products and CHD. Egger test results were p = 0.60 for milk, p = 0.38 for cheese, p = 0.44 for cream, and p = 0.45 for butter, but the funnel plot was asymmetric only in the relationship between milk and CHD because of small study effects in 4 studies, which were deleted (11,22,24,26). The study index of RR = 1.06 (95% CI: 0.97-1.16) was obtained, which did not differ remarkably from the main index for CHD (Fig. 5). For stroke, Egger test-based publication bias results were p = 0.45 for milk, p = 0.65 for cheese, p = 0.30 for cream, and p = 0.87 for butter, so none of them was significant. However, the funnel plot was asymmetric only in the relationship between milk and stroke due to small study effects in 3 studies (11,22,24), which were deleted, and a general estimation of the study index of RR = 0.91 (95% CI: 0.81-1.02) was obtained, which did not differ remarkably from the main index (Fig. 5). The results of milk sensitivity analysis revealed that the general conclusion of the study did not differ significantly with the deletion of each study, and its index was 1.05 (95% CI: 0.93-1.17). Moreover, the results for stroke revealed that the parameter was 0.91 (95% CI: 0.79-1.03).
After excluding the ATBC study (13), the study index changed remarkably, RR = 0.85 (95% CI: 0.78-0.91), and the heterogeneity decreased from 71.4% to 12.3%. The results of sensitivity analysis on the relationship between consumption of cheese, butter, and milk with CHD and stroke indicated that deletion of each study did not considerably change the main index.

Discussion

As CVD has a great adverse effect on public health, implementing strategies to identify changeable factors that are capable of minimizing the likelihood of these diseases is highly important. Therefore, attempting to assess the possible associations between the rate of dairy intake and CHD and stroke, we used all accessible data reported by prospective cohort studies. Despite using the data of observational studies, our results indicated no significant effect of dairy intake on the rate of CVD. In this meta-analysis, no relationship was found between stroke or CHD events and consumption of milk, cheese, cream, and butter. Nevertheless, in 2014, a meta-analysis of 15 cohort studies showed a non-linear relationship between milk and stroke. It was found that consumption of fermented milk (200 mL/day) decreases the incidence of stroke by 18%. This study expressed the relative risk of cheese consumption on stroke as 0.94 (0.89-0.995). However, no relationship was found between the risk of stroke and consumption of non-fermented milk, cream, and butter (28). Similarly, a meta-analysis done in 2004 revealed that drinking milk has a slight effect on stroke or heart disease (29). However, in a dose-response meta-analysis of prospective cohort studies, no relationship was found between milk intake and CHD or stroke (16). Many cohort studies have investigated the relationship between dairy subgroups and stroke and CVD, among which a population-based study (2013) merits special remark; it showed that none of the dairy subgroups have any connection with CHD or stroke (12).
In the Malmö Diet and Cancer Cohort, an interaction was observed between cheese and gender, and it was found that cheese consumption significantly decreases the risk of CVDs in females, but not in males (10). In addition, in the Elwood prospective study on males, it was found that those with mid or high milk intake had a risk of stroke of 0.52 (0.27-0.99) and of ischemic heart disease of 0.88 (0.56-1.40) (30). In contrast with these findings, in the Cohort Study of Male Stroke that included males aged 50 to 69 years, it was found that drinking milk increases intracerebral hemorrhage (disease incidence was 41% higher in the highest quintile compared to the lowest one: RR = 1.41, 95% CI = 1.02-1.96). Meanwhile, no relationship was found between consumption of cheese, butter, and dairy subtypes and stroke (13). However, it should be taken into account that the population of the mentioned study was male smokers, a group more exposed to the risk of CVDs; thus, the findings of that study are not generalizable. Considering the findings, if modified or processed milk products replace milk, it will be possible to lower the incidence of CHD. This is because processed milk products decrease hypercholesterolemia, and since cholesterol and triglyceride levels are 2 main factors causing CHD, reducing their levels will decrease CHD incidence. Additionally, an expert panel concluded that whole milk may not affect blood lipids as predicted from its fat content and fat composition (31). In Rossouw's study, it was found that intake of skim milk and yoghurt lowers cholesterol level, but consuming milk has no effect on plasma cholesterol (32), and that fermented-milk consumers are 15% less prone to CVDs (10).
In addition, a double blind crossover study indicated that daily intake of 90 g of immune milk decreases total cholesterol by 5.2% (95%CI = 2.5-7.9) and LDL by 7.4% (95%CI = 4.1-10.7); this study claimed that immune milk can be efficient in the nutritional management of those with hypercholesterolemia (33). Limitations of this study were as follows: (1) although the relative risk obtained in this study was adjusted for many important variables, some unadjusted variables may still be present and residual confounding may also exist; (2) while cohort studies are firm in causality, nutritional habits may change in the course of time, and therefore, the observed connection cannot be authentically approved.

Conclusions

No relationship was found between various dairy products and CHD or stroke; only cheese decreased stroke risk by 7%. Moreover, considering the small number of studies, the results are not generalizable, and more studies need to be conducted.
Unsupervised and Supervised Principal Component Analysis: Tutorial

This is a detailed tutorial paper which explains the Principal Component Analysis (PCA), Supervised PCA (SPCA), kernel PCA, and kernel SPCA. We start with projection, PCA with eigen-decomposition, PCA with one and multiple projection directions, properties of the projection matrix, reconstruction error minimization, and we connect to autoencoder. Then, PCA with singular value decomposition, dual PCA, and kernel PCA are covered. SPCA using both scoring and the Hilbert-Schmidt independence criterion is explained. Kernel SPCA using both direct and dual approaches is then introduced. We cover all cases of projection and reconstruction of training and out-of-sample data. Finally, some simulations are provided on the Frey and AT&T face datasets for verifying the theory in practice.

Introduction

Assume we have a dataset of instances or data points {(x_i, y_i)}_{i=1}^n with sample size n and dimensionality x_i ∈ R^d and y_i ∈ R^ℓ. The {x_i}_{i=1}^n are the input data to the model and the {y_i}_{i=1}^n are the observations (labels). We define X := [x_1, ..., x_n] ∈ R^{d×n} and Y := [y_1, ..., y_n] ∈ R^{ℓ×n}. We can also have an out-of-sample data point, x_t ∈ R^d, which is not in the training set. If there are n_t out-of-sample data points, {x_{t,i}}_{i=1}^{n_t}, we define X_t := [x_{t,1}, ..., x_{t,n_t}] ∈ R^{d×n_t}. Usually, the data points exist on a subspace or sub-manifold. Subspace or manifold learning tries to learn this sub-manifold (Ghojogh et al., 2019b). Principal Component Analysis (PCA) (Jolliffe, 2011) is a very well-known and fundamental linear method for subspace learning and dimensionality reduction (Friedman et al., 2009). This method, which is also used for feature extraction (Ghojogh et al., 2019b), was first proposed by Pearson in 1901 (Pearson, 1901). In order to learn a nonlinear sub-manifold, kernel PCA was proposed by (Schölkopf et al., 1997; 1998).
It maps the data to a high dimensional feature space hoping that the data fall on a linear manifold in that space. PCA and kernel PCA are unsupervised methods for subspace learning. To use the class labels in PCA, supervised PCA was proposed (Bair et al., 2006), which scores the features of X and reduces the features before applying PCA. This type of SPCA was mostly used in bioinformatics (Ma & Dai, 2011). Afterwards, another type of SPCA (Barshan et al., 2011) was proposed which has a very solid theory, and PCA is actually a special case of it when the labels are not used. This SPCA also has dual and kernel versions. It is noteworthy that parametric PCA (Levada, 2020) has also been proposed recently. PCA and SPCA have had many applications, for example eigenfaces (Turk & Pentland, 1991a;b) and kernel eigenfaces (Yang et al., 2000) for face recognition, and detecting the orientation of an image using PCA (Mohammadzade et al., 2017). There exist many other applications of PCA and SPCA in the literature. In this paper, we explain the theory of PCA, kernel PCA, SPCA, and kernel SPCA and provide some simulations for verifying the theory in practice.

Principal Component Analysis

Assume we want to project x onto the column space of U, denoted by Col(U). The projection of x ∈ R^d onto Col(U), and then its representation in R^d (its reconstruction), can be seen as a linear system of equations:

x̂ = U β,   (1)

where we should find the unknown coefficients β ∈ R^p. If x lies in Col(U) or span{u_1, ..., u_p}, this linear system has an exact solution, so x̂ = x = U β. However, if x does not lie in this space, there is no solution β for x = U β and we should solve for the projection of x onto Col(U) or span{u_1, ..., u_p} and then its reconstruction. In other words, we should solve Eq. (1). In this case, x̂ and x are different and we have a residual:

r = x − x̂ = x − U β,   (2)

which we want to be small. As can be seen in Fig. 1, the smallest residual vector is orthogonal to Col(U); therefore:

U^T (x − U β) = 0  ⟹  β = (U^T U)^{-1} U^T x.   (3)

It is noteworthy that the Eq.
(3) is also the formula of the coefficients in linear regression (Friedman et al., 2009), where the input data are the rows of U and the labels are x; however, our goal here is different. Nevertheless, in Section 2.4, some similarities of PCA and regression will be introduced. Plugging Eq. (3) in Eq. (1) gives us:

x̂ = U (U^T U)^{-1} U^T x.   (4)

We define:

Π := U (U^T U)^{-1} U^T   (5)

as the "projection matrix" because it projects x onto Col(U) (and reconstructs back). Note that Π is also referred to as the "hat matrix" in the literature because it puts a hat on top of x. If the vectors {u_1, ..., u_p} are orthonormal (the matrix U is orthogonal), we have U^T = U^{-1} and thus U^T U = I. Therefore, Eq. (4) is simplified:

Π = U U^T.

So, we have:

x̂ = U U^T x.   (6)

2.1.2. PROJECTION AND RECONSTRUCTION IN PCA

The Eq. (6) can be interpreted in this way: U^T x projects x onto the row space of U, i.e., Col(U^T) (projection onto a space spanned by d vectors which are p-dimensional). We call this projection "projection onto the PCA subspace". It is a "subspace" because we have p ≤ d, where p and d are the dimensionality of the PCA subspace and the original x, respectively. Afterwards, U (U^T x) projects the projected data back onto the column space of U, i.e., Col(U) (projection onto a space spanned by p vectors which are d-dimensional). We call this step "reconstruction from the PCA" and we want the residual between x and its reconstruction x̂ to be small. If there exist n training data points, i.e., {x_i}_{i=1}^n, the projection of a training data point x is:

x̃ = U^T x̆,   (7)

where:

x̆ := x − μ_x   (8)

is the centered data point and:

μ_x := (1/n) Σ_{i=1}^n x_i   (9)

is the mean of the training data points. The reconstruction of a training data point x after projection onto the PCA subspace is:

x̂ = U x̃ + μ_x = U U^T x̆ + μ_x,   (10)

where the mean is added back because it was removed before projection. Note that in PCA, all the data points should be centered, i.e., the mean should be removed first. The reason is shown in Fig. 2. In some applications, centering the data does not make sense.
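A minimal numerical sketch of the projection and reconstruction steps above (Eqs. (7) and (10)), using randomly generated data and a hypothetical orthonormal basis U; note that the residual of the (centered) reconstruction is orthogonal to Col(U):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 100))             # d = 3, n = 100, points as columns
mu = X.mean(axis=1, keepdims=True)
Xc = X - mu                               # centered data

# Hypothetical orthonormal basis U (d x p), obtained here via QR decomposition.
U, _ = np.linalg.qr(rng.normal(size=(3, 2)))

proj = U.T @ Xc                           # projection onto the subspace (p x n)
recon = U @ proj + mu                     # reconstruction in R^d, mean added back

# The residual of the centered reconstruction is orthogonal to Col(U):
residual = Xc - U @ (U.T @ Xc)
assert np.allclose(U.T @ residual, 0)
```

Here U is an arbitrary orthonormal basis just to exercise the formulas; PCA will later choose U as eigenvectors of the covariance matrix.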
For example, in natural language processing, the data are text, and centering the data produces negative values, which is nonsensical for text. Therefore, the data are sometimes not centered and PCA is applied on the non-centered data. This method is called Latent Semantic Indexing (LSI) or Latent Semantic Analysis (LSA) (Dumais, 2004). If we stack the n data points column-wise in a matrix X = [x_1, ..., x_n] ∈ R^{d×n}, we first center them:

X̆ = X H,   (11)

where X̆ is the centered data and:

H := I − (1/n) 1 1^T   (12)

is the centering matrix. See Appendix A for more details about the centering matrix. The projection and reconstruction, Eqs. (7) and (10), for the whole training data are:

X̃ = U^T X̆,   (13)
X̂ = U X̃ + μ_x = U U^T X̆ + μ_x,   (14)

where X̃ = [x̃_1, ..., x̃_n] and X̂ = [x̂_1, ..., x̂_n] are the projected data onto the PCA subspace and the reconstructed data, respectively. We can also project a new data point onto the PCA subspace of X, where the new data point is not a column of X. In other words, the new data point has not had an impact in constructing the PCA subspace. This new data point is also referred to as a "test data point" or "out-of-sample data" in the literature. The Eq. (13) was for projection of X onto its PCA subspace. If x_t denotes an out-of-sample data point, its projection onto the PCA subspace (x̃_t) and its reconstruction (x̂_t) are:

x̃_t = U^T x̆_t,   (15)
x̂_t = U x̃_t + μ_x = U U^T x̆_t + μ_x,   (16)

where:

x̆_t := x_t − μ_x   (17)

is the centered out-of-sample data point, which is centered using the mean of the training data. Note that for centering the out-of-sample data point(s), we should use the mean of the training data and not of the out-of-sample data. If we consider the n_t out-of-sample data points, X_t = [x_{t,1}, ..., x_{t,n_t}] ∈ R^{d×n_t}, all together, their projection and reconstruction are:

X̃_t = U^T X̆_t,   (18)
X̂_t = U X̃_t + μ_x = U U^T X̆_t + μ_x,   (19)

respectively, where:

X̆_t := X_t − μ_x 1^T   (20)

is the centered out-of-sample data. The squared length (squared ℓ2-norm) of the reconstructed vector x̂ = u u^T x̆ (projection onto one direction u) is:

||x̂||_2^2 = x̆^T u u^T u u^T x̆ (a)= (u^T x̆)^2,   (21)

where (a) is because u is a unit (normal) vector, i.e., u^T u = ||u||_2^2 = 1, and the x̆_i are the centered data. The summation of the squared lengths of the reconstructions of all data points is:

Σ_{i=1}^n (u^T x̆_i)^2 = u^T (Σ_{i=1}^n x̆_i x̆_i^T) u.   (22)

Considering X̆ = [x̆_1, ..., x̆_n] ∈ R^{d×n}, we have:

Σ_{i=1}^n x̆_i x̆_i^T = X̆ X̆^T =: S,   (23)

where S is called the "covariance matrix".
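These definitions can be checked numerically; the sketch below uses random data and the standard form of the centering matrix, H = I − (1/n)11^T:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 50
X = rng.normal(size=(d, n))

# Centering matrix H = I - (1/n) 1 1^T: symmetric and idempotent.
H = np.eye(n) - np.ones((n, n)) / n
Xc = X @ H                                # right-multiplication centers the columns

assert np.allclose(H, H @ H)              # idempotent: H H = H
assert np.allclose(Xc, X - X.mean(axis=1, keepdims=True))

# Covariance matrix S = Xc Xc^T; an extra 1/n factor would only rescale the
# eigenvalues, not change the eigenvectors (the PCA directions).
S = Xc @ Xc.T
assert np.allclose(S, S.T)                # S is symmetric
```

Right-multiplying by H subtracts the column mean, matching the explicit mean-subtraction form.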
If the data were already centered, we would have S = X X^T. Plugging Eq. (23) in Eq. (22) gives us:

Σ_{i=1}^n (u^T x̆_i)^2 = u^T S u.   (24)

Note that we can also say that u^T S u is the variance of the projected data onto the PCA subspace. In other words, u^T S u = Var(u^T X̆). This makes sense because when some non-random thing (here u) is multiplied by the random data (here X̆), it has a squared (quadratic) effect on the variance, and u^T S u is quadratic in u. Therefore, u^T S u can be interpreted in two ways: (I) the squared length of reconstruction and (II) the variance of projection. We want to find a projection direction u which maximizes the squared length of reconstruction (or variance of projection):

maximize_u   u^T S u,
subject to   u^T u = 1,   (25)

where the constraint ensures that u is a unit (normal) vector, as we assumed beforehand. Using a Lagrange multiplier (Boyd & Vandenberghe, 2004), we have:

L = u^T S u − λ (u^T u − 1).

Taking the derivative of the Lagrangian and setting it to zero gives:

∂L/∂u = 2 S u − 2 λ u = 0  ⟹  S u = λ u.   (26)

The Eq. (26) is the eigen-decomposition of S, where u and λ are the leading eigenvector and eigenvalue of S, respectively (Ghojogh et al., 2019a). Note that the leading eigenvalue is the largest one. The reason for being leading is that we are maximizing in the optimization problem. As a conclusion, if projecting onto one PCA direction, the PCA direction u is the leading eigenvector of the covariance matrix. Note that the "PCA direction" is also called the "principal direction" or "principal axis" in the literature. The dimensions (features) of the projected data onto the PCA subspace are called "principal components". According to Eq. (14), if p > 1, we are projecting x̆ or X̆ onto a PCA subspace with dimensionality more than one and then reconstructing back. If we ignore adding the mean back, we have:

X̂ = U U^T X̆.   (27)

It means that we project every column of X̆, i.e., x̆, onto a space spanned by the p vectors {u_1, ..., u_p}, each of which is d-dimensional. Therefore, the projected data are p-dimensional and the reconstructed data are d-dimensional. The squared length (squared Frobenius norm) of this reconstructed matrix is:

||X̂||_F^2 = tr(X̆^T U U^T U U^T X̆) (a)= tr(X̆^T U U^T X̆) (b)= tr(U^T X̆ X̆^T U),   (28)

where tr(.)
denotes the trace of a matrix, (a) is because U is an orthogonal matrix (its columns are orthonormal), and (b) is because tr(X̆^T U U^T X̆) = tr(X̆ X̆^T U U^T) = tr(U^T X̆ X̆^T U). According to Eq. (23), S = X̆ X̆^T is the covariance matrix; therefore:

||X̂||_F^2 = tr(U^T S U).

We want to find several projection directions {u_1, ..., u_p}, as columns of U ∈ R^{d×p}, which maximize the squared length of reconstruction (or variance of projection):

maximize_U   tr(U^T S U),
subject to   U^T U = I,

where the constraint ensures that U is an orthogonal matrix, as we assumed beforehand. Using Lagrange multipliers (Boyd & Vandenberghe, 2004), we have:

L = tr(U^T S U) − tr(Λ^T (U^T U − I))  ⟹  ∂L/∂U = 2 S U − 2 U Λ = 0  ⟹  S U = U Λ,   (29)

where Λ ∈ R^{p×p} is a diagonal matrix diag([λ_1, ..., λ_p]^T) including the Lagrange multipliers. The Eq. (29) is the eigen-decomposition of S, where the columns of U and the diagonal of Λ are the eigenvectors and eigenvalues of S, respectively (Ghojogh et al., 2019a). The eigenvectors and eigenvalues are sorted from the leading (largest eigenvalue) to the trailing (smallest eigenvalue) because we are maximizing in the optimization problem. As a conclusion, if projecting onto the PCA subspace or span{u_1, ..., u_p}, the PCA directions {u_1, ..., u_p} are the sorted eigenvectors of the covariance matrix of the data X.

RANK OF THE COVARIANCE MATRIX

We consider two cases for X̆ ∈ R^{d×n}:
1. If the original dimensionality of the data is greater than the number of data points, i.e., d ≥ n: In this case, rank(X̆) = rank(X̆^T) ≤ n. Therefore, rank(S) = rank(X̆ X̆^T) ≤ min(rank(X̆), rank(X̆^T)) − 1 = n − 1. Note that the −1 is because the data are centered. For example, if we only have one data point, it becomes zero after centering and the rank should be zero.
2. If the original dimensionality of the data is less than the number of data points, i.e., d ≤ n − 1 (the −1 again is because of centering the data): In this case, rank(X̆) = rank(X̆^T) ≤ d. Therefore, rank(S) = rank(X̆ X̆^T) ≤ min(rank(X̆), rank(X̆^T)) = d.
So, we either have rank(S) ≤ n − 1 or rank(S) ≤ d.

TRUNCATING U

Consider the following cases:
1.
If rank(S) = d: we have p = d (we have d non-zero eigenvalues of S), so that U ∈ R^{d×d}. It means that the dimensionality of the PCA subspace is d, equal to the dimensionality of the original space. Why does this happen? That is because rank(S) = d means that the data are spread widely enough in all dimensions of the original space, up to a possible rotation (see Fig. 3). Therefore, the dimensionality of the PCA subspace is equal to the original dimensionality; however, PCA might merely rotate the coordinate axes. In this case, U ∈ R^{d×d} is a square orthogonal matrix, so that U U^T = U U^{-1} = I ∈ R^{d×d} and U^T U = U^{-1} U = I ∈ R^{d×d}, because rank(U) = d, rank(U U^T) = d, and rank(U^T U) = d. That is why, in the literature, PCA is also referred to as a coordinate rotation.
2. If rank(S) < d and n > d: it means that we have enough data points, but the data points exist on a subspace and do not fill the original space widely enough in every direction. In this case, U ∈ R^{d×p} is not square and rank(U) = p < d (we have p non-zero eigenvalues of S). Therefore, U U^T ≠ I ∈ R^{d×d} and U^T U = I ∈ R^{p×p}, because rank(U) = p, rank(U U^T) = p < d, and rank(U^T U) = p.
3. If rank(S) ≤ n − 1 < d: it means that we do not have enough data points to properly represent the original space and the points have an "intrinsic dimensionality". For example, we have two three-dimensional points which are on a two-dimensional line (subspace). So, similar to the previous case, the data points exist on a subspace and do not fill the original space widely enough in every direction. The discussions about U, U U^T, and U^T U are similar to the previous case.
Note that we might have rank(S) = d and thus U ∈ R^{d×d} but want to "truncate" the matrix U to have U ∈ R^{d×p}. Truncating U means that we take a subset of the best (leading) eigenvectors rather than the whole d eigenvectors with nonzero eigenvalues. In this case, we still have U^T U = I, but U U^T ≠ I.
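The distinction above (U^T U = I always holds for orthonormal columns, while U U^T = I requires a square, untruncated U) can be verified numerically on a small random example:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
S = A @ A.T                               # a full-rank symmetric matrix (a.s.)
eigvals, eigvecs = np.linalg.eigh(S)      # eigenvalues in ascending order

p = 2
U = eigvecs[:, ::-1][:, :p]               # truncated U: p leading eigenvectors

# U^T U = I_p holds for any truncation ...
assert np.allclose(U.T @ U, np.eye(p))
# ... but U U^T = I_d only holds when U is square (no truncation).
assert not np.allclose(U @ U.T, np.eye(5))

U_full = eigvecs                          # untruncated, square orthogonal matrix
assert np.allclose(U_full @ U_full.T, np.eye(5))
```

Note that `numpy.linalg.eigh` returns eigenvalues in ascending order, so the columns are reversed before truncating to the leading eigenvectors.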
The intuition of truncating is this: the variance of the data in a direction might be noticeably smaller than in another direction; in this case, we can keep only the p < d top eigenvectors (PCA directions) and "ignore" the PCA directions with smaller eigenvalues, to have U ∈ R^{d×p}. Figure 4 illustrates this case for a 2D example. Note that truncating can also be done, when U ∈ R^{d×p}, to have U ∈ R^{d×q}, where p is the number of non-zero eigenvalues of S and q < p. From all the above analyses, we conclude that as long as the columns of the matrix U ∈ R^{d×p} are orthonormal, we always have U^T U = I regardless of the value of p. If the orthogonal matrix U is not truncated and thus is a square matrix, we also have U U^T = I.

RECONSTRUCTION IN LINEAR PROJECTION

If we center the data, the Eq. (2) becomes r = x̆ − x̂, because the reconstructed data will also be centered according to Eq. (10). According to Eqs. (2), (8), and (10), we have:

r = x̆ − x̂ = x̆ − U U^T x̆.   (30)

Figure 5 shows the projection of a two-dimensional point (after the data being centered) onto the first principal direction, its reconstruction, and its reconstruction error. As can be seen in this figure, the reconstruction error is different from the least squares error in linear regression. For n data points, we have:

R = X̆ − X̂ = X̆ − U U^T X̆,   (31)

where R = [r_1, ..., r_n] ∈ R^{d×n} is the matrix of residuals. If we want to minimize the reconstruction error subject to the orthogonality of the projection matrix U, we have:

minimize_U   ||X̆ − U U^T X̆||_F^2,
subject to   U^T U = I.   (32)

The objective function can be simplified:

||X̆ − U U^T X̆||_F^2 = tr(X̆^T X̆) − tr(U^T X̆ X̆^T U).   (33)

Using Lagrange multipliers (Boyd & Vandenberghe, 2004), we have:

L = tr(X̆^T X̆) − tr(U^T X̆ X̆^T U) + tr(Λ^T (U^T U − I)),

with Λ ∈ R^{p×p} a diagonal matrix containing the Lagrange multipliers. Equating the derivative of the Lagrangian to zero gives:

∂L/∂U = −2 X̆ X̆^T U + 2 U Λ = 0  ⟹  S U = U Λ,   (34)

which is again the eigenvalue problem (Ghojogh et al., 2019a) for the covariance matrix S. We had the same eigenvalue problem in PCA. Therefore, the PCA subspace is the best linear projection in terms of reconstruction error. In other words, PCA has the least squared error in reconstruction.
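As a numerical check of this optimality, the sketch below compares the reconstruction error of the PCA basis with that of an arbitrary orthonormal basis of the same dimensionality:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, p = 5, 200, 2
X = rng.normal(size=(d, n))
Xc = X - X.mean(axis=1, keepdims=True)

def recon_error(U, Xc):
    """Squared Frobenius norm of the residual Xc - U U^T Xc."""
    R = Xc - U @ (U.T @ Xc)
    return np.sum(R ** 2)

# PCA directions: the p leading eigenvectors of S = Xc Xc^T.
_, eigvecs = np.linalg.eigh(Xc @ Xc.T)
U_pca = eigvecs[:, ::-1][:, :p]

# An arbitrary orthonormal basis of the same dimensionality (random, via QR).
U_rand, _ = np.linalg.qr(rng.normal(size=(d, p)))

# PCA attains the smallest reconstruction error among linear projections.
assert recon_error(U_pca, Xc) <= recon_error(U_rand, Xc)
```

Any other orthonormal basis could replace the random one here; the inequality holds for all of them.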
RECONSTRUCTION IN AUTOENCODER
We saw that PCA is the best in reconstruction error for linear projection. If we have m > 1 successive linear projections, the reconstruction is: which can be seen as an undercomplete autoencoder (Goodfellow et al., 2016) with 2m layers without activation functions (or with identity activation functions f(x) = x). The μ_x is modeled by the intercepts included as input to the neurons of the autoencoder layers. Figure 6 shows this autoencoder. As we do not have any non-linearity between the projections, we can define: Eq. (36) shows that the whole autoencoder can be reduced to an undercomplete autoencoder with one hidden layer where the weight matrix is Ü (see Fig. 6). In other words, in an autoencoder neural network, every layer excluding the activation function behaves as a linear projection. Comparing Eqs. (14) and (36) shows that the whole autoencoder is reduced to PCA. Therefore, PCA is equivalent to an undercomplete autoencoder with one hidden layer without an activation function. Therefore, if we train such an autoencoder by back-propagation (Rumelhart et al., 1986), the learned weights are roughly equal to the PCA directions. Moreover, as PCA is the best linear projection in terms of reconstruction error, if we have an undercomplete autoencoder with "one" hidden layer, it is best not to use any activation function; this is, unfortunately, not noticed by some papers in the literature. We saw that an autoencoder with 2m hidden layers without activation functions reduces to linear PCA. This explains why in autoencoders with more than one layer, we use a nonlinear activation function f(.) as:
PCA Using Singular Value Decomposition
The PCA can be done using Singular Value Decomposition (SVD) of X̆, rather than the eigen-decomposition of S.
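This equivalence between the SVD and eigen-decomposition routes can be verified numerically; the following sketch (our own illustration with numpy, not code from the paper) shows that both yield the same directions, up to sign:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 50))
Xc = X - X.mean(axis=1, keepdims=True)   # centered data

# Eigen-decomposition route: eigenvectors of Xc Xc^T, sorted by eigenvalue.
evals, evecs = np.linalg.eigh(Xc @ Xc.T)
order = np.argsort(evals)[::-1]
U_eig = evecs[:, order]

# SVD route: left singular vectors of Xc (already sorted by singular value).
U_svd, svals, Vt = np.linalg.svd(Xc, full_matrices=False)

# Each direction matches up to sign, and squared singular values are eigenvalues.
for j in range(4):
    print(np.isclose(abs(U_eig[:, j] @ U_svd[:, j]), 1.0))  # True
print(np.allclose(svals**2, evals[order]))                  # True
```

Note how the SVD route returns the directions already sorted, as stated in the text.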
Consider the complete SVD of X̆ (see Appendix B): where the columns of U ∈ R^{d×d} (called left singular vectors) are the eigenvectors of X̆X̆^⊤, the columns of V ∈ R^{n×n} (called right singular vectors) are the eigenvectors of X̆^⊤X̆, and Σ ∈ R^{d×n} is a rectangular diagonal matrix whose diagonal entries (called singular values) are the square roots of the eigenvalues of X̆X̆^⊤ and/or X̆^⊤X̆. See Proposition 1 in Appendix B for proof of this claim. According to Eq. (23), X̆X̆^⊤ is the covariance matrix S. In Eq. (29), we saw that the eigenvectors of S are the principal directions. On the other hand, here, we saw that the columns of U are the eigenvectors of X̆X̆^⊤. Hence, we can apply SVD on X̆ and take the left singular vectors (columns of U) as the principal directions. An interesting point is that in the SVD of X̆, the columns of U are automatically sorted from largest to smallest singular values (eigenvalues), so we do not need to sort them as we did when using the eigenvalue decomposition of the covariance matrix.
Determining the Number of Principal Directions
Usually in PCA, the components with the smallest eigenvalues are cut off to reduce the data. There are different methods for estimating the best number of components to keep (denoted by p), such as using Bayesian model selection (Minka, 2001), the scree plot (Cattell, 1966), and comparing the ratio λ_j / Σ_{k=1}^{d} λ_k with a threshold (Abdi & Williams, 2010), where λ_j denotes the eigenvalue related to the j-th principal component. Here, we explain the two methods of the scree plot and the ratio. The scree plot (Cattell, 1966) is a plot of the eigenvalues versus the sorted components, from the leading (having the largest eigenvalue) to the trailing (having the smallest eigenvalue). A threshold for the vertical (eigenvalue) axis chooses the components with large enough eigenvalues and removes the rest of the components. A good threshold is where the eigenvalue drops significantly. In most datasets, a significant drop of eigenvalue occurs.
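The ratio criterion for choosing p can be sketched in a few lines (our own illustration; the eigenvalues here are made up for the example):

```python
import numpy as np

# Suppose these are the sorted eigenvalues of the covariance matrix.
eigvals = np.array([9.0, 4.5, 0.3, 0.15, 0.05])
ratios = eigvals / eigvals.sum()        # lambda_j / sum_k lambda_k
print(np.round(ratios, 3))

# Keep components until a cumulative threshold (e.g. 95%) is reached.
cumulative = np.cumsum(ratios)
p = int(np.searchsorted(cumulative, 0.95) + 1)
print(p)  # 2: the first two components explain over 95% of the variance
```

A significant drop of the ratio (here, from the second to the third component) plays the same role as the drop of eigenvalue in the scree plot.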
Another way to choose the best components is the ratio λ_j / Σ_{k=1}^{d} λ_k (Abdi & Williams, 2010) for the j-th component. Then, we sort the components from the largest to smallest ratio and select the p best components, or select up to the component where a significant drop of the ratio happens.
Dual Principal Component Analysis
Assume the case where the dimensionality of the data is high and much greater than the sample size, i.e., d ≫ n. In this case, consider the incomplete SVD of X̆ (see Appendix B): where here, U ∈ R^{d×p} and V ∈ R^{n×p} contain the p leading left and right singular vectors of X̆, respectively, where p is the number of "non-zero" singular values of X̆ and usually p ≪ d. Here, Σ ∈ R^{p×p} is a square matrix having the p largest non-zero singular values of X̆. As Σ is a square diagonal matrix whose diagonal includes non-zero entries (it is full-rank), it is invertible (Ghodsi, 2006); therefore, we obtain Eq. (40).
Projection
Recall Eq. (13) for projection onto the PCA subspace: X̃ = U^⊤ X̆. On the other hand, according to Eq. (40), we have: According to Eqs. (13) and (41), we have: Eq. (42) can be used for projecting data onto the PCA subspace instead of Eq. (13). This is the projection of training data in dual PCA.
Reconstruction
According to Eq. (40), we have: Plugging Eq. (43) in Eq. (14) gives us: Eq. (44) can be used for the reconstruction of data instead of Eq. (14). This is the reconstruction of training data in dual PCA.
Out-of-sample Projection
Recall Eq. (15) for the projection of an out-of-sample point x_t onto the PCA subspace. According to Eq. (43), we have: where (a) is because Σ^{-1} is diagonal and thus symmetric. Eq. (46) can be used for projecting an out-of-sample data point onto the PCA subspace instead of Eq. (15). This is out-of-sample projection in dual PCA. Considering all the n_t out-of-sample data points, the projection is:
Out-of-sample Reconstruction
Recall Eq. (16) for the reconstruction of an out-of-sample point x_t. According to Eqs. (43) and (45), we have: The Eq.
(48) can be used for the reconstruction of an out-of-sample data point instead of Eq. (16). This is out-of-sample reconstruction in dual PCA. Considering all the n_t out-of-sample data points, the reconstruction is:
3.5. Why is Dual PCA Useful?
The dual PCA can be useful for two reasons:
1. As can be seen in Eqs. (42), (44), (46), and (48), the formulae for dual PCA only include V and not U. The columns of V are the eigenvectors of X̆^⊤X̆ ∈ R^{n×n} and the columns of U are the eigenvectors of X̆X̆^⊤ ∈ R^{d×d}. In case the dimensionality of the data is much greater than the sample size, i.e., n ≪ d, computation of the eigenvectors of X̆^⊤X̆ is easier and faster than of X̆X̆^⊤ and also requires less storage. Therefore, dual PCA is more efficient than direct PCA in this case in terms of both speed and storage. Note that the results of PCA and dual PCA are exactly the same.
2. Some inner product forms, such as X̆^⊤ x_t, have appeared in the formulae of dual PCA. This provides the opportunity to kernelize PCA and obtain kernel PCA using the so-called kernel trick. As will be seen in the next section, we use dual PCA in the formulation of kernel PCA.
Kernel Principal Component Analysis
The PCA is a linear method because the projection is linear. In case the data points exist on a non-linear sub-manifold, linear subspace learning might not be completely effective. For example, applying linear PCA, which takes Euclidean distances into account, to nonlinear data can ruin the manifold, so that far-away red and green points fall next to each other (this example is credited to Prof. Ali Ghodsi). In order to handle this problem of PCA, we have two options. We should either modify PCA to become a nonlinear method, or keep PCA linear but change the data, hoping that the changed data fall on a linear or close-to-linear manifold. Here, we do the latter, so we change the data.
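Returning to dual PCA, its exact equivalence with direct PCA can be verified numerically in the d ≫ n regime (a sketch with numpy; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 50, 10                      # high dimensionality, few samples (d >> n)
X = rng.standard_normal((d, n))
Xc = X - X.mean(axis=1, keepdims=True)

Uf, svals, Vt = np.linalg.svd(Xc, full_matrices=False)
p = int(np.sum(svals > 1e-10))     # number of non-zero singular values
U, S, V = Uf[:, :p], np.diag(svals[:p]), Vt[:p, :].T

# Direct projection uses U (d x p); dual projection uses only Sigma and V.
proj_direct = U.T @ Xc
proj_dual = S @ V.T
print(np.allclose(proj_direct, proj_dual))  # True

# Dual out-of-sample projection: Sigma^{-1} V^T Xc^T x_t, no U needed.
x_t = rng.standard_normal((d, 1))
proj_t_direct = U.T @ x_t
proj_t_dual = np.linalg.inv(S) @ V.T @ (Xc.T @ x_t)
print(np.allclose(proj_t_direct, proj_t_dual))  # True
```

Only V, Σ, and inner products with X̆ appear in the dual formulae, which is exactly what makes kernelization possible.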
We increase the dimensionality of the data by mapping the data to a feature space with higher dimensionality, hoping that in the feature space, the data fall on a linear manifold. This is referred to as the "blessing of dimensionality" in the literature (Donoho, 2000), which is pursued using kernels (Hofmann et al., 2008). This PCA method which uses the kernel of the data is named "kernel PCA" (Schölkopf et al., 1997).
Kernels and Hilbert Space
Suppose that φ : X → H is a function which maps the data x to the Hilbert space (feature space). The φ is called the "pulling function". In other words, x → φ(x). Let t denote the dimensionality of the feature space, i.e., φ(x) ∈ R^t while x ∈ R^d. Note that we usually have t ≫ d. If X denotes the set of points, i.e., x ∈ X, the kernel of two vectors x_1 and x_2 is k : X × X → R and is defined as (Hofmann et al., 2008; Herbrich, 2001): k(x_1, x_2) := φ(x_1)^⊤ φ(x_2), which is a measure of "similarity" between the two vectors because the inner product captures similarity. We can compute the kernel of two matrices X_1 ∈ R^{d×n_1} and X_2 ∈ R^{d×n_2} and have a "kernel matrix" (also called the "Gram matrix"): K(X_1, X_2) := Φ(X_1)^⊤ Φ(X_2) ∈ R^{n_1×n_2}, where Φ(X_1) := [φ(x_1), . . . , φ(x_{n_1})] ∈ R^{t×n_1} is the matrix of X_1 mapped to the feature space. The Φ(X_2) ∈ R^{t×n_2} is defined similarly. We can compute the kernel matrix of the dataset X ∈ R^{d×n} over itself: K(X, X) := Φ(X)^⊤ Φ(X) ∈ R^{n×n}, where Φ(X) := [φ(x_1), . . . , φ(x_n)] ∈ R^{t×n} is the pulled (mapped) data. Note that in kernel methods, the pulled data Φ(X) are usually not available and merely the kernel matrix K(X, X), which is the inner product of the pulled data with itself, is available. There exist different types of kernels. Some of the most well-known kernels (e.g., the linear, polynomial, Gaussian, and Sigmoid kernels) involve scalar constants c_1, c_2, c_3, and σ. The Gaussian and Sigmoid kernels are also called the Radial Basis Function (RBF) and hyperbolic tangent kernels, respectively. Note that the Gaussian kernel can also be written as exp(−γ ||x_1 − x_2||_2^2) where γ > 0.
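The Gaussian (RBF) kernel matrix can be computed without ever forming the pulled data — only pairwise distances between columns are needed. A minimal sketch (our own helper, not code from the paper):

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the columns of X1 (d x n1) and X2 (d x n2)."""
    sq = (np.sum(X1**2, axis=0)[:, None] + np.sum(X2**2, axis=0)[None, :]
          - 2.0 * X1.T @ X2)                  # pairwise squared Euclidean distances
    return np.exp(-sq / (2.0 * sigma**2))

rng = np.random.default_rng(5)
X = rng.standard_normal((3, 6))
K = rbf_kernel(X, X)
print(K.shape)                        # (6, 6)
print(np.allclose(np.diag(K), 1.0))   # True: k(x, x) = exp(0) = 1
print(np.allclose(K, K.T))            # True: the Gram matrix is symmetric
```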
It is noteworthy to mention that for the RBF kernel, the dimensionality of the feature space is infinite. The reason lies in the Maclaurin series expansion (Taylor series expansion at zero) of this kernel: where r := ||x_1 − x_2||_2^2, which is infinite-dimensional with respect to r. It is also worth mentioning that if we want the pulled data Φ(X) to be centered, i.e.: we should double-center the kernel matrix (see Appendix A), because if we use the centered pulled data in Eq. (52), we have Φ̆(X)^⊤ Φ̆(X) = H K_x H, which is the double-centered kernel matrix. Thus: K̆_x := H K_x H, where K̆_x denotes the double-centered kernel matrix (see Appendix C).
Projection
We apply incomplete SVD on the centered pulled (mapped) data Φ̆(X) (see Appendix B): where U ∈ R^{t×p} and V ∈ R^{n×p} contain the p leading left and right singular vectors of Φ̆(X), respectively, where p is the number of "non-zero" singular values of Φ̆(X) and usually p ≪ t. Here, Σ ∈ R^{p×p} is a square matrix having the p largest non-zero singular values of Φ̆(X). However, as mentioned before, the pulled data are not necessarily available, so Eq. (59) cannot be computed directly. The kernel, however, is available. Therefore, we apply eigen-decomposition (Ghojogh et al., 2019a) to the double-centered kernel: K̆_x = V Λ V^⊤, where the columns of V and the diagonal of Λ are the eigenvectors and eigenvalues of K̆_x, respectively. The columns of V in Eq. (59) are the right singular vectors of Φ̆(X), which are equivalent to the eigenvectors of Φ̆(X)^⊤ Φ̆(X) = K̆_x, according to Proposition 1 in Appendix B. Also, according to that proposition, the diagonal of Σ in Eq. (59) is equivalent to the square roots of the eigenvalues of K̆_x. Therefore, in practice where the pulling function is not necessarily available, we use Eq. (60) in order to find the V and Σ in Eq. (59). Eq. (60) can be restated, with Σ = Λ^{1/2}, to be compatible with Eq. (59). It is noteworthy that because of using Eq. (61) instead of Eq. (59), the projection directions U are not available in kernel PCA to be observed or plotted.
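The whole kernel PCA training embedding can thus be computed from the kernel matrix alone: eigen-decompose the double-centered kernel, take Σ = Λ^{1/2}, and form the embedding Σ V^⊤ (the analogue of the dual-PCA projection). A minimal sketch, using a degree-2 polynomial kernel as an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(6)
d, n, p = 3, 8, 2
X = rng.standard_normal((d, n))

K = (X.T @ X) ** 2                     # a polynomial kernel of degree 2 (example choice)
H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
Kc = H @ K @ H                         # double-centered kernel matrix

evals, evecs = np.linalg.eigh(Kc)
order = np.argsort(evals)[::-1][:p]    # p leading eigenvalues/eigenvectors
V = evecs[:, order]
Sigma = np.diag(np.sqrt(np.maximum(evals[order], 0.0)))  # singular values of the centered pulled data

proj_train = Sigma @ V.T               # p x n embedding of the training data
print(proj_train.shape)                # (2, 8)
```

Note that the mapped data Φ(X) never appear, which is why the directions U cannot be inspected.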
Similar to what we did for Eq. (42): where Σ and V are obtained from Eq. (61). The Eq. (62) is projection of the training data in kernel PCA. Reconstruction Similar to what we did for Eq. (44): Therefore, the reconstruction is: However, theΦ(X) is not available necessarily; therefore, we cannot reconstruct the training data in kernel PCA. Out-of-sample Projection Similar to what we did for Eq. (46): where (a) is because Σ −1 is diagonal and thus symmetric and thek t ∈ R n is calculated by Eq. (139) in Appendix C. The Eq. (65) is the projection of out-of-sample data in kernel PCA. Considering all the n t out-of-sample data points, X t , the projection is: whereK t is calculated by Eq. (138). Out-of-sample Reconstruction Similar to what we did for Eq. (48): where thek t ∈ R n is calculated by Eq. (139) in Appendix C. Considering all the n t out-of-sample data points, X t , the reconstruction is: whereK t is calculated by Eq. (138). In Eq. (67), theΦ(X) appeared at the left of expression, is not available necessarily; therefore, we cannot reconstruct an out-of-sample point in kernel PCA. According to Eqs. (64) and (67), we conclude that kernel PCA is not able to reconstruct any data, whether training or out-of-sample. Supervised Principal Component Analysis Using Scoring The older version of SPCA used scoring (Bair et al., 2006). In this version of SPCA, PCA is not a special case of SPCA. The version of SPCA, which will be introduced in the next section, is more solid in terms of theory where PCA is a special case of SPCA. In SPCA using scoring, we compute the similarity of every feature of data with the class labels and then sort the features and remove the features having the least similarity with the labels. The larger the similarity of a feature with the labels, the better that feature is for discrimination in the embedded subspace. Consider the training dataset R d×n X = [x 1 , . . . , x n ] = [x 1 , . . . 
, x^d]^⊤, where x_i ∈ R^d and x^j ∈ R^n are the i-th data point and the j-th feature, respectively. This type of SPCA is only for the classification task, so we can consider the dimensionality of the labels to be one, i.e., ℓ = 1. Thus, we have Y ∈ R^{1×n}. We define y := Y^⊤ ∈ R^n. The score of the j-th feature, x^j, is a measure of its similarity with the labels y. After computing the scores of all the features, we sort the features from the largest to smallest score. Let X ∈ R^{d×n} denote the training dataset whose features are sorted. We take the q ≤ d features with the largest scores and remove the other features. Let X′ ∈ R^{q×n} be the training dataset with the q best features. Then, we apply PCA on X′ ∈ R^{q×n} rather than X ∈ R^{d×n}. Applying PCA and kernel PCA on X′ results in SPCA and kernel SPCA, respectively. This type of SPCA was mostly used and popular in bioinformatics for genome data analysis (Ma & Dai, 2011).
Supervised Principal Component Analysis Using HSIC
Hilbert-Schmidt Independence Criterion
Suppose we want to measure the dependence of two random variables. Measuring the correlation between them is easier because correlation is just "linear" dependence. According to (Hein & Bousquet, 2004), two random variables are independent if and only if any bounded continuous functions of them are uncorrelated. Therefore, if we map the two random variables x and y to two different ("separable") Reproducing Kernel Hilbert Spaces (RKHSs) and have φ(x) and φ(y), we can measure the correlation of φ(x) and φ(y) in the Hilbert space to have an estimation of the dependence of x and y in the original space. The correlation of φ(x) and φ(y) can be computed by the Hilbert-Schmidt norm of their cross-covariance (Gretton et al., 2005).
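Returning to SPCA using scoring: the text does not fix a particular score, so as a stand-in we use the absolute correlation of each feature with the labels (an assumption of ours; Bair et al. use a univariate regression statistic with the same intent). A minimal sketch of the screen-then-PCA pipeline's screening step:

```python
import numpy as np

rng = np.random.default_rng(7)
d, n, q = 10, 200, 3
X = rng.standard_normal((d, n))
# Synthetic labels that depend only on features 0 and 3.
y = 2.0 * X[0] - 1.5 * X[3] + 0.1 * rng.standard_normal(n)

# Score every feature by its absolute correlation with the labels (our choice of score).
scores = np.array([abs(np.corrcoef(X[j], y)[0, 1]) for j in range(d)])
top = np.argsort(scores)[::-1][:q]     # indices of the q best-scoring features
X_reduced = X[np.sort(top), :]         # keep only the selected rows (features)

print({0, 3} <= set(top.tolist()))     # True: the informative features are selected
print(X_reduced.shape)                 # (3, 200): ordinary PCA would now run on this
```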
Note that the squared Hilbert-Schmidt norm of a matrix A is (Bell, 2016): ||A||²_HS = tr(A^⊤ A), and the cross-covariance matrix of two vectors x and y is (Gubner, 2006; Gretton et al., 2005): Using the explained intuition, an empirical estimation of the Hilbert-Schmidt Independence Criterion (HSIC) is introduced (Gretton et al., 2005): where K̆_x and K_y are the kernels over x and y, respectively. In other words, K̆_x = φ(x)^⊤ φ(x) and K_y = φ(y)^⊤ φ(y). We are using K̆_x rather than K_x because K̆_x is going to be used in kernel SPCA in the next sections. The term 1/(n − 1)² is used for normalization. The H is the centering matrix (see Appendix A): H := I − (1/n) 1 1^⊤. The H K_y H double-centers the K_y in HSIC. The HSIC (Eq. (71)) measures the dependence of two random variable vectors x and y. Note that HSIC = 0 and HSIC > 0 mean that x and y are independent and dependent, respectively. The greater the HSIC, the greater the dependence they have.
Supervised PCA
Supervised PCA (SPCA) (Barshan et al., 2011) uses the HSIC. We have the data X = [x_1, . . . , x_n] ∈ R^{d×n} and the labels Y = [y_1, . . . , y_n] ∈ R^{ℓ×n}, where ℓ is the dimensionality of the labels and we usually have ℓ = 1. However, in case the labels are encoded (e.g., one-hot-encoded) or SPCA is used for regression (e.g., see (Ghojogh & Crowley, 2019)), we have ℓ > 1. SPCA tries to maximize the dependence of the projected data points U^⊤ X and the labels Y. It uses a linear kernel for the projected data points: and an arbitrary kernel K_y over Y. For the classification task, one of the best choices for K_y is the delta kernel (Barshan et al., 2011), where the (i, j)-th element of the kernel is: K_y(i, j) = δ_{y_i, y_j}, where δ_{y_i, y_j} is the Kronecker delta, which is one if x_i and x_j belong to the same class. Another good choice of kernel for the classification task in SPCA is an arbitrary kernel (e.g., the linear kernel K_y = Y^⊤ Y) over Y where the columns of Y are one-hot encoded.
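The empirical HSIC estimator above, tr(K_x H K_y H)/(n − 1)², can be sketched directly; the example below checks that a nonlinear dependence (which plain correlation would miss) produces a larger HSIC than independent data (our own illustration with RBF kernels):

```python
import numpy as np

def hsic(Kx, Ky):
    """Empirical HSIC of two kernel matrices: tr(Kx H Ky H) / (n - 1)^2."""
    n = Kx.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    return np.trace(Kx @ H @ Ky @ H) / (n - 1) ** 2

def rbf(v):
    sq = (v[:, None] - v[None, :]) ** 2
    return np.exp(-sq / 2.0)

rng = np.random.default_rng(8)
n = 100
x = rng.standard_normal(n)
y_dep = x ** 2 + 0.1 * rng.standard_normal(n)   # nonlinearly dependent on x
y_ind = rng.standard_normal(n)                  # independent of x

print(hsic(rbf(x), rbf(y_dep)) > hsic(rbf(x), rbf(y_ind)))  # True: more dependence, larger HSIC
```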
This is a good choice because the distances between classes will be equal; otherwise, some classes would fall closer than others for no reason, and the fairness between classes would be lost. The SPCA can also be used for regression (e.g., see (Ghojogh & Crowley, 2019)), and that is one of the advantages of SPCA. In that case, a good choice for K_y is an arbitrary kernel (e.g., the linear kernel K_y = Y^⊤ Y) over Y where the columns of Y, i.e., the labels, are the observations in regression. Here, the distances between observations have meaning and should not be manipulated. The HSIC in the SPCA case becomes: where U ∈ R^{d×p} is the unknown projection matrix for projection onto the SPCA subspace and should be found. The desired dimensionality of the subspace is p and usually p ≪ d. We should maximize the HSIC in order to maximize the dependence of U^⊤ X and Y. Hence: where the constraint ensures that U is an orthogonal matrix, i.e., the SPCA directions are orthonormal. Using the Lagrangian (Boyd & Vandenberghe, 2004), we have: where (a) is because of the cyclic property of the trace and Λ ∈ R^{p×p} is a diagonal matrix diag(λ_1, . . . , λ_p) including the Lagrange multipliers. Setting the derivative of the Lagrangian to zero gives: which is the eigen-decomposition of X H K_y H X^⊤, where the columns of U and the diagonal of Λ are the eigenvectors and eigenvalues of X H K_y H X^⊤, respectively (Ghojogh et al., 2019a). The eigenvectors and eigenvalues are sorted from the leading (largest eigenvalue) to the trailing (smallest eigenvalue) because we are maximizing in the optimization problem. As a conclusion, when projecting onto the SPCA subspace or span{u_1, . . . , u_p}, the SPCA directions {u_1, . . . , u_p} are the sorted eigenvectors of X H K_y H X^⊤. In other words, the columns of the projection matrix U in SPCA are the p leading eigenvectors of X H K_y H X^⊤.
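The SPCA solution above boils down to a few numpy lines: build the delta kernel over the labels, form X H K_y H X^⊤, and take its p leading eigenvectors (a sketch of ours with synthetic data):

```python
import numpy as np

rng = np.random.default_rng(9)
d, n, p = 5, 60, 2
X = rng.standard_normal((d, n))
labels = rng.integers(0, 3, size=n)              # 3 classes

# Delta kernel over labels: K_y[i, j] = 1 iff samples i and j share a class.
K_y = (labels[:, None] == labels[None, :]).astype(float)
H = np.eye(n) - np.ones((n, n)) / n              # centering matrix

M = X @ H @ K_y @ H @ X.T                        # the matrix to eigen-decompose
evals, evecs = np.linalg.eigh(M)
U = evecs[:, np.argsort(evals)[::-1][:p]]        # p leading eigenvectors = SPCA directions

print(U.shape)                                   # (5, 2)
print(np.allclose(U.T @ U, np.eye(p)))           # True: the directions are orthonormal
X_proj = U.T @ X                                 # projection onto the SPCA subspace
print(X_proj.shape)                              # (2, 60)
```

Replacing K_y by the identity matrix in this snippet recovers plain PCA, in line with the special-case discussion that follows.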
Similar to what we had in PCA, the projection, projection of out-of-sample data, reconstruction, and reconstruction of out-of-sample data in SPCA are: respectively. In SPCA, there is no need to center the data, as the centering is already handled by H in HSIC. This becomes clearer in the following section, where we see that PCA is a special case of SPCA. Note that in the equations of SPCA, although not necessary, we can center the data, and in that case, the mean of the embedding in the subspace will be zero. Considering all the n_t out-of-sample data points, the projection and reconstruction are: respectively.
PCA is a special case of SPCA!
Not considering the similarities of the labels means that we do not care about the class labels, so we are unsupervised. If we do not consider the similarities of the labels, the kernel over the labels becomes the identity matrix, K_y = I. According to Eq. (77), SPCA is the eigen-decomposition of X H K_y H X^⊤. In this case, this matrix becomes: X H I H X^⊤ = X H H X^⊤ = X H X^⊤ = X̆ X̆^⊤, which is the covariance matrix whose eigenvectors are the PCA directions. Thus, if we do not consider the similarities of the labels, i.e., we are unsupervised, SPCA reduces to PCA as expected.
Dual Supervised PCA
The SPCA can be formulated in dual form (Barshan et al., 2011). We saw that in SPCA, the columns of U are the eigenvectors of X H K_y H X^⊤. We apply SVD on K_y (see Appendix B): where Q ∈ R^{n×n} includes the left or right singular vectors and Ω ∈ R^{n×n} contains the singular values of K_y. Note that the left and right singular vectors are equal because K_y is symmetric and thus K_y K_y^⊤ and K_y^⊤ K_y are equal. As Ω is a diagonal matrix with non-negative entries, we can decompose it to Ω = Ω^{1/2} Ω^{1/2} = Ω^{1/2} (Ω^{1/2})^⊤, where the diagonal entries of Ω^{1/2} ∈ R^{n×n} are the square roots of the diagonal entries of Ω.
Therefore, we can decompose K_y into: K_y = Q Ω^{1/2} (Ω^{1/2})^⊤ Q^⊤ = Δ Δ^⊤, where: Δ := Q Ω^{1/2} ∈ R^{n×n}. Therefore, we have: X H K_y H X^⊤ = Ψ Ψ^⊤, where: Ψ := X H Δ ∈ R^{d×n}. We apply incomplete SVD on Ψ (see Appendix B): where U ∈ R^{d×p} and V ∈ R^{n×p} contain the p leading left and right singular vectors of Ψ, respectively, and Σ ∈ R^{p×p} contains the p largest singular values of Ψ. We can compute U as: U = Ψ V Σ^{-1}. The projection of the data X in dual SPCA is: Note that Σ and H are symmetric. Similarly, the out-of-sample projection in dual SPCA is: Considering all the n_t out-of-sample data points, the projection is: Reconstruction of X after projection onto the SPCA subspace is: where (a) is because of Eqs. (88) and (89). Similarly, the reconstruction of an out-of-sample data point in dual SPCA is: Considering all the n_t out-of-sample data points, the reconstruction is: Note that dual PCA was important especially because it provided the opportunity to kernelize PCA. However, as explained in the next section, kernel SPCA can be obtained directly from SPCA. Therefore, dual SPCA might not be very important for the sake of kernel SPCA. The dual SPCA has another benefit similar to what we had for dual PCA. In Eqs. (89), (90), (92), and (93), U is not used but V exists. In Eq. (87), the columns of V are the eigenvectors of Ψ^⊤ Ψ ∈ R^{n×n}, according to Proposition 1 in Appendix B. On the other hand, in direct SPCA, we have the eigen-decomposition of X H K_y H X^⊤ ∈ R^{d×d} in Eq. (77), which is then used in Eqs. (78), (79), (80), and (81). In case we have huge dimensionality, d ≫ n, decomposition of the n × n matrix is faster and needs less storage, so dual SPCA will be more efficient.
Kernel Supervised PCA
The SPCA can be kernelized by two approaches, using either direct SPCA or dual SPCA (Barshan et al., 2011).
6.5.1. KERNEL SPCA USING DIRECT SPCA
According to the representation theory (Alperin, 1993), any solution (direction) u ∈ H must lie in the span of "all" the training vectors mapped to H, i.e., Φ(X) = [φ(x_1), . . . , φ(x_n)] ∈ R^{t×n} (usually t ≫ d).
Note that H denotes the Hilbert space (feature space). Therefore, we can state that: where θ ∈ R n is the unknown vector of coefficients, and u ∈ R t is the kernel SPCA direction in Hilbert space here. The directions can be put together in R t×p U := [u 1 , . . . , u p ]: where Θ := [θ 1 , . . . , θ p ] ∈ R n×p . The Eq. (75) in the feature space becomes: The tr(Φ(X) U U Φ(X)HK y H) can be simplified as: tr(Φ(X) U U Φ(X)HK y H) = tr(U U Φ(X)HK y HΦ(X) ) = tr(U Φ(X)HK y HΦ(X) U ) Plugging Eq. (95) in Eq. (96) gives us: where: Note that the Eqs. (98) and (73) are different and should not be confused. Moreover, the constraint of orthogonality of projection matrix, i.e., U U = I, in the feature space becomes: Therefore, the optimization problem is: where the objective variable is the unknown Θ. which is the generalized eigenvalue problem (K x HK y HK x , K x ) (Ghojogh et al., 2019a). The Θ and Λ, which are the eigenvector and eigenvalue matrices, respectively, can be calculated according to (Ghojogh et al., 2019a). Note that in practice, we can naively solve Eq. (101) by left multiplying K −1 x (hoping that it is positive definite and thus not singular): which is the eigenvalue problem (Ghojogh et al., 2019a) for HK y HK x , where columns of Θ are the eigenvectors of it and Λ includes its eigenvalues on its diagonal. If we take the p leading eigenvectors to have Θ ∈ R n×p , the projection of Φ(X) ∈ R t×n is: where R n×n K x := Φ(X) Φ(X). Similarly, the projection of out-of-sample data point φ(x t ) ∈ R t is: where k t is Eq. (140). Considering all the n t out-of-sample data points, X t , the projection is: where K t is Eq. (135). As we will show in the following section, in kernel SPCA, as in kernel PCA, we cannot reconstruct data, whether training or out-of-sample. 6.5.2. KERNEL SPCA USING DUAL SPCA The Eq. (86) in t-dimensional feature space becomes: where Φ(X) = [φ(x 1 ), . . . , φ(x n )] ∈ R t×n . Applying SVD (see Appendix B) on Ψ of Eq. 
(106) is similar to the form of Eq. (87). Having the same discussion which we had for Eqs. (59) and (61), we do not necessarily have Φ(X) in Eq. (106) so we can obtain V and Σ as: whereK x := HK x H∆ and the columns of V are the eigenvectors of (see Proposition 1 in Appendix B): where (a) is because of Eqs. (106) and (122). It is noteworthy that because of using Eq. (107) instead of Eq. (106), the projection directions U are not available in kernel SPCA to be observed or plotted. Similar to equations (87) and (88), we have: where V and Σ are obtained from Eq. (107). The projection of data Φ(X) is: Note that Σ and H are symmetric. Similarly, out-of-sample projection in kernel SPCA is: where k t is Eq. (140). Considering all the n t out-of-sample data points, X t , the projection is: where K t is Eq. (135). Reconstruction of Φ(X) after projection onto the SPCA subspace is: where (a) is because of Eqs. (108) and (109). Similarly, reconstruction of an out-of-sample data point in dual SPCA is: where k t is Eq. (140). However, in Eqs. (112) and (113), we do not necessarily have Φ(X); therefore, in kernel SPCA, as in kernel PCA, we cannot reconstruct data, whether training or outof-sample. Eigenfaces This section introduces one of the most fundamental applications of PCA and its variants -facial recognition. Projection Directions of Facial Images PCA and kernel PCA can be trained using images of diverse faces, to learn the most important facial features, which account for the variation between faces. Here, two facial datasets, i.e. the Frey dataset and the AT&T (ORL) face dataset, are used to illustrate this concept. The AT&T dataset has been used twice, i.e., (1) with human subjects as its classes and (2) with having and not having eye glasses as its classes. Figure 9 demonstrates the top ten PCA directions for the PCA trained on these datasets. 
As demonstrated, the projection directions of a facial dataset are facial features which look like ghost faces in terms of appearance. That is why the facial projection directions are also referred to as "ghost faces". The ghost faces in PCA are also referred to as "eigenfaces" (Turk & Pentland, 1991a;b) because PCA uses the eigenvalue decomposition of the covariance matrix. In Fig. 9, the projection directions have captured different facial features that discriminate the data with respect to the maximum variance. The captured features are eyes, nose, cheeks, chin, lips, and eyebrows, which are the most important facial features. This figure does not include projection directions of the kernel PCA because in kernel PCA the projection directions are not available. Note that facial recognition using kernel PCA is referred to as "kernel eigenfaces" (Yang et al., 2000). The ghost faces (facial projection directions) of SPCA can be referred to as the "supervised eigenfaces". Facial recognition using kernel SPCA can also be referred to as "kernel supervised eigenfaces". Figure 9 does not include projection directions of the kernel SPCA because the projection directions are not available in kernel SPCA. Comparison of the PCA and SPCA directions demonstrates that both PCA and SPCA are capturing eye glasses as important discriminators. However, some Haar wavelet-like features (Stanković & Falkowski, 2003) are captured as the projection directions in SPCA. Haar wavelets are important in face recognition and detection; for example, they have been used in the Viola-Jones face detector (Wang, 2014). As demonstrated in Fig. 9, both PCA and SPCA have captured eyes as discriminators; however, SPCA has also focused on the frame of the eye glasses because of the usage of class labels.
PCA, on the other hand, has also captured other distracting facial features, such as the forehead, cheeks, hair, and mustache, because it is not aware that the two classes are different in terms of glasses and sees the dataset as a whole.
Projection of Facial Images
Using the obtained projection directions, the facial images can be projected onto the PCA subspace. Similarly, projected images using kernel PCA can also be obtained. Figure 8 demonstrates the projection of both training and out-of-sample facial images of the Frey dataset onto the PCA, dual PCA, and kernel PCA subspaces. The used kernels were linear, RBF, and cosine. As can be seen, the out-of-sample data, although not seen in the training phase, are projected very well. The model has, to some extent, extrapolated the projections, so it has learned generalizable subspaces.
Reconstruction of Facial Images
The facial images can be reconstructed after projection onto the PCA and SPCA subspaces. The reconstructions of training and test images, in the Frey and AT&T datasets, are depicted in Fig. 10. Reconstructions have been performed using all, and also the two top, projection directions. As expected, the reconstructions using all projection directions are very similar to the original images. However, reconstruction using the two leading projection directions is not perfect, although the most important facial features are reconstructed because the leading projection directions carry most of the information.
Conclusion and Future Work
In this paper, the PCA and SPCA were introduced in theoretical detail. Moreover, kernel PCA and kernel SPCA were covered. Illustrations and experiments on the Frey and AT&T face datasets were also provided in order to analyze the explained methods in practice. The calculation of K_y ∈ R^{n×n} in SPCA might be challenging for big data in terms of speed and storage. The Supervised Random Projection (SRP) addresses this problem by approximating the kernel matrix K_y using Random Fourier Features (RFF) (Rahimi & Recht, 2008).
As a future work, we will write a tutorial on SRP. Moreover, sparsity is very effective because of the "bet on sparsity" principle: "Use a procedure that does well in sparse problems, since no procedure does well in dense problems" (Friedman et al., 2009; Tibshirani et al., 2015). Another reason for the effectiveness of sparsity is Occam's razor (Domingos, 1999), stating that "simpler solutions are more likely to be correct than complex ones" or "simplicity is a goal in itself". Therefore, sparse methods such as sparse PCA (Zou et al., 2006; Shen & Huang, 2008), sparse kernel PCA (Tipping, 2001), and Sparse Supervised Principal Component Analysis (SSPCA) (Sharifzadeh et al., 2017) have been proposed. We will defer these methods to future tutorials.
A. Centering Matrix
Consider a matrix A ∈ R^{α×β}. We can show this matrix by its rows, A = [a_1, . . . , a_α]^⊤, or by its columns, A = [b_1, . . . , b_β], where a_i and b_j denote the i-th row and the j-th column of A, respectively. Note that the vectors are column vectors. The "left centering matrix" is defined as: H := I − (1/α) 1 1^⊤, where 1 = [1, . . . , 1]^⊤ ∈ R^α and I ∈ R^{α×α} is the identity matrix. Left multiplying this matrix to A, i.e., HA, removes the mean of rows of A from all of its rows: HA = A − 1 μ_rows^⊤, where the column vector μ_rows ∈ R^β is the mean of rows of A. The "right centering matrix" is defined as: H := I − (1/β) 1 1^⊤, where 1 = [1, . . . , 1]^⊤ ∈ R^β and I ∈ R^{β×β} is the identity matrix. Right multiplying this matrix to A, i.e., AH, removes the mean of columns of A from all of its columns: AH = A − μ_cols 1^⊤, where the column vector μ_cols ∈ R^α is the mean of columns of A. We can use both left and right centering matrices at the same time: HAH = A − 1 μ_rows^⊤ − μ_cols 1^⊤ + μ_all. This operation is commonly done for a kernel (see Appendix A in (Schölkopf et al., 1998) and Appendix C in this tutorial paper). The second term removes the mean of rows of A according to Eq. (115) and the third term removes the mean of columns of A according to Eq. (117).
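The centering effects described in this appendix (and the symmetry and idempotency properties discussed next) can be checked with a few lines of numpy (our own illustration):

```python
import numpy as np

alpha, beta = 4, 3
A = np.arange(1.0, 13.0).reshape(alpha, beta)

H_left = np.eye(alpha) - np.ones((alpha, alpha)) / alpha
H_right = np.eye(beta) - np.ones((beta, beta)) / beta

# Left centering removes the mean of rows; right centering removes the mean of columns.
print(np.allclose((H_left @ A).mean(axis=0), 0))    # True: every column mean is now zero
print(np.allclose((A @ H_right).mean(axis=1), 0))   # True: every row mean is now zero

# The centering matrix is symmetric and idempotent: H^k = H.
print(np.allclose(H_left, H_left.T))                  # True
print(np.allclose(H_left @ H_left @ H_left, H_left))  # True
```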
The last term, however, adds the overall mean of A back, where μ_all ∈ R^{α×β} is the matrix all of whose elements equal the overall mean of A:

μ_all(i, j) := (1/(αβ)) Σ_{k=1}^{α} Σ_{l=1}^{β} A(k, l), ∀ i, j,

where A(i, j) is the (i, j)-th element of A. Therefore, "double centering" of A is defined as:

HAH = A − 1 μ_rows^⊤ − μ_cols 1^⊤ + μ_all,

which removes both the row and column means of A but adds back the overall mean. Note that if the matrix A is square, the left and right centering matrices are equal, with the same dimensionality as A. In computer programming, use of the centering matrix might introduce some precision errors; therefore, in computer code, the identities above hold only up to a good enough approximation. Moreover, the centering matrix is symmetric because:

H^⊤ = (I − (1/α) 1 1^⊤)^⊤ = I − (1/α) 1 1^⊤ = H.

The centering matrix is also idempotent:

H^k = H,

where k is a positive integer. The proof is:

H² = (I − (1/α) 1 1^⊤)(I − (1/α) 1 1^⊤) = I − (2/α) 1 1^⊤ + (1/α²) 1 (1^⊤ 1) 1^⊤ = I − (2/α) 1 1^⊤ + (1/α) 1 1^⊤ = I − (1/α) 1 1^⊤ = H,

using 1^⊤ 1 = α; applying this repeatedly gives H^k = H(H(· · · (HH))) = H. Q.E.D. For illustration, we provide a simple example with the six entries A = 1 2 3 4 3 1, whose overall mean is 14/6 = 7/3; the row and column means follow from Eqs. (115) and (117).

B. Singular Value Decomposition
Consider a matrix A ∈ R^{α×β}. Singular Value Decomposition (SVD) (Stewart, 1993) is one of the most well-known and effective matrix decomposition methods. It has two different forms, i.e., complete and incomplete. There are different methods for obtaining this decomposition, one of which is Jordan's algorithm (Stewart, 1993). Here, we do not explain how to obtain the SVD but introduce its different forms and their properties. The "complete SVD" decomposes the matrix as:

A = U Σ V^⊤, with U ∈ R^{α×α}, Σ ∈ R^{α×β}, V ∈ R^{β×β},

where the columns of U and the columns of V are called the "left singular vectors" and "right singular vectors", respectively. In the complete SVD, Σ is a rectangular diagonal matrix whose main diagonal contains the "singular values". In the cases α > β and α < β, this matrix has the forms

Σ = [diag(σ_1, ..., σ_β); 0] and Σ = [diag(σ_1, ..., σ_α), 0],

respectively, i.e., the diagonal block is padded with zero rows or zero columns. In other words, the number of singular values is min(α, β).
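The complete-SVD properties just listed can be checked numerically; the snippet below is an illustrative sketch (not part of the tutorial) using NumPy's `svd` on a tall matrix:

```python
import numpy as np

# Illustrative check of the complete-SVD properties described above,
# for a tall matrix (alpha > beta).
alpha, beta = 5, 3
rng = np.random.default_rng(0)
A = rng.normal(size=(alpha, beta))

# full_matrices=True gives the complete SVD: U is alpha x alpha,
# Vt is beta x beta, and Sigma is a rectangular alpha x beta diagonal matrix.
U, s, Vt = np.linalg.svd(A, full_matrices=True)
Sigma = np.zeros((alpha, beta))
Sigma[:beta, :beta] = np.diag(s)       # diagonal block padded with zero rows

assert len(s) == min(alpha, beta)          # number of singular values
assert np.allclose(U @ Sigma @ Vt, A)      # A = U Sigma V^T
assert np.allclose(U.T @ U, np.eye(alpha)) # U is orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(beta))# V is orthogonal
print("complete SVD properties verified")
```

The same snippet also confirms the relation to eigen-decomposition discussed next: the singular values equal the square roots of the eigenvalues of A^⊤A.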
The "incomplete SVD" decomposes the matrix as:

A = U Σ V^⊤, with U ∈ R^{α×k}, Σ ∈ R^{k×k}, V ∈ R^{β×k},

where (Golub & Reinsch, 1970):

k := min(α, β),

and the columns of U and the columns of V are called the "left singular vectors" and "right singular vectors", respectively. In the incomplete SVD, Σ is a square diagonal matrix whose main diagonal contains the "singular values":

Σ = diag(σ_1, ..., σ_k).

Note that in both the complete and incomplete SVD, the left singular vectors are orthonormal and the right singular vectors are also orthonormal; therefore, U and V are both orthogonal matrices, so:

U^⊤ U = I, V^⊤ V = I.

If these orthogonal matrices are not truncated and thus are square matrices, e.g., in the complete SVD, we also have:

U U^⊤ = I, V V^⊤ = I.

Proposition 1. In both the complete and incomplete SVD of a matrix A, the left and right singular vectors are the eigenvectors of A A^⊤ and A^⊤ A, respectively, and the singular values are the square roots of the eigenvalues of either A A^⊤ or A^⊤ A.

Proof. We have:

A A^⊤ = U Σ V^⊤ V Σ^⊤ U^⊤ = U Σ² U^⊤,

which is the eigen-decomposition (Ghojogh et al., 2019a) of A A^⊤, where the columns of U are the eigenvectors and the diagonal of Σ² holds the eigenvalues, so the diagonal of Σ holds the square roots of the eigenvalues. We also have:

A^⊤ A = V Σ^⊤ U^⊤ U Σ V^⊤ = V Σ² V^⊤,

which is the eigen-decomposition (Ghojogh et al., 2019a) of A^⊤ A, where the columns of V are the eigenvectors and the diagonal of Σ² holds the eigenvalues, so the diagonal of Σ holds the square roots of the eigenvalues. Q.E.D.

C. Centering the Kernel Matrix for Training and Out-of-sample Data
This appendix is based on (Schölkopf et al., 1997) and Appendix A in (Schölkopf et al., 1998). The kernel matrix for the training data, {x_i}_{i=1}^{n} or X ∈ R^{d×n}, is:

K := Φ(X)^⊤ Φ(X) ∈ R^{n×n},

whose (i, j)-th element is:

K(i, j) = φ(x_i)^⊤ φ(x_j).

We want to center the pulled training data in the feature space:

φ̃(x_i) := φ(x_i) − (1/n) Σ_{k=1}^{n} φ(x_k).   (133)

If we center the pulled training data, the (i, j)-th element of the kernel matrix becomes:

K̃(i, j) = K(i, j) − (1/n) Σ_{k=1}^{n} K(k, j) − (1/n) Σ_{k=1}^{n} K(i, k) + (1/n²) Σ_{k=1}^{n} Σ_{l=1}^{n} K(k, l).

Therefore, the double-centered training kernel matrix is:

K̃ = K − (1/n) 1_{n×n} K − (1/n) K 1_{n×n} + (1/n²) 1_{n×n} K 1_{n×n} = HKH,   (134)

where 1_{n×n} := 1_n 1_n^⊤ ∈ R^{n×n} and 1_n := [1, ..., 1]^⊤ ∈ R^n. Eq. (134) is the kernel matrix when the pulled training data in the feature space are centered. In Eq.
(134), the dimensionality of both centering matrices is H ∈ R^{n×n}. The kernel matrix between the training data and the out-of-sample data, {x_{t,i}}_{i=1}^{n_t} or X_t ∈ R^{d×n_t}, is:

K_t := Φ(X)^⊤ Φ(X_t) ∈ R^{n×n_t},

whose (i, j)-th element is:

K_t(i, j) = φ(x_i)^⊤ φ(x_{t,j}).

We want to center the pulled training data in the feature space, i.e., Eq. (133). Moreover, the out-of-sample data should be centered using the mean of the training (and not the out-of-sample) data:

φ̃(x_{t,i}) := φ(x_{t,i}) − (1/n) Σ_{k=1}^{n} φ(x_k).   (137)

If we center the pulled training and out-of-sample data, the (i, j)-th element of the kernel matrix becomes:

K̃_t(i, j) = φ̃(x_i)^⊤ φ̃(x_{t,j}) =(a) K_t(i, j) − (1/n) Σ_{k=1}^{n} K_t(k, j) − (1/n) Σ_{k=1}^{n} K(i, k) + (1/n²) Σ_{k=1}^{n} Σ_{l=1}^{n} K(k, l),

where (a) is because of Eqs. (133) and (137). Therefore, the double-centered kernel matrix over the training and out-of-sample data is:

K̃_t = K_t − (1/n) 1_{n×n} K_t − (1/n) K 1_{n×n_t} + (1/n²) 1_{n×n} K 1_{n×n_t},   (138)

where 1_{n×n} := 1_n 1_n^⊤ ∈ R^{n×n}, 1_{n×n_t} := 1_n 1_{n_t}^⊤ ∈ R^{n×n_t}, 1_n := [1, ..., 1]^⊤ ∈ R^n, and 1_{n_t} := [1, ..., 1]^⊤ ∈ R^{n_t}. Eq. (138) is the kernel matrix when the pulled training data in the feature space are centered and the pulled out-of-sample data are centered using the mean of the pulled training data.
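Appendix C's centering formulas can be verified with a linear kernel, where the feature map is explicit (φ(x) = x), so that HKH and the Eq. (138) recipe can be compared directly against explicitly centered features. This is an illustrative sketch, not the tutorial's code:

```python
import numpy as np

# Check that HKH equals the kernel of explicitly centered features, and that
# out-of-sample kernels must be centered with the *training* mean (Eq. 138).
rng = np.random.default_rng(2)
n, n_t, d = 6, 3, 4
X = rng.normal(size=(d, n))        # training data, columns are samples
X_t = rng.normal(size=(d, n_t))    # out-of-sample data

K = X.T @ X                        # linear kernel: phi(x) = x
K_t = X.T @ X_t                    # training-vs-test kernel, n x n_t

H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
ones_nn = np.ones((n, n))
ones_nnt = np.ones((n, n_t))

# Training kernel centering (Eq. 134): HKH.
mu = X.mean(axis=1, keepdims=True)           # training mean in feature space
K_centered = (X - mu).T @ (X - mu)
assert np.allclose(H @ K @ H, K_centered)

# Out-of-sample centering (Eq. 138): subtract training-mean contributions.
K_t_centered = (K_t - ones_nn @ K_t / n
                    - K @ ones_nnt / n
                    + ones_nn @ K @ ones_nnt / n**2)
assert np.allclose(K_t_centered, (X - mu).T @ (X_t - mu))
print("kernel centering formulas verified")
```

Note that both blocks use only the training mean mu, mirroring the paper's point that out-of-sample data are centered using the mean of the pulled training data.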
Methodological challenges and solutions in auditory functional magnetic resonance imaging

Functional magnetic resonance imaging (fMRI) studies involve substantial acoustic noise. This review covers the difficulties posed by such noise for auditory neuroscience, as well as a number of possible solutions that have emerged. Acoustic noise can affect the processing of auditory stimuli by making them inaudible or unintelligible, and can result in reduced sensitivity to auditory activation in auditory cortex. Equally importantly, acoustic noise may also lead to increased listening effort, meaning that even when auditory stimuli are perceived, neural processing may differ from when the same stimuli are presented in quiet. These and other challenges have motivated a number of approaches for collecting auditory fMRI data. Although using a continuous echoplanar imaging (EPI) sequence provides high quality imaging data, these data may also be contaminated by background acoustic noise. Traditional sparse imaging has the advantage of avoiding acoustic noise during stimulus presentation, but at a cost of reduced temporal resolution. Recently, three classes of techniques have been developed to circumvent these limitations. The first is Interleaved Silent Steady State (ISSS) imaging, a variation of sparse imaging that involves collecting multiple volumes following a silent period while maintaining steady-state longitudinal magnetization. The second involves active noise control to limit the impact of acoustic scanner noise. Finally, novel MRI sequences that reduce the amount of acoustic noise produced during fMRI make the use of continuous scanning a more practical option. Together these advances provide unprecedented opportunities for researchers to collect high-quality data of hemodynamic responses to auditory stimuli using fMRI.
INTRODUCTION
Over the past 20 years, functional magnetic resonance imaging (fMRI) has become the workhorse of cognitive scientists interested in noninvasively measuring localized human brain activity. Although the benefits provided by fMRI have been substantial, there are numerous ways in which it remains an imperfect technique. This is perhaps nowhere more true than in the field of auditory neuroscience, due to the substantial acoustic noise generated by standard fMRI sequences. In order to study brain function using fMRI, auditory researchers face what can seem like an unappealing array of methodological decisions that impact the acoustic soundscape, cognitive performance, and imaging data characteristics to varying degrees. Here I review the challenges faced in auditory fMRI studies, possible solutions, and prospects for future improvement. Much of the information regarding the basic mechanics of noise in fMRI can be found in previous reviews (Amaro et al., 2002; Talavage et al., 2014); although I have repeated the main points for completeness, I focus on more recent theoretical perspectives and methodological advances. Table 1 summarizes several factors that contribute to the degradation of acoustic signals during fMRI. Echoplanar imaging (EPI) sequences commonly used to detect the blood oxygen level dependent (BOLD) signal in fMRI require radiofrequency (RF) pulses that excite tissue and gradient coils that help encode spatial position by altering the local magnetic field. During EPI the gradient coils switch between phase encoding and readout currents, producing Lorentz forces that act on the coils and connecting wires. These vibrations travel as compressional waves through the scanner hardware and eventually enter the air as acoustic sound. This gradient-induced vibration produces the most prominent acoustic noise during fMRI, and can continue for up to approximately 0.5 s after the gradient activity ceases (Ravicz et al., 2000).
Because the Lorentz force is proportional to the main magnetic field strength (B_0) and the gradient current, both high B_0 and high gradient amplitudes generally increase the amount of acoustic noise generated. For example, increasing field strength from 0.2 to 3 T will bring maximum acoustic noise from ∼85 to ∼130 dB SPL (Ravicz et al., 2000; Price et al., 2001).

Table 1 | Factors contributing to degradation of acoustic signals during fMRI.
Source                            Approximate noise level (dB SPL)
Gradient coils                    85-130
Helium pump and air circulation   57-76
In-ear foam earplugs              -
Sub-optimal headphones            -

Although the noise generated by gradient switching is the most obvious (i.e., loudest) source of acoustic noise during fMRI, it is not the only source of acoustic interference. RF pulses contribute additional acoustic noise, and noise is also present as a result of air circulation systems and helium pumps in the range of 57-76 dB SPL (Ravicz et al., 2000). Because RF and helium pump noise is substantially quieter than that generated by gradient coils, it probably provides a negligible contribution when scanning is continuous, but may be more relevant in sparse or interleaved silent steady state (ISSS) imaging sequences (described in a later section) when gradient-switching noise is absent. Auditory clarity can also be reduced as a result of in-ear hearing protection and sub-optimal headphone systems. Separately or together, these noise sources provide a level of acoustic interference that is significantly higher than that found in a typical behavioral testing environment. In the next section I turn to the more interesting question of the various ways in which this cacophony may impact auditory neuroscience.

CHALLENGES OF ACOUSTIC NOISE IN AUDITORY fMRI
Acoustic noise can influence neural response through at least three independent pathways, illustrated schematically in Figure 1. The effects will vary depending on the specific stimuli, population being studied, and brain networks being examined.
Importantly, though, in many cases the impact of noise on brain activation can be seen outside of auditory cortex. In this section I review the most pertinent challenges caused by acoustic scanner noise.

ENERGETIC MASKING
Energetic masking refers to the masking of a target sound by a noise or distractor sound that obscures information in the target. That is, interference occurs at a peripheral level of processing, with the masker already obscuring the target as the sound enters the eardrum (and thus at the most peripheral levels of the auditory system). The level of masking is often characterized by the signal-to-noise ratio (SNR), which reflects the relative loudness of the signal and masker. For example, an SNR of +5 dB indicates that on average the target signal is 5 dB louder than the masker. If scanner noise at a subject's ear is 80 dB SPL, achieving a moderately clear SNR of +5 dB would require presenting a target signal at 85 dB SPL. When considering the masking effects of noise it is important to note that the characteristics of the noise also matter: noise that has temporal modulation can permit listeners to glean information from the "dips" in the noise masker. Energetic masking highlights the most obvious challenge of using auditory stimuli in fMRI: subjects may not be able to perceive auditory stimuli due to scanner noise. If stimuli are inaudible, or less than fully perceived in some way, interpreting the subsequent neural responses can be problematic.

FIGURE 1 | Three pathways by which scanner noise can influence neural responses. First, scanner noise produces activity along the auditory pathway, reducing sensitivity to experimental stimuli. Second, successfully processing degraded stimuli may require additional executive processes (such as verbal working memory or performance monitoring). These executive processes are frequently found to rely on regions of frontal and premotor cortex, as well as the cingulo-opercular network. Finally, scanner noise may increase attentional demands, even for non-auditory tasks, an effect that is likely exacerbated in more sensitive subject populations.
Although the specific cognitive and neural consequences of these challenges may vary, the critical point is that scanner noise can alter both cognitive demand and the patterns of brain activity observed through multiple mechanisms, affecting both auditory and non-auditory brain networks.

A different (but related) sort of energetic masking challenge arises in experiments in which subjects are required to make vocal responses, as scanner noise can interfere with an experimenter's understanding of subject responses; in some cases this can be ameliorated by offline noise reduction approaches (e.g., Cusack et al., 2005). In addition, the presence of acoustic noise may also change the quality of vocalizations produced by subjects (Junqua, 1996). Acoustic noise thus impacts not only auditory perception but also speech production, which may be important for some experimental paradigms. Two ways of ascertaining the degree to which energetic masking is a problem are (1) to ask participants about their subjective experience hearing the stimuli or (2) to include a discrimination or recall test that can empirically verify the degree to which auditory stimuli are perceived. Given individual differences in hearing level and ability to comprehend stimuli in noise, these checks are best done for each subject, rather than audibility being verified solely by the experimenter. It is also important to test audibility using stimuli representative of those used in the experiment, as the masking effects of scanner noise can be influenced by specific acoustic characteristics of the target stimuli (for example, being more detrimental to perception of birdsong than speech). Although it is naturally important for subjects to be able to hear experimental stimuli (and for experimenters to hear subject responses, if necessary), the requirement of audibility is obvious enough that it is often taken into account when designing a study.
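The SNR arithmetic described above is simple enough to sketch in code. The helper names below are hypothetical, and real stimulus calibration must of course be measured at the subject's ear rather than computed; the second helper also shows why the much quieter helium-pump noise adds almost nothing to the gradient noise (sound levels in dB combine in power, not linearly):

```python
import math

# Illustrative helpers for back-of-the-envelope SNR and dB SPL arithmetic.
# Names and numbers are hypothetical, not from any calibration standard.

def presentation_level(noise_db_spl, target_snr_db):
    """Level (dB SPL) a target must be played at to reach a given SNR."""
    return noise_db_spl + target_snr_db

def combine_levels(*levels_db_spl):
    """Total level of independent noise sources (powers add, then back to dB)."""
    total_power = sum(10 ** (level / 10) for level in levels_db_spl)
    return 10 * math.log10(total_power)

# The example from the text: 80 dB SPL scanner noise, +5 dB SNR target.
print(presentation_level(80, 5))            # 85
# Gradient noise at 100 dB SPL plus helium-pump noise at 70 dB SPL:
# the quieter source raises the total by well under 0.01 dB.
print(round(combine_levels(100, 70), 3))
```

The power-sum rule also explains why two equal sources yield only a 3 dB increase, whereas a source 30 dB below the dominant one is effectively negligible.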
However, acoustic noise may also cause more pernicious challenges, to which I turn in the following sections.

AUDITORY ACTIVATION
A natural concern regarding acoustic noise during fMRI relates to the activation along the auditory pathway resulting from the scanner noise. If brain activity is modulated in response to scanner noise, might this reduce our ability to detect signals of interest? To investigate the effect of scanner noise on auditory activation, Bandettini et al. (1998) acquired data with and without EPI-based acoustic stimulation, enabling them to compare brain activity that could be attributed to scanner noise. They found that scanner noise results in increased activity bilaterally in superior temporal cortex (see also Talavage et al., 1999). Notably, this activity was not observed only in primary auditory cortex, but in secondary auditory regions as well. The timecourse of activation to scanner noise peaks 4-5 s after stimulus onset, returning to baseline by 9-12 s, and is thus comparable to that observed in other regions of cortex (Aguirre et al., 1998). Scanner-related activation in primary and secondary auditory cortex limits the dynamic range of these regions, producing weaker responses to auditory stimuli (Shah et al., 1999; Talavage and Edmister, 2004; Langers et al., 2005; Gaab et al., 2007). In addition to overall changes in magnitude or spatial extent of auditory activation, scanner noise can affect the level at which stimuli need to be presented for audibility, which can in turn affect activity down to the level of tonotopic organization (Langers and van Dijk, 2012). Thus, if activity along the auditory pathway proper is of interest, the contribution of scanner noise must be carefully considered when interpreting results.
It is worth noting that while previous studies have investigated the effect of scanner noise on overall (univariate) response magnitude, the degree to which this overall change in gain may affect multivariate analyses is unclear. Again, this is true for activity in both auditory cortex and regions further along the auditory processing hierarchy (Davis and Johnsrude, 2007; Peelle et al., 2010b).

COGNITIVE EFFORT DURING AUDITORY PROCESSING
Although acoustic noise can potentially affect all auditory processing, most of the research on the cognitive effects of acoustic challenge has occurred in the context of speech comprehension. There is increasing consensus that decreased acoustic clarity requires listeners to engage additional cognitive processing to successfully understand spoken language. For example, after hearing a list of spoken words, memory is worse for words presented in noise, even though the words themselves are intelligible (Rabbitt, 1968). When some words are presented in noise (but are still intelligible), subjects have difficulty remembering not only the words in noise, but prior words as well (Rabbitt, 1968; Cousins et al., 2014), suggesting an increase in cognitive processing for degraded speech that lasts longer than the degraded stimulus itself and interferes with memory (Miller and Wingfield, 2010). Additional evidence supporting the link between acoustic challenge and cognitive resources comes from pupillometry (Zekveld and Kramer, 2014) and from visual tasks that relate to individual differences in speech perception ability (Zekveld et al., 2007; Besser et al., 2012). The additional cognitive resources required are not specific to acoustic processing but appear to reflect more domain-general processes (such as verbal working memory) recruited to help with auditory processing (Rönnberg et al., 2013). Thus, acoustic challenge can indirectly impact a wide range of cognitive operations.
Consistent with this shared-resource view, behavioral effects of acoustic clarity are reliably found on a variety of tasks. Van Engen et al. (2012) compared listeners' recognition memory for sentences spoken in conversational speech with those spoken in a clear speaking style (with accentuated acoustic features), and found that memory was superior for the acoustically-clearer sentences. Likewise, listeners facing acoustic challenge, whether due to background noise, degraded speech, or hearing impairment, perform more poorly than listeners with normal hearing on auditory tasks ranging from sentence processing to episodic memory (Pichora-Fuller et al., 1995; Surprenant, 1999; Murphy et al., 2000; McCoy et al., 2005; Tun et al., 2010; Heinrich and Schneider, 2011; Lash et al., 2013). Converging evidence for the neural effects of effortful listening comes from fMRI studies in which increased neural activity is seen for degraded speech relative to unprocessed speech (Scott and McGettigan, 2013), illustrated in Figure 2.

FIGURE 2 | Listening to degraded speech requires increased reliance on executive processing and a more extensive network of brain regions. When speech clarity is high, neural activity is largely confined to traditional frontotemporal "language" regions including bilateral temporal cortex and left inferior frontal gyrus. When speech clarity is reduced, additional activity is frequently seen in frontal cortex, including middle frontal gyrus, premotor cortex, and the cingulo-opercular network (consisting of bilateral frontal operculum and anterior insula, as well as dorsal anterior cingulate) (Dosenbach et al., 2008). (Panel labels: clear speech, core speech network; degraded speech, core speech network + executive support.)

Davis and Johnsrude (2003) presented listeners with sentences that varied in their intelligibility, with speech clarity ranging from unintelligible to fully intelligible.
They found greater activity for degraded speech compared to fully intelligible speech in the left hemisphere, along both left superior temporal gyrus and inferior frontal cortex. Importantly, increased activity in frontal and prefrontal cortex was greater for moderately distorted speech than for either fully intelligible or fully unintelligible speech (i.e., an inverted U-shaped function), consistent with its involvement in recovering meaning from degraded speech (as distinct from a simple acoustic response). Acoustic clarity (i.e., SNR) also impacts the brain networks supporting semantic processing during sentence comprehension, possibly reflecting increased use of semantic context as top-down knowledge during degraded speech processing (Obleser et al., 2007; Obleser and Kotz, 2010; Sohoglu et al., 2012). Additional studies using various forms of degraded speech have also found difficulty-related increases in regions often associated with cognitive control or performance monitoring, such as bilateral insula and anterior cingulate cortex (Eckert et al., 2009; Adank, 2012; Wild et al., 2012; Erb et al., 2013; Vaden et al., 2013). The stimuli used in these studies are typically less intelligible than unprocessed speech (e.g., 4- or 6-channel noise-vocoded speech (see footnote 1), or low-pass filtered speech). Thus, although the increased recruitment of cognitive and neural resources to handle degraded speech is frequently observed, the specific cognitive processes engaged (and thus the pattern of neural activity) depend on the degree of acoustic challenge presented. An implication of this variability is that it may be hard to predict a priori the effect of acoustic challenge on the particular cognitive system(s) of interest. In summary, there is clear evidence that listening to degraded speech results in increased cognitive demand and altered patterns of brain activity.
The specific differences in neural activity depend on the degree of the acoustic challenge, and thus may differ between moderate levels of degradation (when comprehension accuracy remains high and few errors are made) and more severe levels of degradation (when comprehension is significantly decreased). It is important to note that effort-related differences in brain activity can be seen both within the classic speech comprehension network and in regions less typically associated with speech comprehension, and depend on the nature of both the stimuli and the task. Furthermore, the way in which these effort-related increases interact with other task manipulations has received little empirical attention, and thus the degree to which background noise may influence observed patterns of neural response for many specific tasks is largely unknown. Finally, although most of the research on listening effort has been focused on speech comprehension, it is reasonable to think that many of these same principles might transfer to other auditory domains, such as music or environmental sounds. And, as covered in the next section, effects of acoustic challenge need not even be limited to auditory tasks.

Footnote 1: Noise vocoding (Shannon et al., 1995) involves dividing the frequency spectrum of a stimulus into bands, or channels. Within each channel, the amplitude envelope is extracted and used to modulate broadband noise. Thus, the number of channels determines the spectral detail present in a speech signal, with more channels resulting in a more detailed (and for speech, more intelligible) signal (see Figure 2 in Peelle and Davis, 2012).

EFFECTS OF ACOUSTIC NOISE IN NON-AUDITORY TASKS
Although the interference caused by acoustic noise is most obvious when considering auditory tasks, it may also affect subjects' performance on non-auditory tasks (for example, by increasing demands on attention systems).
The degree to which noise impacts non-auditory tasks is an important question for cognitive neuroscience. Unfortunately, there have been relatively few studies addressing this topic directly. Using continuous EPI, Cho et al. (1998a) had subjects perform simple tasks in the visual (flickering checkerboard) and motor (finger tapping) domains, with and without additional scanner noise played through headphones. The authors found opposite effects in the visual and motor modalities: activity in visual cortex was increased with added acoustic noise, whereas activity in motor cortex was reduced. To investigate the effect of scanner noise on verbal working memory, Tomasi et al. (2005) had participants perform an n-back task using visually-displayed letters. The loudness of the EPI scanning was varied by approximately 12 dB by selecting two readout bandwidths to minimize (or maximize) the acoustic noise. No difference in behavioral accuracy was observed as a function of noise level. However, although the overall spatial patterns of task-related activity were similar, brain activity differed as a function of noise. The louder sequence was associated with increased activity in several regions including large portions of (primarily dorsal) frontal cortex and cerebellum, and the quieter sequence was associated with greater activity in (primarily ventral) regions of frontal cortex and left temporal cortex. Behaviorally, recorded scanner noise has been shown to impact cognitive control (Hommel et al., 2012); additional effects of scanner noise have been reported in fMRI tasks of emotional processing (Skouras et al., 2013) and visual mental imagery (Mazard et al., 2002). Thus, MRI acoustic noise influences brain function across a number of cognitive domains.
It is not only the loudness of scanner noise that is an issue, but also the characteristics of the sound: whether an acoustic stimulus is pulsed or continuous, for example, can significantly impact both auditory and attentional processes. Haller et al. (2005) had participants perform a visual n-back task, using either a conventional EPI sequence or one with a continuous sound (i.e., not pulsed). Although behavioral performance did not differ across sequences, there were numerous differences in the detected neural response. These included greater activity in cingulate and portions of frontal cortex for the conventional EPI sequence, but greater activity in other portions of frontal cortex and left middle temporal gyrus for the continuous-noise sequence. As with conventional EPI sequences, scanner noise is once again found to impact neural processing in areas beyond auditory cortex (see also Haller et al., 2009). It is worth noting that not every study investigating this issue has observed effects of acoustic noise in non-auditory tasks: Elliott et al. (1999), using participants performing visual, motor, and auditory tasks, found that scanner noise resulted in decreased activity uniquely during the auditory condition. Nevertheless, the number of instances in which scanner noise has been found to affect neural activity on non-auditory tasks is high enough that the issue should be taken seriously: although exactly how much of the difference in neural response can be attributed to scanner noise is debatable, converging evidence indicates that the effects of scanner noise frequently extend beyond auditory cortex (and auditory tasks).

Frontiers in Neuroscience | Brain Imaging Methods | August 2014 | Volume 8 | Article 253
These studies suggest that (1) a lack of behavioral effect of scanner noise does not guarantee equivalent neural processing; (2) both increases and decreases in neural activity are seen in response to scanner noise; and (3) the specific regions in which noise-related effects are observed vary across studies.

OVERALL SUBJECT COMFORT AND SPECIAL POPULATIONS
An additional concern regarding scanner noise is that it may increase participant discomfort. Indeed, acoustic noise can cause anxiety in human subjects (Quirk et al., 1989; Meléndez and McCrank, 1993), a finding which may also extend to animals. Scanner noise presents more of a challenge for some subjects than others, and it may be possible to improve the comfort of research subjects (and hopefully their performance) by reducing the amount of noise during MRI scanning. Additionally, if populations of subjects differ in a cognitive ability such as auditory attention, the presence of scanner noise may affect one group more than another. For example, age can significantly impact the degree to which subjects are bothered by environmental noise (Van Gerven et al., 2009); similarly, individual differences in noise sensitivity may contribute to (or reflect) variability in the effects of scanner noise on neural response (Pripfl et al., 2006). These concerns may be particularly relevant in clinical or developmental studies with children, participants with anxiety or other psychiatric conditions, or participants who are particularly bothered by auditory stimulation.

A CAUTIONARY NOTE REGARDING INTERACTIONS
One argument sometimes made in auditory fMRI studies using standard EPI sequences is that although acoustic noise may have some overall impact, because noise is present during all experimental conditions it cannot influence the results when comparing across conditions (which is often what is of most scientific interest). Given the ample evidence for auditory-cognitive interactions, such an assumption seems tenuous at best.
If anything, there is good reason to suspect interactions between acoustic noise and task difficulty, which may manifest differently depending on particular stimuli, listeners, and statistical methods (for example, univariate vs. multivariate analyses). In the absence of empirical support to the contrary, claims that acoustic noise is unimportant should be treated with skepticism.

SOLUTIONS FOR AUDITORY fMRI
Although at this point the prospects for auditory neuroscience inside an MRI scanner may look bleak, there is still cause for optimism. In this section I provide an overview of several methods for dealing with scanner noise that have been employed, noting the advantages and disadvantages of each. These approaches are listed in Table 2, a subset of which is shown in Figure 3.

PASSIVE HEARING PROTECTION
Subjects in MRI studies typically wear over-ear hearing protection that attenuates acoustic noise by approximately 30 dB. Subjects may also wear insert earphones, or foam earplugs that can provide an additional reduction in acoustic noise of 25-28 dB, for a combined reduction of approximately 40 dB (Ravicz and Melcher, 2001). Although hearing protection can reduce the acoustic noise perceived during MRI, it cannot eliminate it completely: even if perfect acoustic isolation could be achieved at the outer ear, sound waves still travel to the cochlea through bone conduction. Thus, hearing protection is only a partial solution, and some degree of auditory stimulation during conventional fMRI is unavoidable. In addition, passive hearing protection may change the frequency spectrum of stimuli, affecting intelligibility or clarity.

CONTINUOUS SCANNING USING A STANDARD EPI SEQUENCE
One approach in auditory fMRI is to present stimuli using a conventional continuous scanning paradigm, taking care to ensure that participants are able to adequately hear the stimuli (Figure 4A).
This approach generally assumes that, because scanning noise is consistent across experimental conditions, it is unlikely to systematically affect comparisons among conditions (typically what is of interest). I have already noted above the danger of this assumption with respect to additional task effects and ubiquitous interactions between perceptual and cognitive factors. However, for some paradigms a continuous scanning paradigm may be acceptable. From an imaging perspective, continuous imaging will generally provide the largest quantity of data, and no special considerations are necessary when analyzing the data. Continuous EPI scanning has been used in countless studies to identify brain networks responding to environmental sounds, speech, and music. The critical question is whether the cognitive processes being imaged are actually the ones in which the experimenter is interested (see footnote 2).

FIGURE 3 | Schematic illustration of the relationship between temporal resolution and acoustic noise during stimulus presentation for various MRI acquisition approaches. Although the details for any specific acquisition depend on a combination of many factors, in general significant reductions in acoustic noise are associated with poorer temporal resolution.

SPARSE IMAGING
When researchers are concerned about acoustic noise in fMRI, by far the most widely used approach is sparse imaging, also referred to as clustered volume acquisition (Scheffler et al., 1998; Eden et al., 1999; Edmister et al., 1999; Hall et al., 1999; Talavage and Hall, 2012). In sparse imaging, illustrated in Figure 4B, the repetition time (TR) is set to be longer than the acquisition time (TA) of a single volume. Slice acquisition is clustered toward the end of a TR, leaving a period in which no data are collected. This intervening period is relatively quiet due to the lack of gradient switching, and permits stimuli to be presented in more favorable acoustic conditions.
Because of the inherent lag of the hemodynamic response (typically 4-7 s to peak), the scan following stimulus presentation can still measure responses to stimuli, including the peak response if presentation is timed appropriately. The primary disadvantage of sparse imaging is that due to the longer TR, less information is available about the timecourse of the response (i.e., there is a lower sampling rate). In addition to reducing the accuracy of the response estimate, the reduced sampling rate also means that differences in the timing of the response may be misinterpreted as differences in magnitude. An example of this is shown in Figure 4B, in which hemodynamic responses that differ in magnitude and timing will give different results, depending on the time at which the response is sampled. The lack of timecourse information in sparse imaging can be ameliorated in part by systematically varying the delay between the stimulus and volume collection (Robson et al., 1998; Belin et al., 1999), illustrated in Figure 4D. In this way, the hemodynamic response can be sampled at multiple time points relative to stimulus onset over different trials. Thus, across trials, an accurate temporal profile for each category of stimulus can be estimated. Like all event-related fMRI analyses, this approach assumes a consistent response for all stimuli in a given category. It also may require prohibitively long periods of scanning to sample each stimulus at multiple points; this requirement has meant that in practice varying presentation times relative to data collection is done infrequently. Many studies incorporating sparse imaging use an event-related design, along with TRs in the neighborhood of 16 s or greater, in order to allow the scanner-induced BOLD response to return to near baseline levels on each trial.
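To make the sampling pitfall concrete, the following sketch illustrates it numerically. This is a hypothetical illustration, assuming a simple single-gamma response shape; the peak delays and amplitudes below are arbitrary values chosen for demonstration, not empirical HRFs from any study. It shows how a single late sparse sample can reverse the apparent ordering of two responses whose true peak magnitudes differ:

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, peak_delay=6.0, amplitude=1.0):
    """Simple single-gamma hemodynamic response (arbitrary units)."""
    return amplitude * gamma.pdf(t, a=peak_delay, scale=1.0)

t = np.linspace(0, 20, 2001)                     # seconds after stimulus onset
fast = hrf(t, peak_delay=5.0, amplitude=1.0)     # earlier, taller peak
slow = hrf(t, peak_delay=7.0, amplitude=0.9)     # later, smaller peak

# True peak magnitudes: the "fast" response genuinely peaks higher.
assert fast.max() > slow.max()

# A single sparse sample taken late (here 9 s after onset) reverses the
# apparent ordering: the later-peaking response looks larger at that point.
sample_idx = np.argmin(np.abs(t - 9.0))
print(fast[sample_idx], slow[sample_idx])
```

Varying the stimulus-to-acquisition delay across trials, as described above, recovers samples at multiple points on these curves and so disambiguates latency from magnitude.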
Although this may be particularly helpful for experiments in which activity in primary auditory areas is of interest, it is not necessary for all studies, and in principle sparse designs can use significantly shorter TRs (e.g., <5 s). Sometimes referred to as "fast" sparse designs, these shorter-TR designs enable researchers to take advantage of a faster stimulus presentation rate and acquire more data in a given period of time, and for many experiments may be a more efficient approach (Perrachione and Ghosh, 2013).

Cardiac gating

Cardiac gating addresses problems caused by the fact that heartbeat and associated changes in blood flow can displace brainstem structures, making activity in these regions difficult to detect. With cardiac gating, researchers monitor a subject's heart rate, and then adjust volume acquisition to be synchronized to the heart rate (i.e., occurring at a consistent time in the cardiac cycle) (Guimaraes et al., 1998). Because the heart rate will not perfectly align with a chosen TR, using cardiac gating results in a variable TR (± approximately half of a cardiac cycle). (With relatively long TRs, the variability in sampling rate is typically not a significant problem, as the response to one trial is unlikely to overlap the response to another trial.) Cardiac gating reduces data variability due to cardiac pulse motion artifacts and can thus improve the ability to detect activity in subcortical structures prone to these artifacts, such as the inferior colliculus and medial geniculate body (Harms and Melcher, 2002; Overath et al., 2012).

INTERLEAVED SILENT STEADY STATE (ISSS) IMAGING

The main disadvantages in traditional sparse imaging come from the lack of information about the timecourse of the hemodynamic response, and the relatively small amount of data collected (leading to potentially less accurate parameter estimates and fewer degrees of freedom in first-level analyses).

FIGURE 4 | Different approaches to imaging auditory stimuli provide varying compromises between temporal resolution and acoustic noise. Example BOLD responses are shown in blue and red; these could reflect different responses across individuals or experimental conditions. (A) Continuous EPI provides relatively good temporal resolution, but with a high level of continuous acoustic noise. (B) Sparse imaging includes a period in which no data are collected, allowing the presentation of stimuli in relative quiet (due to the absence of gradient switching noise). The delay in the hemodynamic response enables the peak response to be collected after stimulus presentation has finished. The reduced temporal resolution of a traditional sparse imaging sequence may obscure differences in response latency or shape. In the hypothetical example, the blue response peaks higher than the red response; however, at the time when the sparse data point is collected, the red response is higher. (C) With interleaved silent steady state (ISSS) imaging, stimuli can also be presented in the absence of gradient switching noise, but a greater amount of data can be collected after presentation compared to sparse imaging. The delay in the hemodynamic response enables peak responses to be collected with relatively good temporal resolution. (D) By varying the time at which stimuli are presented relative to data collection across trials, non-continuous imaging can still provide information about the timecourse of the average response to a category of stimuli. Note how a different part of the BOLD response is sampled on each trial.

Although in principle multiple volumes can be acquired following each silent period, the equilibrium state of the brain tissue changes during these silent periods: The additional scans do not reflect steady-state longitudinal magnetization, and thus vary over time.
The lack of steady-state longitudinal magnetization adds variance to the data that can be challenging to account for in timeseries statistical models. Schwarzbauer et al. (2006) developed a solution to this problem by implementing a sequence with continuous excitation RF pulses, but variable readout gradients. The excitation pulses maintain steady-state longitudinal magnetization but produce relatively little acoustic noise. As in traditional sparse imaging, an ISSS sequence permits stimuli to be presented in quiet and the peak BOLD activity to be captured due to the delay in hemodynamic response. However, with ISSS any number of volumes can be obtained following a silent period, as illustrated in Figure 4C. Although technically the temporal resolution is reduced relative to continuous scanning (as there are times when no data are being collected), the effective temporal resolution can be nearly as good as continuous scanning because data collection can capture much of the BOLD response following stimulus presentation: The ability of the sequence to capture the early hemodynamic response is limited solely by the length of the stimuli (with shorter stimuli permitting data collection to start closer to stimulus onset). ISSS thus combines advantages of continuous and sparse imaging, allowing the presentation of stimuli in relative quiet, while still providing information on the timing of the hemodynamic response. Variations of ISSS fMRI have now been used successfully in numerous studies of auditory processing (Doehrmann et al., 2008, 2010; Bekinschtein et al., 2011; Davis et al., 2011; Engel and Keller, 2011; Mueller et al., 2011; Rodd et al., 2012; Yoo et al., 2012). Compared to continuous or sparse imaging data, ISSS data can be challenging to analyze because the data are discontinuous; that is, the sampling rate is not consistent. Because of this added wrinkle, below I briefly review two examples of analyzing ISSS data, illustrated in Figure 5.
No doubt with increasing experience ISSS data analysis can be further refined. These descriptions are based on an imaginary event-related fMRI study with two conditions (A and B) and a TR of 2 s. Each trial involves presenting a single stimulus during a period of 4 s of silence, followed by 8 s of data acquisition. With a TR of 2 s, this results in 4 volumes of data per trial.

Analyzing ISSS fMRI data using a finite impulse response (FIR) model

Perhaps the most straightforward approach to analyzing ISSS fMRI data is to use a finite impulse response (FIR) model, shown in Figure 5A. A typical FIR model would consider only the scans on which data were collected. The model would thus have 4 regressors for condition A (one for each volume following stimulus presentation), and 4 regressors for condition B. These regressors would model the response at each time bin following a stimulus, making no assumptions about the shape of the response.

FIGURE 5 | Two examples of ISSS fMRI data analysis. The example experiment is illustrated in the top row and identical for both approaches. No data are collected during stimulus presentation; following each silent period 4 scans are collected. (A) In the finite impulse response (FIR) model, scans are concatenated, and each time bin following an event is modeled using a separate regressor. The modeled scans have temporal discontinuities, but accurately represent all of the data collected. (B) By incorporating dummy scans in the modeled timeseries, the original temporal structure of the true data is preserved, facilitating the use of basis functions such as a canonical HRF. Regressors for experimental conditions should be set to 0 during the period of the dummy scans; the dummy scans themselves can be modeled with a single regressor. However, the modeled scans now overestimate the amount of data collected, artificially inflating the degrees of freedom in single-subject (first-level) models.
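The FIR design just described can be sketched numerically. This is a minimal illustration only: the trial ordering below is hypothetical, and a real analysis would also include nuisance regressors and appropriate filtering.

```python
import numpy as np

# Hypothetical ISSS experiment from the text: each trial is 4 s of
# silence (stimulus presentation, no scans) followed by 4 scans (TR = 2 s).
VOLS_PER_TRIAL = 4
trial_conditions = ["A", "B", "B", "A", "A", "B"]  # example trial order

conditions = ["A", "B"]
n_scans = len(trial_conditions) * VOLS_PER_TRIAL

# One FIR regressor per condition per post-stimulus time bin:
# 2 conditions x 4 bins = 8 columns.
X = np.zeros((n_scans, len(conditions) * VOLS_PER_TRIAL))
for trial, cond in enumerate(trial_conditions):
    for t_bin in range(VOLS_PER_TRIAL):
        row = trial * VOLS_PER_TRIAL + t_bin
        col = conditions.index(cond) * VOLS_PER_TRIAL + t_bin
        X[row, col] = 1.0

# Each scan is modeled by exactly one regressor, with no assumption
# about the shape of the hemodynamic response.
assert np.allclose(X.sum(axis=1), 1.0)
print(X.shape)  # (24, 8)
```

An F-test over a condition's 4 columns then asks "is there any effect at any time point?", while a t-test over the same columns asks whether the response is increased on average.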
As with any FIR model, given the multiple regressors for each condition, there are several ways of summarizing the response to a condition, including an F-test over all 4 columns for a condition (asking: is there any effect at any time point?) or a t-test over all 4 columns (asking: on average, is there an increased response?). Similar options exist for comparing responses between conditions. Because the ISSS scans are not continuous, care must be taken when implementing temporal filtering, including the typical highpass filtering done on fMRI timeseries data. Omitting highpass filtering may make an analysis particularly susceptible to the influence of low-frequency (non-acoustic) noise. One way to help mitigate this issue is to ensure trials of different conditions are not too far apart in time, so that comparisons across conditions are not confounded with low-frequency fluctuations in the signal.

Analyzing ISSS fMRI data using dummy scans to mimic a continuous timeseries

An alternative approach is to ensure that rows of the design matrix correspond to a continuous timeseries, illustrated in Figure 5B. To accomplish this, dummy volumes can be included in the design matrix during the period in which no data were actually collected. A straightforward option is to use the mean EPI image across all (real) volumes in a session, although any identical image will work: Using an identical image for all dummy images means that all dummy images can be perfectly modeled using a single regressor (0 for real scans, 1 for dummy scans). With this model it is then possible to use a canonical HRF (or any other basis set) for events of interest; the parameter estimates for these regressors are not influenced by the dummy scans. It is important to set the values for the non-dummy regressors to zero during the dummy scans to preserve estimation of the parameter estimates, and to rescale the regressors so that the maximum values are matched after these adjustments.
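The dummy-scan bookkeeping can be sketched as follows. This is a toy illustration with a hypothetical trial order; a simple boxcar stands in for an HRF-convolved regressor, and a real model would also include the rescaling step and nuisance terms.

```python
import numpy as np

# Hypothetical experiment: 4 s silence (= 2 dummy volumes at TR = 2 s)
# followed by 4 real scans per trial.
TR_PER_SILENCE = 2
VOLS_PER_TRIAL = 4
trial_conditions = ["A", "B", "A", "B"]

rows_per_trial = TR_PER_SILENCE + VOLS_PER_TRIAL
n_rows = len(trial_conditions) * rows_per_trial

# Dummy regressor: 1 for inserted (identical) dummy volumes, 0 for real scans.
dummy = np.zeros(n_rows)
is_real = np.ones(n_rows, dtype=bool)
for trial in range(len(trial_conditions)):
    start = trial * rows_per_trial
    dummy[start:start + TR_PER_SILENCE] = 1.0
    is_real[start:start + TR_PER_SILENCE] = False

# Condition regressor (boxcar placeholder): must be zero during dummy scans.
cond_A = np.zeros(n_rows)
for trial, cond in enumerate(trial_conditions):
    if cond == "A":
        start = trial * rows_per_trial + TR_PER_SILENCE
        cond_A[start:start + VOLS_PER_TRIAL] = 1.0
assert np.all(cond_A[dummy == 1.0] == 0.0)

# Degrees of freedom must count only real scans, not dummy rows.
n_regressors = 2  # cond_A + dummy (plus nuisance terms in practice)
df_corrected = int(is_real.sum()) - n_regressors
print(n_rows, df_corrected)  # 24 rows in the model, but only 14 usable df
```

The key points mirror the text: condition regressors are zeroed wherever the dummy regressor is active, and first-level degrees of freedom are computed from real scans only.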
It is not actually necessary to use dummy scans in order to take advantage of timeseries properties, such as highpass filtering or using an informed basis function (e.g., a canonical HRF); an appropriate design matrix that takes into account the discontinuous nature of the data could be constructed. However, the use of dummy scans facilitates constructing design matrices within common fMRI analysis software packages, which are typically designed to work with continuous timeseries data. When dummy scans are included in the final design matrix, the default degrees of freedom in the model will be incorrectly high, as the dummy scans should not be counted as observations. Thus, for first-level (single subject) analyses, an adjustment to the degrees of freedom should be made for valid statistical inference. For group analyses using a two-stage summary statistics procedure, however, adjusting for first-level degrees of freedom is not necessary.

ACTIVE NOISE CONTROL

A different approach to reducing the impact of acoustic noise in the MRI scanner is to change the way this sound is perceived by listeners using active noise control (Hall et al., 2009). As typically implemented, active noise control involves measuring the properties of the scanner noise, and generating a destructive acoustic signal (also known as "antinoise") which is sent to the headphones and cancels a portion of the scanner noise (Chambers et al., 2001, 2007; Li et al., 2011). The destructive signal is based on estimates of scanner noise that can either be fixed, or adjusted over the course of a scanning session to accommodate changes in the scanner noise. Adjusting over time may be important in the context of fMRI, as subjects may move their heads over the course of a scanning session, which affects the acoustic characteristics of the noise reaching their ears.
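The adaptive ("adjusted over time") variant can be illustrated with a classic least-mean-squares (LMS) canceller. This is a toy sketch, not any published ANC implementation: a pure tone stands in for gradient noise, and the path from the reference microphone to the ear is a simple gain.

```python
import numpy as np

def lms_cancel(reference, primary, n_taps=8, mu=0.01):
    """Adapt an FIR filter so the filtered reference cancels the noise
    in the primary (ear) signal; returns the residual the listener hears."""
    w = np.zeros(n_taps)
    residual = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]  # most recent reference samples first
        y = w @ x                          # current antinoise estimate
        e = primary[n] - y                 # residual after cancellation
        w += 2 * mu * e * x                # LMS weight update
        residual[n] = e
    return residual

fs = 1000                                  # sample rate (Hz), toy value
t = np.arange(5 * fs) / fs
reference = np.sin(2 * np.pi * 60 * t)     # stand-in for measured scanner noise
primary = 0.8 * reference                  # noise as it reaches the ear
residual = lms_cancel(reference, primary)

# Compare noise power before and after cancellation, using only the last
# second of data (after the filter has had time to adapt).
ratio = np.var(residual[-fs:]) / np.var(primary[-fs:])
print(ratio)  # far below 1 once the filter has converged
```

When the acoustic path changes (e.g., the subject moves), the weights re-adapt, which is exactly the motivation for time-varying noise estimates described above.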
In addition to sound presentation hardware, active noise control also requires an MR-compatible method for measuring the acoustic noise in the scanner, used to shape the destructive noise pulses. Whether sound is generated in the headset or passed through a tube, the timing of this canceling sound is critical, as it must arrive with the specified phase relationship to the scanner noise. Active noise control can reduce the level of acoustic noise by 30-40 dB, and subjective loudness by 20 dB (the difference between these measures likely reflecting the contribution of bone-conducted vibration) (Hall et al., 2009; Li et al., 2011). Particularly relevant is that when using relatively simple auditory stimuli (pure tone pitch discrimination), (1) behavioral performance in the scanner was significantly better and (2) activity in primary auditory regions was significantly greater under conditions of active noise control compared to normal presentation (Hall et al., 2009).

USING CONTINUOUS fMRI SEQUENCES WITH REDUCED ACOUSTIC NOISE

Software modifications to EPI sequences intended to reduce the effects of acoustic scanner noise can be broadly grouped into two approaches: changing the nature of the acoustic stimulation and reducing the overall sound levels. One approach to reducing sound levels of a standard EPI sequence is to modify the gradient pulse shape (Hennel et al., 1999). Typically, gradient pulses are trapezoidal, to increase the speed and efficiency of gradient encoding. By using sinusoidal pulses, acoustic noise can be reduced during BOLD fMRI (Loenneker et al., 2001), with some increase in the spatial smoothness of the reconstructed data. Building on the idea of modified pulse shape, another type of quiet fMRI sequence was introduced by Schmitter et al. (2008). Their quiet continuous EPI sequence takes advantage of two key modifications to reduce acoustic noise.
The first involves collecting data using a sinusoidal traversal of k-space, enabling more gradual gradient switching (readout gradients are purely sinusoidal) and reducing the acoustic noise produced. The second modification addresses the fact that a large component of the acoustic noise during EPI comes from the resonance of the scanner hardware in response to the gradient switching. This reflects specific physical properties of each scanner, and varies across different speeds of gradient switching. Thus, it is possible to perform scanner-specific measurements of the acoustic noise generated for different readout gradient switching frequencies, and select a combination of parameters that is relatively quiet but does not unacceptably compromise signal quality. In Peelle et al. (2010a), we chose a bandwidth of 1220 Hz/Px and an echo time of 44 ms (compared to a standard sequence with values of 2230 Hz/Px and 30 ms, respectively). As might be expected, the longer echo time led to moderate increases in signal dropout in regions prone to susceptibility artifact, such as inferior temporal and ventromedial frontal cortex. Together, these modifications produce approximately a 20 dB reduction in acoustic noise for the scanner, and using this sequence results in greater activity in several auditory regions compared to a standard continuous sequence (Schmitter et al., 2008; Peelle et al., 2010a). Taking another approach to reducing the impact of scanner noise on observed activation, Seifritz et al. (2006) developed a continuous-sound EPI sequence to reduce the auditory stimulation caused by rapid acoustic pulses (Harms and Melcher, 2002), as found in conventional EPI. In their sequence the RF excitation pulses, phase-encoding gradients, and readout gradients are divided into short trains. The resulting repetition rate is fast enough that the acoustic noise is perceived as a continuous sound, rather than the pulsed sound perceived in conventional EPI.
Using sparse imaging, the authors compared neural activity in response to audio recordings of conventional EPI vs. the "continuous sound" sequence. They found that the continuous sound sequence resulted in reduced activity in auditory cortex due to scanner noise, and increased activity in response to experimental manipulations.

SCANNER HARDWARE MODIFICATION

Although it may not be practical for most research groups to significantly modify scanner hardware, by changing the physical configuration of the MRI scanner it is possible to significantly reduce the amount of acoustic noise generated. Some approaches have included the use of rotating coils to reduce gradient switching (Cho et al., 1998b), placing the gradient coils in a vacuum to reduce noise propagation (Katsunuma et al., 2001), or altering the coil structure (Mansfield and Haywood, 2000). By combining multiple approaches and focusing on the largest contributors to acoustic noise, substantial reductions in noise levels can be achieved (Edelstein et al., 2002). In the future, commercial applications of these approaches may help to limit the impact of scanner noise during fMRI, particularly when combined with some of the other solutions outlined above.

AUDITORY fMRI IN NONHUMAN ANIMALS

Although my focus has been on fMRI of human subjects, many of these same challenges and solutions apply equally when using fMRI with animals (Petkov et al., 2009; Hall et al., 2014). As with human listeners, the choice of scanning protocol will depend on a researcher's primary interests and the acceptable level of tradeoff between data quality, temporal resolution, and acoustic noise. Although some concerns about attention and cognitive challenge may be mitigated when dealing with sedated animals, in the absence of empirical support it is probably not safe to assume that one protocol will prove optimal in all situations.
In addition, the timing parameters of any non-continuous sequence will naturally need to be optimized for the HRF of the animal being studied (Brown et al., 2013).

CHOOSING THE APPROPRIATE SOLUTION

As discussed above, different solutions for auditory fMRI have intrinsic strengths and weaknesses, and thus any chosen approach involves a degree of compromise with respect to acoustic noise (loudness or quality), psychological impact, and MRI data characteristics. It may be useful to think about this in a framework of multidimensional optimization, as illustrated in Figure 6.

FIGURE 6 | Choosing the best method for auditory fMRI involves considering a number of dimensions. These dimensions are not independent: for example, using a modified EPI sequence may change the properties of the MRI data, the acoustic properties of the scanning noise, and the resulting impact on psychological processes. The focus of optimization will depend on the acoustic characteristics of the stimuli and the neural processes of interest.

Because these dimensions are not independent, it is impossible to optimize for everything simultaneously (for example, approaches that have the lowest acoustic noise also tend to have poorer temporal resolution, forcing a researcher to choose between noise level and temporal resolution). It is therefore important to identify the dimensions that are most important for a given study. These will depend on the specific stimuli and scientific question at hand. Although there are exceptions, as a general rule it is probably safest to prioritize the auditory and psychological aspects of data collection. If the processing of stimuli is affected by scanner noise (through masking or increased perceptual effort), the resulting neural processing may differ from what the researchers are interested in. In this case increased image quality will not help in identifying neural activity of interest.
Thus, a sparse imaging sequence is nearly always preferable to continuous sequences because it presents the lowest level of background noise, and is straightforward to implement. If possible, an ISSS sequence presents an even stronger solution, as it permits the presentation of stimuli in relative quiet while not sacrificing temporal resolution to the same degree as a traditional sparse sequence. When it is not feasible to present stimuli in the absence of scanner noise, considering the acoustic characteristics of the stimuli is critical. For example, if speech prosody, voice/speaker perception, or musical timbre is of interest, spectral cues may be particularly important, and thus the spectrum of the scanner noise may be a deciding factor. In contrast, for other stimuli (such as musical beat, or other aspects of speech perception) temporal factors may dominate. That being said, from a practical standpoint the majority of researchers will be constrained by available sequences and equipment, and thus the most common choice will be between a continuous EPI sequence and a traditional sparse sequence. In this case, adapting a paradigm and stimuli to work with the sparse sequence is almost always a safer choice.

RELYING ON CONVERGING EVIDENCE TO SUPPORT CONCLUSIONS

Although it is no doubt important to optimize fMRI acquisition and analysis parameters for auditory studies, the strongest inferences will always be drawn based on converging evidence from multiple modalities. With respect to auditory processing, this includes functional neuroimaging methods that allow the measurement of neural responses in the absence of external noise, such as positron emission tomography (PET), electroencephalography (EEG), magnetoencephalography (MEG), electrocorticography (ECoG), or optical imaging, as well as studying behavior in people with differences in brain structure (e.g., as a result of stroke or neurodegenerative disease).
CONCLUSIONS AND RECOMMENDATIONS

Echoplanar fMRI is acoustically noisy and poses considerable challenges for researchers interested in studying auditory processing. Although it is impossible to fully resolve the tension between the acoustic noise produced during fMRI and the desired experimental environment, the following steps will often be helpful in optimizing auditory responses and our interpretation of them:

(1) Address, rather than ignore, the possible effects of background noise on activity seen in fMRI studies. Considering scanner noise is particularly important when using auditory stimuli, but may apply to non-auditory stimuli as well.
(2) When possible, use methods that limit the impact of acoustic noise during fMRI scanning.
(3) Provide empirical demonstrations of the effect of scanner noise on specific paradigms and analyses.

It is an exciting time for auditory neuroscience, and continuing technical and methodological advances suggest an even brighter (though hopefully quieter) future.
Novel spatiotemporal feature extraction parallel deep neural network for forecasting confirmed cases of coronavirus disease 2019

The coronavirus disease 2019 (COVID-19) pandemic continued as of March 26, having spread to Europe by approximately February 24; a report from April 29 revealed 1.26 million confirmed cases and 125 928 deaths in Europe. To provide governments and enterprises with a reference for arranging countermeasures, this paper proposes a novel deep neural network framework, COVID-19Net, to forecast the COVID-19 outbreak. The COVID-19Net framework combines a 1D convolutional neural network (CNN), a 2D CNN, and bidirectional gated recurrent units (GRUs), and can integrate the temporal and spatial characteristics, as well as the influencing factors, of the COVID-19 cumulative cases. Three European countries with severe outbreaks (Germany, Italy, and Spain) were studied to extract spatiotemporal features and predict the number of confirmed cases. The prediction results acquired from COVID-19Net were compared with those obtained using a CNN, a GRU, and a CNN-GRU. The mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE), which are commonly used model assessment indices, were used to compare the accuracy of the models. The results verified that COVID-19Net was notably more accurate than the other models: the MAPE generated by COVID-19Net was 1.447 for Germany, 1.801 for Italy, and 2.828 for Spain, considerably better than those of the other models. This indicates that the proposed framework can accurately predict the accumulated number of confirmed cases in the three countries and serve as an essential reference for devising public health strategies. The results also indicate that COVID-19 has strong spatiotemporal relations, suggesting the importance of maintaining social distance and avoiding unnecessary trips.
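The three assessment indices used for the model comparison can be stated compactly. The sketch below (with made-up forecast numbers purely for illustration, not values from the paper) computes MAE, MAPE, and RMSE for a cumulative-case forecast:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

def mape(y_true, y_pred):
    """Mean absolute percentage error (in percent)."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def rmse(y_true, y_pred):
    """Root mean square error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Hypothetical cumulative case counts vs. model forecasts.
y_true = np.array([1000.0, 1200.0, 1500.0, 1900.0])
y_pred = np.array([980.0, 1230.0, 1470.0, 1950.0])

print(mae(y_true, y_pred), mape(y_true, y_pred), rmse(y_true, y_pred))
```

MAPE is scale-free (a percentage), which is why it is convenient for comparing forecast accuracy across countries whose case counts differ by orders of magnitude.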
Introduction

Coronavirus disease 2019 (COVID-19) is caused by a highly contagious novel virus and was first discovered in Wuhan City in Hubei, China, where the pandemic initially began. The pandemic quickly spread across China and then the entire world. Because of the high contagion rate of COVID-19 and the likelihood of an outbreak, a lockdown was imposed on Wuhan City, which has a population of nearly 11 million people. Meanwhile, the Chinese government implemented a series of strict measures to control the pandemic. Fig. 1 illustrates the timeline of the COVID-19 pandemic. In December 2019, a pneumonia case of unknown cause was discovered. Shortly afterward, patients with similar symptoms appeared across China and substantially burdened the healthcare system. On January 12, 2020, the World Health Organization (WHO) named the pneumonia 2019-nCoV. The disease was then confirmed to be contagious between people by Zhong Nanshan, an academician of the Chinese Academy of Engineering, on January 20, 2020. Ten days later (January 30), the WHO designated 2019-nCoV a public health emergency of international concern. On February 11, 2020, WHO Director-General Tedros Adhanom Ghebreyesus officially renamed the disease COVID-19 during an international meeting in Geneva. On March 11, 2020, Tedros announced a global outbreak of COVID-19. Mahase [1] reported in the BMJ that UK Prime Minister Boris Johnson warned citizens to avoid contact with other people. Imperial College London indicated that COVID-19 is particularly dangerous to older adults aged over 70 years and to pregnant women. As of 11:00 a.m. Beijing Time on March 16, the accumulated number of confirmed cases in Italy had reached 24 938, with 20 794 active cases, 1518 severe cases, 2335 fully recovered cases, and 1809 deaths. Italy (60.43 million people) has only a 20% larger total population than South Korea (51.63 million people). However, the number of confirmed cases in Italy is three times that of South Korea.
The number of deaths presents the greatest difference; that of Italy (1809) is 24 times that of South Korea (75). Overall, the COVID-19 mortality rate in Italy exceeds 7%, which is substantially higher than the global average mortality rate (3.7%) and is the highest rate worldwide. Italy implemented measures against the pandemic earlier than did other countries. The day after COVID-19 was designated a public health emergency of international concern by the WHO, the Italian government became the first among European Union countries to announce a national health emergency and ceased all flights to and from China. On February 22, 2020, Italy imposed lockdowns on multiple cities, and the military was deployed to assist with pandemic prevention 3 days later. Recently, Italy has deployed healthcare personnel across the country to contain the pandemic. Italy's severe situation and its worldwide leading mortality rate may be attributable to the fast spread of the virus and numerous citizens' unwillingness to reduce social activities despite warnings from the government. Meanwhile, an increasing number of countries have implemented strict pandemic prevention measures. After Italy, Spain imposed multiple strict measures: the central government announced a state of emergency for 14 days, during which all people were requested to remain indoors unless commuting to work or purchasing daily necessities. In addition to schools, which were closed previously, all shops and commercial areas unrelated to sustaining daily necessities were required to cease operation. On March 14, 2020, a lockdown was imposed on Madrid, and on that Saturday Spain became the second European country to impose a nationwide lockdown, ordering its 47 million people to mostly stay in their homes as coronavirus cases surged; the government also announced that the wife of the Spanish prime minister had tested positive for the virus [2].
Starting the next day, the police began to use unmanned drones for patrols and to ask crowds in parks to leave and remain at home. However, the pandemic in Spain has not subsided. The accumulated number of confirmed cases in Spain has reached 74 386; this number is higher than that of the five most affected Chinese provinces excluding Hubei combined (i.e., Guangdong, Henan, Zhejiang, Hunan, and Anhui), despite the total population of Spain being only 12% of the combined population of these five provinces. The incremental rate of confirmed cases in Spain is also greatly concerning. The accumulated number of confirmed cases was 3146 on March 13, 5232 on March 14, and 7753 on March 15, representing a daily incremental rate of 50%. In addition, the number of deaths in Spain reached 288, a mortality rate of 3.7%, which is the highest after that of Italy. On February 9, 2020, Fernando Simón, the Director of the Center for Coordination of Health Alerts and Emergencies in Spain, stated that the country would only have a few confirmed cases. However, a report by the center on March 6 warned that the number of deaths in nearby Italy was increasing rapidly and that the pandemic might spread to Spain. At that time, the number of confirmed cases in Spain was roughly 280, with four deaths. Two weeks later, the number of deaths in Spain exceeded 7000, three times that disclosed by Iran. The statistics reported by the Spanish government on March 30 indicated that 813 people had died of COVID-19 in the preceding 24 h, bringing the total number of deaths in Spain to 7340; during the same period the number of confirmed cases increased by 6398. Consequently, the number of accumulated confirmed cases reached 85 195 in Spain, the third highest number worldwide. The pandemic in Spain had appeared to be subsiding over the previous day.
However, a sudden increase in the number of confirmed cases and deaths forced the Spanish government to impose further lockdown measures: since March 30, all people have been required to remain at home for at least 2 weeks, except for those engaging in essential occupations. On March 11, 2020, the Chancellor of Germany, Angela Merkel, hosted a press conference jointly with Jens Spahn, the Federal Minister of Health, and Lothar Wieler, the President of the Robert Koch Institute. For the first time, Merkel publicly addressed her opinion of COVID-19 and stated that the public is currently not immune to this disease. Merkel warned that because no vaccine or medication has been developed for COVID-19, 60%-70% of the German population is expected to be infected at the current rate according to expert analysis. A few hours after the press conference, the WHO characterized COVID-19 as a pandemic, by which time the accumulated number of confirmed cases in Germany had reached 2000, with four deaths, ending the zero-death record Germany had held while the pandemic spread across Europe. In the past week, the number of confirmed cases in Germany has increased considerably, with daily increases of over 1000 cases. To contain the pandemic, the German government closed its borders with France, Switzerland, and Austria at 08:00 on March 16. The German railway also progressively reduced the number of domestic trains in service. Social activities in each German state are being gradually reduced: events with more than 50 participants must be canceled; schools, bars, and nightclubs have been closed; some companies and governmental units allow their employees to work from home; and restrictions have been imposed on foreign travel. In response to COVID-19, the German government has conducted research at the Robert Koch Institute since January 6, and the scale of the research continues to increase.
In addition to its comprehensive laboratory system and highly trained personnel, Germany began conducting screening tests early to identify patients in advance. After the contagion rate and number of patients are reduced, the national healthcare system can allocate more capacity to provide treatment for all patients instead of passively conducting screening tests after patients experience severe symptoms and require hospitalization. Germany currently has the lowest COVID-19 mortality rate among European countries with populations of more than 10 million people. According to 2017 Organization for Economic Cooperation and Development statistics, the number of ward beds per person in Germany was the highest among European countries, with an average of 8 beds available for every 1000 people. A crucial factor in containing pandemics is the density of intensive care units (ICUs). Patients with severe COVID-19 symptoms must receive treatment in ICU wards and generally require artificial ventilation. According to the German Federal Ministry of Health, Germany currently has 28 000 intensive care beds, 25 000 of which are equipped with artificial ventilation systems. One might assume that Germany's healthcare system possesses enough facilities to contain the pandemic for the total population; however, many of these beds are already occupied, so this assumption is valid only if the national healthcare system does not collapse. A critical factor in preventing this pandemic is reducing the viral contagion rate and not overwhelming the healthcare system, which can be extremely difficult. The number of intensive care beds in Italy is roughly 5000. If the pandemic continues or becomes more severe, Germany's large number of available beds will be a notable advantage in containing the epidemic. As of 16:00 on March 20, the accumulated number of confirmed cases in Germany was 19 356, with 52 deaths and a mortality rate of 0.26%. However, the situation in Germany is not optimistic.
Political and academic figures, including Minister Spahn, have warned that the pandemic has not peaked. Before the COVID-19 outbreak, the German healthcare system already faced labor shortages. Experts warned that when the real peak of the pandemic arrives, Germany may lack nurses despite having sufficient ventilators. Local health departments lack epidemiology surveyors; hence, whether Germany can continue to effectively track the contact history of cases and stop chains of infection during the pandemic peak remains questionable. Concerns have emerged regarding European countries with close interactions with Germany, including the United Kingdom and Switzerland. If these countries fail to contain the pandemic, Germany may face continuous streams of imported cases because the strict border controls currently imposed in Europe cannot continue indefinitely. In response to COVID-19, which is considered a threat never before seen in Germany, the German government has clearly stated that additional restrictions may be imposed on the daily life of citizens. Merkel also emphasized that the government is ready to take any necessary measures to help the country overcome the pandemic. As the country with the largest population, economic activity, and number of European Parliament seats in the European Union, Germany is a critical reference for the global community to assess the progression of COVID-19. Currently, the pandemic in China appears to have subsided considerably, while outbreaks continue in other countries, particularly European countries such as Italy, Spain, and Germany. On March 19, 2020, the number of newly confirmed cases in Italy surpassed 4000, and that in Spain and Germany each surpassed 2000. The COVID-19 outbreak substantially threatens the global health system and considerably affects the global economy. Decreases in European and US stock markets have led to notable economic losses.
Extracting information from the spatiotemporal data of the pandemic is increasingly crucial to helping countries determine next-stage pandemic prevention measures and plan resource allocation and aid among cities and countries. The remaining sections of this paper are structured as follows. Section 2 presents a literature review of articles recently published by relevant scholars. Section 3 details the model proposed in this study and the principles behind it. Section 4 discusses the analysis results, and Section 5 presents the conclusions.

Related works

COVID-19 has had a severe impact on economies, societies, and production worldwide. Nicola et al. [3] summarized the socio-economic effects of COVID-19 on individual aspects of the world economy and indicated the necessity of a broad socio-economic development plan. COVID-19 has prompted governments to impose drastic lockdown measures on the economy and society while balancing economic recovery with epidemic prevention and control. Ocampo et al. [4] applied the decision-making trial and evaluation laboratory (DEMATEL) method with intuitionistic fuzzy (IF) sets to the domain of public health and the emerging COVID-19 pandemic. With people warned to avoid all non-essential travel, COVID-19 has caused considerable losses to tourism in many countries. COVID-19 has also hit the stock market: Spain's stock market (IBEX 35) [5] suffered the most significant one-day drop in its history, declining by up to 14% at the close of trading. Some researchers have analyzed the opportunities and challenges in their research areas during the epidemic. Javaid et al. [6] found that Industry 4.0 could not only fulfill the requirements for custom face masks and gloves but also collect information from the healthcare system to control and treat COVID-19 patients properly. Pu et al. [7] discussed the impact of COVID-19 on agricultural production in China.
The research indicated that unreasonable restrictions would block the outflow channels of farm products, hinder necessary production inputs, destroy production cycles, and ultimately undermine production capacity. Singh et al. [8] researched the Internet of Things (IoT) and indicated that an IoT healthcare system is useful for proper monitoring of COVID-19 patients; the use of interconnected networks can help improve patient satisfaction and reduce hospital readmission rates. In sum, whether in the stock market or other sectors, everything has been affected by COVID-19. There are opportunities as well as challenges, and these are bound to have some impact on the economy. A large number of researchers have found that the prediction of COVID-19 cases has become increasingly critical. Nikolopoulos et al. [9] found that, due to travel restrictions and other measures in various countries, short-term real-time prediction of the epidemic has become an essential management and decision-making task. Accurately predicting the development of new cases can more effectively manage the resulting excess demand in the entire supply chain. Zeroual et al. [10] found that effective management of infected patients has become a challenging problem facing hospitals. Accurate short-term prediction of the numbers of new infections and recovered cases is essential to optimize available resources and prevent or slow the development of such diseases. Shastri et al. [11] believed that predicting COVID-19 cases can help reduce morbidity and mortality. The proposed COVID-19Net can forecast the next day's total confirmed cases and provide a reference for enterprises to develop production and response plans during the epidemic, helping them seize opportunities and meet challenges.
Commonly used COVID-19 prediction models are divided into two types: those using mathematical models and those using artificial intelligence (AI) black-box algorithms to analyze the progression and contagion of the pandemic. Li et al. [12] used a mathematical approach to construct a function model and describe the daily numbers of newly confirmed cases and deaths in Hubei. They used nonlinear regression to obtain essential parameter values, summarize the pandemic trend in Hubei, and predict that the pandemic would end in Hubei on March 10. Nishiura et al. [13] estimated the serial intervals of COVID-19 patients. Using a truncated dataset with days as the unit, they employed a double-interval Bayes likelihood function to estimate serial intervals. Under a 95% confidence interval, their results revealed a mean of 3.9 days between successive cases in a chain of transmission. Tang et al. [14] used a temporal dynamic model to simulate COVID-19 transmission in China. Markov chain Monte Carlo methods were used to fit the pandemic data of the country, and Geweke's diagnostic was used to assess model convergence. Their experiment suggested that 58%-76% of contagion sources must be isolated to contain the pandemic. Fan et al. [15] used data on the population distribution and flow of Wuhan City to predict COVID-19 cases imported from nearby cities and provinces. Their results revealed that the population of other cities or provinces was highly correlated with the daily number of confirmed cases in Wuhan. Roosa et al. [16] predicted pandemic progression using three mathematical models: a generalized logistic growth model, the Richards growth model, and a sub-epidemic wave model. Their results verified that the sub-epidemic model most accurately predicted the pandemic situation in Hubei on February 15. Zhong et al. [17] used the susceptible-infected-removed model and epidemic data combined with parameter estimation to estimate the accumulated number of confirmed cases in China.
They predicted that the COVID-19 outbreak in China would gradually subside between early May and late June. The Diamond Princess cruise ship is considered a major epidemic zone of COVID-19 and has received considerable attention from the global community. Nishiura [18] used the infection timeline of the Diamond Princess to back-calculate the incidence rate under a 95% confidence interval and revealed that implementing isolation measures on February 5 reduced the number of confirmed cases to 776. Currently, Internet and big data platforms are gradually improving to incorporate comprehensive functions. This indicates the feasibility of using AI to resolve disease-related problems [19]. For example, in response to the increasing need for resources during the severe acute respiratory syndrome (SARS) outbreak that began in China in 2002, Yu et al. [20] used a multitarget, multicycle mixed-integer programming model to determine the optimal allocation and provision of temporary facilities and resources. They used the Language for Interactive General Optimizer to optimize facility site selection and suggested that the siting of temporary medical facilities built in late February substantially affected subsequent pandemic prevention. McAleer [21] employed the Global Health Security Index to assess 195 countries worldwide and compared COVID-19 with SARS and Middle East respiratory syndrome. The results indicated that the global community could better contain COVID-19 than the other two diseases, with the index score increasing to 51.9. For deep learning, Metsky et al. [22] employed the CRISPR-Cas13 system to compare COVID-19 virus nucleic acid sequences with those of other representative viruses. They also tested various COVID-19 screening methods and identified ADAPT as the most accurate. Guo et al. [23] used a bi-path convolutional neural network (CNN) to construct a virus host prediction model.
They used the basic local alignment search tool to compare the gene sequences of two virus hosts and determined that the area-under-curve approach most accurately described human hosts. Yang et al. [24] combined a long short-term memory (LSTM) model and the Susceptible-Exposed-Infected-Resistant (SEIR) model and used the population flow data obtained near January 23, 2020 and COVID-19 epidemic parameters to predict the pandemic peak in China. Comparing the prediction curves derived using these two models with the curve plotted using actual data indicated that the number of infected patients would reach a peak of 4000 between February 4 and February 7. Anastassopoulou et al. [25] used the susceptible-infectious-recovered-dead model to examine the medical data of Hubei acquired between January 11 and February 10, 2020 and to predict mortality and recovery rates through 95% confidence intervals [26]. Their results showed that the accumulated number of confirmed cases could reach 45 000-180 000 on February 29, and the number of deaths could exceed 2700. Kucharski et al. [27] used a random transmission model to predict the number of confirmed cases in Wuhan and of cases imported from Wuhan. Their results indicated that viral transmission could subside in Wuhan in late January 2020; however, new outbreaks could occur in regions with transmission potential similar to that of Wuhan. Huang et al. [28] proposed using a deep CNN architecture to predict the cumulative number of confirmed cases in seven regions, including Hubei, Shenzhen, and Wenzhou. Comparison with MLP, LSTM, and GRU models showed that the CNN outperformed the other models in their experiment. Few scholars have used deep learning to predict pandemic trends because this approach requires large amounts of data.
To fully utilize available pandemic data, the time-variant characteristics of pandemic progression and the strong spatial correlation between neighboring countries and cities with high transmission potential must be considered. Accordingly, a parallel hybrid deep learning network was employed in this study to extract the spatiotemporal features of COVID-19 transmission and predict subsequent pandemic trends.

Methodology

Data were acquired from the daily situation reports disclosed by the WHO [29] and information on the GitHub website [30]. To ensure the integrity and rigor of the research procedures, we used data from January 22 to April 28, during which 98 days of data were collected and divided into 93 samples. The first 83 samples were used as the training set, and the remaining 10 samples were used as the testing set. Data accumulated over a 5-day period were used to predict the accumulated number of confirmed cases of the subsequent day. The data were from three European countries with severe situations: Germany, Italy, and Spain. The data were divided into two parts, as shown in Fig. 2.
• The first part consisted of six characteristic factors from each country: the daily numbers of newly confirmed cases, deaths, and recovered cases and the accumulated numbers of confirmed cases, deaths, and recovered cases.
• The second part consisted of three characteristic factors: the accumulated numbers of confirmed cases from the three countries.
Based on these factors, a new hybrid spatiotemporal architecture called COVID-19Net is employed to predict the total number of confirmed cases on the next day, as shown in step 2 of Fig. 2. The proposed architecture is mainly composed of a CNN and a bidirectional gated recurrent unit (BiGRU). The COVID-19Net flow chart is also shown below. Detailed explanations of this method are given in Sections 3 and 4. Fig. 3 illustrates the geographical locations of the three countries.
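The windowing and split described above (98 daily observations, 5-day windows predicting the next day, 83 training and 10 testing samples) can be sketched as follows. This is an illustrative reconstruction; the function and variable names are ours, not the paper's.

```python
def make_windows(series, window=5):
    """Turn a daily series into (5-day window, next-day target) samples.

    With 98 daily observations and a 5-day window, this yields
    98 - 5 = 93 samples: days 1-5 predict day 6, days 2-6 predict
    day 7, and so on.
    """
    X = [series[i:i + window] for i in range(len(series) - window)]
    y = [series[i + window] for i in range(len(series) - window)]
    return X, y

daily_cases = list(range(98))      # placeholder for 98 days of case counts
X, y = make_windows(daily_cases)   # 93 samples in total
X_train, y_train = X[:83], y[:83]  # first 83 samples for training
X_test, y_test = X[83:], y[83:]    # remaining 10 samples for testing
```

Keeping the chronological order of the split (rather than shuffling) matters here, since the test set must represent genuinely unseen future days.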
Many studies did not consider the spatial characteristics of the epidemic. For the first time, COVID-19Net extracts and integrates the temporal and spatial (Germany, Italy, Spain) characteristics and influencing factors of cumulative COVID-19 cases, and it performs well, as shown in Fig. 2. COVID-19Net makes full use of the feature extraction capabilities of CNNs to extract spatiotemporal features of COVID-19 cases and relevant features from each characteristic factor.

Convolutional neural network

CNNs are a typical class of deep neural networks. They are used widely in many fields, such as image classification [32], semantic segmentation [33], and time-series forecasting [34]. CNNs are divided into one-dimensional CNNs (1D-CNNs) and two-dimensional CNNs (2D-CNNs) according to the dimensionality of the application data. A 1D-CNN filter moves in one direction, whereas a 2D-CNN filter moves in two directions. Both 1D-CNNs and 2D-CNNs have a strong feature-learning ability. A CNN can be divided into layers to extract features from a dataset. Fig. 4 presents the structural diagram of a CNN. A CNN mainly comprises convolutional, pooling, and fully connected layers. Convolutional layers are the core structure, where most computations occur. When data pass through the kernel, features are extracted and sent to pooling layers, which ensure that features remain invariant. The resulting feature maps are then compressed to reduce computational load. Finally, fully connected layers transform all features into one-dimensional (1D) vectors and send them to the dense layer for outputs.

Bidirectional long short-term memory neural network and bidirectional gated recurrent unit

A bidirectional LSTM (BiLSTM) model [35] presents an improved framework compared with LSTM [36]. The neuron structure of BiLSTM is similar to that of LSTM, as illustrated in Fig. 5(a) and (b). An LSTM comprises an input gate, an output gate, and a forget gate.
The input gate is expressed using (1), in which current input x_t and previous information and memory (i.e., h_{t-1} and c_{t-1}) are selectively retained according to corresponding weights and are sent to the forget gate. The corresponding weight matrices are W_xi, W_hi, and W_ci, and b_i denotes the bias matrix of the input gate. The forget gate is expressed using (2) and (3). In (2), current input x_t and previous information and memory (i.e., h_{t-1} and c_{t-1}) are selectively forgotten according to weight matrices W_xf, W_hf, and W_cf, and b_f is the bias matrix of the forget gate. In (3), information i_t retained in the input gate is applied to the tanh function and added to the previously forgotten information f_t c_{t-1} to derive the target retained information c_t, where W_xc and W_hc are the weight matrices and b_c is the bias matrix of the cell state. The output gate is acquired by passing current input x_t, previous information h_{t-1}, and currently retained memory c_t through the sigmoid layer. Then, c_t is normalized using tanh and multiplied by the output of the sigmoid layer to derive the output information h_t. In (4) and (5), W_xo, W_ho, and W_co are the weight matrices of the output gate, and b_o is the bias matrix of the output gate. A BiLSTM model is an LSTM model with an additional backward (reverse-time) learning pass, as presented in Fig. 5(b). A gated recurrent unit (GRU) [37] alleviates the vanishing gradient problem in a recurrent neural network and uses an update gate and a reset gate, as displayed in Fig. 6(a). These two gates control output information and retain previous information, and they require fewer parameters than does an LSTM model. Accordingly, a GRU-based neural network is more efficient than an LSTM model. The update gate controls how much previously passed information is kept; in (6), W^(z) and U^(z) are the weight matrices of the update gate. The reset gate selectively forgets previous information h_{t-1}.
The computation of (7) is identical to that of (6), where W^(r) and U^(r) are the weight matrices of the reset gate. The reset gate output r_t is multiplied elementwise with Uh_{t-1}, as expressed in (8). A BiGRU [38] is similar to a BiLSTM [39] model in that a backward (reverse-time) pass is introduced on the basis of the GRU, as shown in Fig. 6(b).

Proposed method

Step 1: Required data are selected from the aforementioned data sources to perform correlation analysis and identify highly correlated data.
Step 2: All data are divided into testing and training sets according to the data collection time.
Step 3: Data are refactored into a matrix with six factors and five time steps, named Input 1, and into a matrix with three regions and five time steps, named Input 2.
Step 4: Because Input 1 is mainly used to extract temporal features, a 1D-CNN and BiGRU-based parallel deep learning network is employed. Because Input 2 is mainly used to extract spatiotemporal features and each country has a specific geographical location and order, a two-dimensional (2D) CNN is employed. The training set refines the models.
Step 5: The testing set is used to test the models and calculate the mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE).
The COVID-19Net algorithm proposed in this study connects a 1D-CNN [40], a 2D-CNN [41], and BiGRUs in parallel to form a hybrid deep learning network. Fig. 8 illustrates this framework. COVID-19Net processes spatiotemporal and temporal data separately. Because Input 1 contained data from each country regarding the daily numbers of newly confirmed cases, deaths, and recovered cases and the accumulated numbers of these cases over the past 5 days, temporal features related to the accumulated number of confirmed cases were extracted from these factors.
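Since the BiGRU used here builds on the gating in Eqs. (6)-(8) above, a scalar (one-unit) GRU step can make the mechanism concrete. This is a didactic sketch only: the weights are made up, and the final interpolation convention is our assumption, since conventions differ slightly across references.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x_t, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    """One scalar GRU step with an update gate z_t and a reset gate r_t."""
    z_t = sigmoid(W_z * x_t + U_z * h_prev)               # update gate, cf. Eq. (6)
    r_t = sigmoid(W_r * x_t + U_r * h_prev)               # reset gate, cf. Eq. (7)
    h_cand = math.tanh(W_h * x_t + r_t * (U_h * h_prev))  # reset applied to h_{t-1}, cf. Eq. (8)
    # Interpolate between the old state and the candidate (one common convention).
    return (1.0 - z_t) * h_prev + z_t * h_cand

# Run a toy sequence through the cell with made-up weights.
h = 0.0
for x in [0.5, 1.0, -0.3]:
    h = gru_step(x, h, 0.8, 0.4, 0.6, 0.3, 1.0, 0.5)
```

Because the update gate interpolates rather than overwrites, the hidden state can carry information across many steps, which is why the GRU needs fewer parameters than the LSTM's three-gate design.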
Given the outstanding performance of BiGRUs in learning from time-series data, a 1D-CNN is combined with a BiGRU to extract temporal features from Input 1, as shown in the left branch of Fig. 7. The highly contagious nature of COVID-19 and frequent population flows among European countries may result in a strong spatial correlation between their pandemic trends. Therefore, extracting spatial features considerably increased the accuracy of the prediction model. This indicated that features related to changes in the accumulated number of confirmed cases in each country are highly crucial. In addition to the parallel combination of the 1D-CNN and BiGRUs used for extracting temporal features from Input 1, a 2D-CNN was constructed to extract the spatiotemporal features of the three countries from Input 2, as shown in the right branch of Fig. 7. The four convolutional layers of the 1D-CNN had 16, 32, 32, and 64 kernels, respectively, each with a kernel size of 3; the size of the corresponding maximum pooling layer was 2. In the 2D-CNN model, the two convolutional layers had 32 and 64 convolution kernels, respectively, with kernel sizes of 3 × 3 and 4 × 1. Because the COVID-19 data employed in this study were limited, a dropout layer was used to prevent overfitting. The COVID-19Net algorithm proposed in this study combines CNNs and BiGRUs in parallel to concurrently process temporal and spatiotemporal features, which provides an alternative for researching data with both temporal and spatiotemporal correlation. In this experiment, COVID-19Net was coded using the TensorFlow backend and trained using 16 GB of memory with an Intel i7-9700K CPU and a single NVIDIA RTX 2080 GPU. The experimental environment is the 'keras' module in Python 3.7 (Table 1). The model has about 65 433 trainable parameters; training ran for 100 epochs and took about 100 s per run.
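The two-branch layout described above might be sketched in Keras roughly as follows. Layer counts and kernel sizes follow the text; the padding mode, activations, GRU width, and dropout rate are our assumptions where the paper does not specify them, so this is not the authors' exact model.

```python
# Assumes TensorFlow 2.x with the Keras functional API.
from tensorflow.keras import layers, Model

# Left branch: Input 1 is 5 days x 6 factors -> 1D-CNN + BiGRU (temporal).
inp1 = layers.Input(shape=(5, 6), name="input1")
x = layers.Conv1D(16, 3, padding="same", activation="relu")(inp1)
x = layers.Conv1D(32, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling1D(2)(x)
x = layers.Conv1D(32, 3, padding="same", activation="relu")(x)
x = layers.Conv1D(64, 3, padding="same", activation="relu")(x)
x = layers.Bidirectional(layers.GRU(32))(x)

# Right branch: Input 2 is 5 days x 3 countries -> 2D-CNN (spatiotemporal).
inp2 = layers.Input(shape=(5, 3, 1), name="input2")
y = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(inp2)
y = layers.Conv2D(64, (4, 1), padding="same", activation="relu")(y)
y = layers.Flatten()(y)

# Merge the branches and regress the next-day accumulated case count.
z = layers.Concatenate()([x, y])
z = layers.Dropout(0.2)(z)  # guard against overfitting on the small dataset
out = layers.Dense(1)(z)
model = Model([inp1, inp2], out)
```

Keeping the two branches parallel rather than stacked lets the temporal and spatiotemporal features be learned independently before the final fusion, which mirrors the paper's motivation for the design.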
The hyperparameters were selected by grid search for each model, as shown in Appendix Table 5.

Experiments and discussion

The MAE, MAPE, and RMSE equations used in this study are expressed as follows:
MAE = (1/n) Σ |y_i − ŷ_i|
MAPE = (100%/n) Σ |(y_i − ŷ_i)/y_i|
RMSE = sqrt((1/n) Σ (y_i − ŷ_i)²)
The recovered cases, confirmed cases, and deaths indirectly reflect the local medical load and the prevention and control efforts, and they have a significant impact on the total confirmed cases. Hence, experimental data were collected from Germany, Italy, and Spain regarding six factors: the daily numbers of newly confirmed cases, deaths, and recovered cases and the accumulated numbers of confirmed cases, deaths, and recovered cases. Fig. 9 presents the correlation heatmaps of these factors. The results revealed that the accumulated number of confirmed cases was strongly correlated with the other factors in Italy, with all correlation coefficients greater than 0.62 (Fig. 9(b)). Fig. 9(a) and (c) display the correlation heatmaps of Germany and Spain, respectively. The pandemic situations in these countries were less severe than that of Italy; therefore, the correlation between accumulated confirmed cases and daily recovered cases was lower, with correlation coefficients of 0.78 and 0.65, respectively. However, the correlations between the remaining factors remained high, with correlation coefficients exceeding 0.54. The results verify the necessity of using these factors as one input of the COVID-19Net parallel network. They also suggest that these factors can be analyzed to evaluate whether a government has done enough to prevent and control the disease. A separate correlation heatmap was created for the accumulated numbers of confirmed cases in the three countries (Fig. 10). The results indicated correlation coefficients greater than 0.99. This striking correlation shows that it is necessary to extract the spatial characteristics of the epidemic, which is the reason for using these data as the other input to the COVID-19Net parallel network.
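The three error measures named above can be computed directly from the standard definitions; here is a small self-contained sketch in which the case values are made up purely for illustration.

```python
import math

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error; assumes no actual value is zero."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean square error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Toy accumulated-case values (made up for illustration only).
actual = [100.0, 200.0, 400.0]
predicted = [110.0, 190.0, 380.0]
print(mae(actual, predicted))   # 13.33...
print(mape(actual, predicted))  # 6.66...
print(rmse(actual, predicted))  # 14.14...
```

Because accumulated case counts differ by orders of magnitude between countries, the scale-free MAPE is the most comparable of the three across Germany, Italy, and Spain, while RMSE penalizes large single-day misses most heavily.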
The geographical proximity of these countries and their frequent interaction lead to faster disease transmission. Hence, if the epidemic is to be contained, travel restrictions should be applied and social distance should be kept. This study used a CNN, a GRU, a CNN-GRU, and COVID-19Net to predict the accumulated numbers of confirmed cases in Germany, Italy, and Spain (Fig. 11). The results showed that the predictions produced by CNN-GRU were the least accurate and could not reflect any features or patterns. GRU showed an unstable growth trend; therefore, it could not be used as a reliable reference. The prediction results produced by COVID-19Net were better than those of CNN-GRU and GRU because the three countries have obvious spatial characteristics in geographical location, and it is unreasonable to consider only the time factor. Overall, the predictions produced by the proposed COVID-19Net model were more accurate than those of the other models, and more precise predictions were realized. These can provide a reference for governments to arrange resources such as hospital beds and masks, and for enterprises to make production and response plans during the epidemic. Tables 2-4 list the MAE, MAPE, and RMSE values for the three countries obtained from the various models. Comparing these values verified that the proposed model was considerably more accurate than the other models, with MAPE values below 3 for all three countries. This indicates that COVID-19Net can accurately predict the accumulated number of confirmed cases in each country. It also highlights the importance of spatial characteristics in prediction. Fig. 12 presents the box plots of test-set forecasting errors for the COVID-19 prediction models of (a) Germany, (b) Italy, and (c) Spain. Judging from the level of each box plot, the average error of COVID-19Net is small.
Meanwhile, the range between the upper and lower quartiles indicates that the error range of COVID-19Net is the smallest; that is, the COVID-19Net model has a narrow fluctuation range and is more stable. As seen from the median line Q2, the performance of the GRU and CNN-GRU models in this experiment is similar; still, after comparing the boundaries of the box plots, the GRU is more accurate than the CNN-GRU hybrid architecture. This shows that the proposed network architecture is sufficient to meet the related computing requirements and performs better than deeper network architectures such as CNN-GRU. A residual test was also applied to COVID-19Net; the results for the three countries in Fig. 12 are 0.378, 0.582, and 0.326, respectively. These indicate that COVID-19Net is qualified to predict the accumulated number of confirmed cases. Fig. 13 presents the bar charts of the MAE, MAPE, and RMSE values generated by COVID-19Net, CNN, GRU, and CNN-GRU for Germany, Italy, and Spain. Fig. 13(a), (b), and (c) correspond to Tables 2-4, respectively. The results indicated that the CNN-GRU model was the least accurate, and the GRU and CNN exhibited less favorable performance than did COVID-19Net. In descending order of prediction accuracy, the models were COVID-19Net, CNN, GRU, and CNN-GRU, which verified that the proposed algorithm was the most accurate. Above all, it is necessary to analyze the six factors to evaluate whether a government has done enough to prevent and control the disease. If the epidemic is to be contained, travel restrictions should be applied and social distance should be kept. It is also necessary to extract the spatial characteristics of the epidemic to improve the accuracy of prediction. COVID-19 poses considerable challenges to healthcare systems worldwide.
Using the proposed algorithm, we predicted that the demand for ward beds and ICU beds in Germany, Italy, and Spain would increase substantially, particularly before the pandemic peaks. If the problems of lacking healthcare resources and maintaining social distancing cannot be resolved, demand will increase further, and COVID-19 may overwhelm the capacity of hospitals, particularly of ICU nurses. The predicted values produced in this study can help countries develop and implement disease prevention measures and reduce gaps between the strategies employed by countries, including reducing services unrelated to COVID-19 prevention and temporarily increasing the capacity of the healthcare system. Based on the estimation results of Zhang et al. [26], it can be concluded that COVID-19 will stabilize after the beginning of June (about four weeks), during which healthcare resources will be in heavy demand. However, this demand will also depend on the social distancing measures and other measures already imposed by each country. During this pandemic, relevant disease prevention measures must be maintained, and the importance of these measures must be highlighted to reduce the deaths of civilians and healthcare personnel. COVID-19 also has a severe impact on economies, societies, and production worldwide. COVID-19Net can provide a reference for enterprises to develop production and response plans during the epidemic, helping them seize opportunities and face challenges.

Conclusions

The likelihood of flattening the epidemic curve, as discussed in Western media, is overly optimistic because this entails no increase in COVID-19 cases. Currently, only China claims to have achieved this, after implementing 2 months of lockdowns and strict measures. In this study, COVID-19Net is proposed to predict the accumulated numbers of confirmed cases in Germany, Italy, and Spain, which are heavily affected by the pandemic.
The accumulated numbers of confirmed cases, deaths, and recovered cases and the daily numbers of newly confirmed cases, deaths, and recovered cases over the past 5 days were used to predict the accumulated number of confirmed cases on the next day. By comparing the prediction results and evaluation indicators of COVID-19Net, CNN, GRU, and CNN-GRU, we verified that COVID-19Net was the most accurate; its MAPE values for the three countries were 1.447, 1.801, and 2.828, respectively. The comparison also shows that models considering only time-series factors could not extract adequate features from the data of these countries, and the hybrid CNN-GRU model likewise performed very poorly. Although the accuracy of the CNN was slightly higher than that of the GRU and CNN-GRU, it was still unable to predict the cumulative number of confirmed cases in these three countries. COVID-19Net was verified to be more accurate than the other three models because COVID-19 trend data contain spatiotemporal features that can be extracted by the deep neural network of COVID-19Net. The results of this study can not only serve as an essential reference for devising public health strategies against COVID-19 and for improving the allocation of hospital resources, but also provide a reference for enterprises to develop production plans and response plans during the epidemic, helping them seize opportunities and meet challenges. The experimental results indicate that it is necessary to analyze the six factors to evaluate whether a government has done enough to prevent and control the disease. To control the epidemic effectively, people should stay home in isolation, comply with travel restrictions, and keep their social distance. Some researchers consider that temperature will affect the spread of COVID-19 [42]. The coronavirus has also acquired some mutations as the outbreak has developed. Prevention and control efforts will continue to strengthen.
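The input encoding described above (six daily factors over the past 5 days used to predict the next day's accumulated confirmed cases) amounts to a sliding-window transform of the time series. The function below is an illustrative sketch of that windowing, not the paper's exact preprocessing; the field ordering is an assumption:

```python
def make_windows(daily_features, window=5):
    """Build (input, target) pairs from a list of daily feature vectors.

    daily_features: one six-element vector per day, assumed here to be
    [accum_confirmed, accum_deaths, accum_recovered,
     new_confirmed, new_deaths, new_recovered].
    The target is the accumulated confirmed count (index 0) on the next day.
    """
    inputs, targets = [], []
    for i in range(window, len(daily_features)):
        inputs.append(daily_features[i - window:i])  # the past `window` days
        targets.append(daily_features[i][0])         # next day's accumulated cases
    return inputs, targets

# 7 days of toy data -> 2 training samples
days = [[float(d), 0, 0, 0, 0, 0] for d in range(7)]
X, y = make_windows(days)
```

Each training sample is therefore a 5 x 6 matrix of recent history paired with a single scalar target, which is the shape a convolutional or recurrent predictor would consume.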
As the outbreak develops, the explosive spatial spread will decline relatively, so the spatial correlation will also decrease to a certain extent; these changes will affect the learned patterns. Future work will investigate these aspects.
An ancient and conserved function for Armadillo‐related proteins in the control of spore and seed germination by abscisic acid Summary Armadillo‐related proteins regulate development throughout eukaryotic kingdoms. In the flowering plant Arabidopsis thaliana, Armadillo‐related ARABIDILLO proteins promote multicellular root branching. ARABIDILLO homologues exist throughout land plants, including early‐diverging species lacking true roots, suggesting that early‐evolving ARABIDILLOs had additional biological roles. Here we investigated, using molecular genetics, the conservation and diversification of ARABIDILLO protein function in plants separated by c. 450 million years of evolution. We demonstrate that ARABIDILLO homologues in the moss Physcomitrella patens regulate a previously undiscovered inhibitory effect of abscisic acid (ABA) on spore germination. Furthermore, we show that A. thaliana ARABIDILLOs function similarly during seed germination. Early‐diverging ARABIDILLO homologues from both P. patens and the lycophyte Selaginella moellendorffii can substitute for ARABIDILLO function during A. thaliana root development and seed germination. We conclude that (1) ABA was co‐opted early in plant evolution to regulate functionally analogous processes in spore‐ and seed‐producing plants and (2) plant ARABIDILLO germination functions were co‐opted early into both gametophyte and sporophyte, with a specific rooting function evolving later in the land plant lineage. Introduction Plant lifecycles undergo an alternation of generations between a haploid gametophyte and a diploid sporophyte phase (Hofmeister, 1851). In the earliest-diverging land plants, the bryophytes, the gametophyte generation is dominant in the lifecycle, with the sporophyte being a relatively transient structure that remains attached to and largely dependent on the gametophyte (Glime, 2013). 
By contrast, in extant vascular plants, the sporophyte has assumed the dominant role, the most extreme example being in seed plants (spermatophytes), where the gametophyte is reduced to a few cells that are fully surrounded by sporophyte tissues (Evert & Eichhorn, 2012). Despite the different origins of the plant body in bryophytes and flowering plants, both lineages possess rooting and shooting structures, as well as dispersal structures (spores or seeds) that allow transition from one generation to the next and enable species propagation and distribution (Pires & Dolan, 2012). The evolution of rooting systems was a key innovation enabling plants to be sessile, allowing absorption of nutrients and water, anchorage of the plant to its substrate, and responses to internal and external signals. Bryophytes possess simple hair-like rooting structures called rhizoids (Jones & Dolan, 2012). Work in model bryophytes, the moss Physcomitrella patens (Prigge & Bezanilla, 2010) and liverwort Marchantia polymorpha (Shimamura, 2015), has demonstrated that rhizoid development has mechanistic similarity with the development of epidermal root hairs on the multicellular roots of the flowering plant Arabidopsis thaliana (Menand et al., 2007; Proust et al., 2016). This suggests that nonhomologous, but functionally similar, epidermal structures (i.e. tip-growing cells with a rooting function) are regulated by genes that were co-opted into both gametophyte and sporophyte (Menand et al., 2007; Proust et al., 2016). However, it is likely that a 'rewiring' of the root hair/rhizoid gene regulatory network occurred between the bryophyte gametophyte and the flowering plant sporophyte (Yi et al., 2010; Pires et al., 2013; Tam et al., 2015). One example of an 'intrinsic' regulator of root branching in A. thaliana is the ARABIDILLO protein family, which shares structural similarity with the key animal Armadillo/β-catenin developmental regulators (Coates, 2003).
We have previously demonstrated a role for A. thaliana ARABIDILLO proteins in promoting LR formation (Coates et al., 2006). ARABIDILLO proteins are unstable, being turned over by the proteasome (Nibau et al., 2011), and their sequences are highly conserved across land plants and charophyte algae, including species that lack LRs, namely P. patens, Selaginella moellendorffii and Klebsormidium flaccidum (Nibau et al., 2011; Moody et al., 2012; Hori et al., 2014). By contrast, homologues of the ARABIDILLO-interacting transcription factor AtMYB93, which is part of a negative feedback loop that inhibits LR development, are not found outside flowering plants (Gibbs & Coates, 2014). These data suggest that ARABIDILLO proteins had additional, early-evolving functions in plants. Here, we addressed this possibility by examining the function of ARABIDILLO homologues (PHYSCODILLO genes) in P. patens. Physcomitrella patens has three PHYSCODILLO genes that probably have redundant functions (Moody et al., 2012). We define novel functions for PHYSCODILLO proteins in regulating spore germination in response to abscisic acid (ABA), an ancient hormone found across eukaryotes (Hanada et al., 2011; Takezawa et al., 2011). Furthermore, we show that A. thaliana ARABIDILLO proteins perform the analogous function in seeds. Our data suggest that ARABIDILLO homologues were co-opted into both the sporophyte and gametophyte very early in land plant evolution to regulate germination processes via a network involving ABA, and that early-diverging ARABIDILLO homologues already had the potential to regulate multicellular root development, a later-evolving function of this protein family requiring interaction with flowering plant-specific proteins. Moss growth and culture Physcomitrella patens (Hedw.) B.S.G. ssp patens, ecotype 'Gransden 2004' was obtained from Dr Andrew Cuming (University of Leeds, Leeds, UK). The physcodillo2 deletion mutant has been described previously (Moody et al., 2012).
Protonemata, gametophores and germinating spores were cultured as in Moody et al. (2012). The spore germination medium was additionally supplemented with 10 mM CaCl2 and 5 mM ammonium tartrate. The physcodillo2 deletion mutant was maintained on BCD medium additionally supplemented with 20 µg ml⁻¹ hygromycin B. The physcodillo1a/1b/2 triple deletion mutant was maintained on BCD medium additionally supplemented with 20 µg ml⁻¹ hygromycin B and 50 µg ml⁻¹ G418. Arabidopsis thaliana growth and culture For in vitro culture, Arabidopsis thaliana (L.) Heynh. seeds were sterilized in 20% Parozone bleach (Jeyes, Thetford, UK) for 15 min and then washed three times in sterile water. Seedlings were grown in long-day conditions on 0.5× Murashige and Skoog (MS) medium, 1% agar, pH 5.7, ± 50 µg ml⁻¹ kanamycin. Mature A. thaliana plants were grown in Levington M3 compost/vermiculite (Levington Horticulture, Ipswich, UK) in the glasshouse at 22°C under long days before harvesting of mature siliques. Physcomitrella patens transformation Transformation was carried out as in Moody et al. (2012). Stable transformants were selected using 20 µg ml⁻¹ hygromycin B and 50 µg ml⁻¹ G418. Preparation of DNA and RNA Physcomitrella patens genomic DNA was prepared as in Moody et al. (2012) and used directly in PCR reactions. Transforming plasmid DNA was prepared using the Qiagen Plasmid Midi Kit according to the manufacturer's instructions. Physcomitrella patens, Selaginella moellendorffii (Hieron) and A. thaliana RNA was prepared using the RNeasy plant mini-prep kit (Qiagen). RNA was treated with TURBO™ DNase (Life Technologies, Waltham, MA, USA) and then converted to cDNA using Superscript™ II reverse transcriptase and Oligo dT (Invitrogen).
Generation of PHYSCODILLO-GFP transgenic plants For stable expression, pHSP::GFP, pHSP::PHYSCODILLO1-GFP and pHSP::PHYSCODILLO2-GFP were transformed into wild-type protoplasts and successful integration confirmed following two rounds of G418 selection. Full details of construct generation are given in Methods S1. Protein expression analysis For protein gel analysis, P. patens protonemal tissues expressing pHSP::GFP, pHSP::PHYSCODILLO1-GFP and pHSP::PHYSCODILLO2-GFP were harvested into liquid BCD protonemal medium and incubated at 22 (control), 34 or 38°C for 1 h before returning to room temperature. For MG132 experiments, tissue in liquid BCD was pretreated with 50 µM MG132 for 1 h before heat shock. Samples were collected at different times postinduction and flash-frozen in liquid nitrogen before extraction. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and western blotting were carried out using standard procedures as in Nibau et al. (2011). For microscopy analysis, tissues were grown on BCD medium under standard growth conditions, and protein expression was induced for 1 h at 38°C before returning to 22°C for different lengths of time. For MG132 experiments, tissue in liquid BCD was pretreated with 50 µM MG132 for 1 h before heat shock. Confocal images were captured using a Leica SP2 inverted confocal microscope. Other images were captured using a Nikon SMZ1000 dissecting microscope and NIS-ELEMENTS software (Nikon, Tokyo, Japan). Construction of physcodillo1a/1b/2 triple deletion mutants and screening procedure The PHYSCODILLO1A/1B double deletion construct was generated by cloning 5′ and 3′ homologous flanking sequences from P. patens genomic DNA and inserting them into the pMBL10a vector either side of a G418 resistance cassette (see Fig. S3a later). The resulting construct was transformed into physcodillo2 mutant protoplasts (Moody et al., 2012). Two rounds of G418 selection were carried out to identify putative transformants.
To verify the presence of a G418 resistance cassette within the PHYSCODILLO1A/1B locus and confirm the generation of physcodillo1a/1b/2 triple deletion mutants, PCR was carried out using GoTaq DNA Polymerase (Promega). 5′ integration was confirmed using the primers P1 + 3KO5′F and G418.R.319 and 3′ integration was confirmed using the primers P1 + 3KO3′R and G418.F.341. PCR products were sequenced (see Fig. S3 later) and RT-PCR was carried out to confirm loss of PHYSCODILLO1 mRNA expression. Primer sequences are detailed in Table S1. Physcomitrella patens spore germination assays and protonemal area measurement Sporangia were sterilized in 20% Parozone bleach for 15 min at room temperature and then washed four times in sterile water. Spores were released from sporangia by perforating them using a sterile pipette tip in a final volume of 1 ml of sterile water. Spore suspension (200 µl) was spread onto each of five Petri dishes containing cellophane-overlaid spore regeneration medium. Spores were allowed to germinate at 22°C in long days (16 h : 8 h, light : dark). The percentage of germinating spores was calculated at regular intervals until all of the control spores had fully germinated. For each data point, > 200 spores were counted on each of three plates and the mean percentage germination ± SE of the mean was calculated. Each experiment was repeated multiple times. Plants were photographed on a Nikon SMZ1000 dissecting microscope, and the protonemal area was imaged and measured using Nikon's NIS-ELEMENTS software package. Root assays LR assays were carried out as in Coates et al. (2006). Seedling LR density was defined as the number of emerged LRs cm⁻¹ of primary root. A minimum of 50 roots were assayed for each genotype in a single experiment and each experiment was repeated multiple times.
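The per-plate germination percentages and their mean ± SE described above amount to a simple computation. A minimal sketch follows; the plate counts are invented for illustration:

```python
import math

def germination_stats(plate_counts):
    """plate_counts: list of (germinated, counted) pairs, one per plate.

    Returns (mean percentage germination, standard error of the mean),
    using the sample standard deviation across plates.
    """
    pcts = [100.0 * g / n for g, n in plate_counts]
    mean = sum(pcts) / len(pcts)
    sd = math.sqrt(sum((p - mean) ** 2 for p in pcts) / (len(pcts) - 1))
    return mean, sd / math.sqrt(len(pcts))

# three plates, > 200 spores counted on each (illustrative numbers)
mean_pct, se = germination_stats([(150, 250), (160, 250), (155, 250)])
```

The SE across plates, rather than across individual spores, is what the error bars in the germination figures summarize.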
Arabidopsis thaliana seed germination assays Freshly harvested seeds from wild-type, mutant and transgenic plants were surface-sterilized in 5% (v/v) bleach for 5 min then washed with sterile water before plating (three to four replicates; n = 50) onto water agarose (1%) supplemented with relevant concentrations of ABA (Sigma). After 4 d of chilling, seeds were incubated at 22°C under continuous light for 7 d, and germination was assessed as endosperm rupture by the radicle. Each assay was repeated multiple times. Results Arabidopsis thaliana ARABIDILLO proteins localize to the nucleus, where they exert their control of LR development through physically interacting with flowering plant-specific MYB transcription factors (Coates et al., 2006; Nibau et al., 2011). However, ARABIDILLO homologues with a high degree of protein identity to one another also exist in early-diverging land plants, which lack both LRs and relevant MYB homologues (Nibau et al., 2011; Moody et al., 2012; Fig. 1a). To analyse the behaviour of ARABIDILLO proteins in an early-diverging land plant, we examined the localization of ARABIDILLO homologues in transiently transformed P. patens protoplasts using a series of fusion proteins with green fluorescent protein (GFP) directly fused in-frame to the C-terminus of ARABIDILLO homologues, driven from constitutive promoters (Fig. 1a). ARABIDILLO1-GFP, PHYSCODILLO1A/1B-GFP (which are identical to one another (Moody et al., 2012) and subsequently are collectively referred to as PHYSCODILLO1), PHYSCODILLO2-GFP and SELAGIDILLO-GFP (the S. moellendorffii ARABIDILLO homologue; Moody et al., 2012) all show considerable sequence identity across the entire length of the protein (Fig. 1a). All the proteins localize to the nucleus (Fig. 1b): this was confirmed using a red fluorescent protein marker tagged with a nuclear localization signal (NLS-RFP; Fig. 1c), which colocalized with the PHYSCODILLO-GFP signal.
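Pairwise sequence identity between aligned homologues, as invoked above, is conventionally the fraction of matching residues over aligned (non-gap) positions. A minimal sketch with toy sequences (not the actual ARABIDILLO alignments) follows:

```python
def percent_identity(seq_a, seq_b):
    """Percent identity over columns of a pairwise alignment in which
    neither sequence carries a gap ('-')."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != '-' and b != '-']
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

# toy aligned fragments (illustrative only): 4 of 5 ungapped columns match
ident = percent_identity("MKV-LA", "MKVQLG")
```

Note that different tools vary in whether gap columns count toward the denominator; the convention above excludes them.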
To further examine the spatial and temporal localization and behaviour of PHYSCODILLO proteins, we attempted to generate stable transgenic lines expressing GFP-tagged PHYSCODILLO proteins under the control of the constitutive maize (Zea mays) Ubiquitin-1 promoter (pUBI). However, we found that overexpression of PHYSCODILLO-GFP in protoplasts was toxic, inhibiting the outgrowth of protonemal filaments and preventing protoplast regeneration, so no transformants could be recovered (Fig. 2). This is in accordance with the data of Nibau et al. (2011), showing the possible toxicity of overexpressed ARABIDILLO protein in A. thaliana. To circumvent this problem, we generated stable transgenic P. patens lines where inducible expression of PHYSCODILLO1 or -2 fused to GFP was driven from the soybean (Glycine max) heat-shock promoter (pHSP; Saidi et al., 2005) and induced at different stages of P. patens development. In both filamentous and leafy tissue, PHYSCODILLO-GFP is detected in the nucleus, while GFP alone localizes to both nucleus and cytosol (Fig. 1d). We determined that good induction of PHYSCODILLO-GFP expression occurs after just 1 h at 38°C (Figs 1e, S1). ARABIDILLOs are unstable proteins that are turned over by the proteasome (Nibau et al., 2011). Using the inducible PHYSCODILLO lines, we were able to show by confocal microscopy and western blotting that PHYSCODILLO proteins are also turned over by the proteasome (Figs 1f-h, S1), as, like ARABIDILLOs, they are stabilized by the proteasome inhibitor MG132 (Nibau et al., 2011). This fits with the previous observation that the key regions required for ARABIDILLO/PHYSCODILLO instability, namely the F-box and leucine-rich repeat regions, are highly conserved (Nibau et al., 2011). This demonstrates similar protein characteristics for PHYSCODILLOs and ARABIDILLOs despite c. 420 million yr of evolutionary divergence (Hedges et al., 2006).
To further investigate PHYSCODILLO behaviour and function, we investigated whether P. patens and S. moellendorffii ARABIDILLO homologues were able to substitute for A. thaliana ARABIDILLO function during LR formation. We generated stable transgenic A. thaliana lines expressing either PHYSCODILLO1 or SELAGIDILLO1 driven by the cauliflower mosaic virus (CaMV) 35S promoter, and found that both bryophyte and lycophyte ARABIDILLO homologues were able to complement the reduced LR phenotype of the arabidillo1/2 mutant (Fig. 3a,b), despite the facts that P. patens has no multicellular rooting structures, and that S. moellendorffii has tip-bifurcating roots (Banks, 2009; Jones & Dolan, 2012). Thus, early-diverging ARABIDILLO homologues had the capacity to affect LR development before the evolution of true, branched rooting structures occurred, suggesting that they were functionally co-opted into pathways regulating novel structures that arose during the evolution of flowering plants. We previously showed that a single PHYSCODILLO2 knockout in moss has no obvious phenotype, suggesting that the homologues function redundantly (Moody et al., 2012), similarly to what is observed in A. thaliana (Coates et al., 2006). To investigate the function(s) of PHYSCODILLO proteins in P. patens, we therefore generated two independent triple physcodillo1A/1B/2 loss-of-function mutants by targeted gene replacement, deleting the entire 23-kb PHYSCODILLO1A/1B locus in a physcodillo2 mutant background (Moody et al., 2012; Figs S2a,b, S3). The replaced locus was confirmed by sequencing across the insertion site (Fig. S3). Using RT-PCR we confirmed the absence of PHYSCODILLO mRNAs in these knockout lines (Fig. S2c). Similarly to the physcodillo2 mutants (Moody et al., 2012), physcodillo1A/1B/2 mutant plants are overall morphologically similar to wild-type, producing chloronema, caulonema, gametophores with rhizoids (Fig. S4a,b) and sporophytes in a similar time frame.
However, we noticed that the spores of physcodillo1A/1B/2 mutants germinated more slowly than those of wild-type, suggesting a potential role for PHYSCODILLOs in regulating this process (Fig. 4a). In seed plants, ABA is a key inhibitor of germination and promotes seed dormancy (Finkelstein et al., 2008; Holdsworth et al., 2008). Some evidence also implicates ABA in the regulation of fern spore germination (Singh et al., 1990; Yao et al., 2008). However, neither the process of P. patens spore germination nor the effects of ABA on spores have previously been studied in detail (Glime, 2015).

Fig. 3 PHYSCODILLO and SELAGIDILLO can both rescue the Arabidopsis thaliana arabidillo1/2 mutant lateral root phenotype. (a) Mean lateral root density measured 8 d after germination for wild-type (black bar), arabidillo1/2 mutant (white bar) and two independent arabidillo1/2 mutant lines constitutively expressing PHYSCODILLO1-GFP from the 35S promoter (Rescue 1 and Rescue 2, grey bars). One-way ANOVA shows significant differences between genotypes (P < 0.0001). A Tukey post hoc test shows that the arabidillo mutant is significantly different from wild-type (P < 0.01) and from Rescue 1 (P < 0.01) and Rescue 2 (P < 0.05). Different lowercase letters indicate significant differences between genotypes. Error bars show ± SE of the mean. (b) Mean lateral root density 8 d after germination for wild-type (black bar), arabidillo1/2 mutant (white bar) and two independent arabidillo1/2 mutant lines constitutively expressing SELAGIDILLO-GFP from the 35S promoter (Rescue 1 and Rescue 2, grey bars). One-way ANOVA indicated that differences between genotypes were not quite significant (P = 0.057), although pairwise t-tests comparing the mutant and rescue lines with the wild-type indicated that the wild-type was different from the mutant (*, P < 0.05) but not different from either rescue line. Error bars show ± SE of the mean.

We showed a dose-dependent inhibition of the wild-type P. patens spore germination rate by ABA (Fig. 4b-d). This suggests that ABA has a similar negative regulatory role in both spore and seed germination, despite the different developmental origins of spores and seeds. We showed that P. patens spores showed much lower sensitivity to ABA than A. thaliana seeds, as 5 µM ABA, which would completely inhibit A. thaliana germination, significantly reduced the spore germination rate without inhibiting germination completely (Fig. 4b,c). We also examined the response of physcodillo1A/1B/2 mutants, and found that both physcodillo1A/1B/2 mutant alleles are less sensitive to the ABA-mediated inhibition of the spore germination rate than wild-type, implying that PHYSCODILLOs play a role in regulating the response to ABA during this process (Fig. 5a-d). When we examined other known ABA-mediated processes, namely response to desiccation/drought and freezing, we found that the physcodillo1A/1B/2 plants showed no obvious differences in their desiccation or freezing tolerance compared with wild-type plants (Fig. S4c,d), indicating a specific role in germination. To extend these findings, we examined the effects of PHYSCODILLO overexpression on ABA sensitivity in P. patens. Because of the lethality of constitutive PHYSCODILLO overexpression, we used heat-shock-inducible lines. We showed that heat-shock-inducible overexpression of the PHYSCODILLO2 protein can lead to ABA hypersensitivity during early growth, when spores are germinated in the presence of ABA (Fig. 5e,f). An inducible transgenic line expressing the β-glucuronidase (GUS) protein from the heat-shock promoter showed no such hypersensitivity, linking the observed phenotype to PHYSCODILLO overexpression (data not shown). We previously showed that arabidillo1/2 mutants in A. thaliana do not have altered sensitivities to ABA with regard to LR development (Nibau et al., 2011); however, we did not investigate other potential ABA functions for ARABIDILLOs in A. thaliana.
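The germination comparisons reported here rely on the Kruskal-Wallis test, which reduces to ranking all observations together and computing the H statistic. A minimal sketch follows, assuming no tied values; the two samples are made-up germination scores, not the study's data:

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic for a list of samples (no ties assumed).

    H = 12 / (N(N+1)) * sum_i(R_i^2 / n_i) - 3(N+1),
    where R_i is the rank sum of group i and N the total observation count.
    """
    all_vals = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(all_vals)}  # ranks 1..N
    n = len(all_vals)
    h = 12.0 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3.0 * (n + 1)
    return h

# two illustrative samples with completely separated values
h = kruskal_h([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```

In practice H is compared against a chi-squared distribution with (number of groups - 1) degrees of freedom, and tied values require a correction factor; library implementations such as scipy.stats.kruskal handle both.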
To determine whether the role of PHYSCODILLOs in regulating ABA responses during germination is conserved, we examined seed germination in A. thaliana. Remarkably, we found that the A. thaliana arabidillo1/2 mutant is relatively insensitive to the ABA-mediated inhibition of seed germination compared with the wild-type. Moreover, ARABIDILLO1-overexpressing and 'rescue' lines displayed an opposite ABA-hypersensitive phenotype (Fig. 6a). Both mutant and overexpressing seeds germinated as wild-type in the absence of ABA (data not shown). These data imply a conserved role for ARABIDILLO homologues in ABA-regulated germination across land plants, as well as a unique role in LR development in flowering plants (Coates et al., 2006). To further investigate this conserved germination function, we asked whether the early-diverging P. patens and S. moellendorffii ARABIDILLO homologues could rescue the A. thaliana arabidillo1/2 mutant germination phenotype. Reintroduction of either PHYSCODILLO or SELAGIDILLO into the arabidillo mutant led to a complementation of the ABA-insensitive germination phenotype (Fig. 6b,c). Therefore, despite the different developmental origins of spores and seeds, P. patens and S. moellendorffii ARABIDILLO homologues can replace ARABIDILLOs during A. thaliana germination, demonstrating a novel and evolutionarily ancient function for the ARABIDILLO protein family regulating the germination of desiccation-resistant dispersal units in response to ABA.

Fig. 5 (caption, partially recovered) … patens PHYSCODILLO-overexpressing line shows ABA hypersensitivity during early growth. (a) Germination of wild-type and physcodillo triple mutant line 16 spores on medium containing 5 µM ABA or solvent-only control. Kruskal-Wallis tests showed significant (P < 0.05) differences between genotypes/treatments on days 4, 5, 6, 7, 8 and 11. Dunn's tests showed significant (P < 0.01) differences between wild-type spores with and without ABA on days 4, 5, 6, 7, 8 and 11, significant (P < 0.05) differences between ABA-treated wild-type and untreated physcodillo mutants on days 5, 6, 7, 8 and 11 and significant (P < 0.05) differences between wild-type and ABA-treated physcodillo mutants on days 5, 6, 7, 8 and 11. (b) Germination of wild-type and physcodillo triple mutant line 8 mutant spores germinated on medium containing 5 µM ABA or solvent-only control. Kruskal-Wallis tests showed significant (P < 0.05) differences between genotypes/treatments on days 4, 5, 6, 10 and 14. Dunn's tests showed significant (P < 0.01) differences between wild-type spores with and without ABA on all days and significant (P < 0.05) differences between ABA-treated wild-type and untreated physcodillo mutants on all days. (c) Insensitivity of physcodillo triple mutant 16 to 5 µM ABA. Data are shown for day 8. A Kruskal-Wallis test for differences between genotypes and treatments was significant (P < 0.05). Dunn's test identified differences between groups: a and b, P < 0.001; b and c, P < 0.05; a and c, P < 0.05. Error bars show ± SE of the mean. (d) Insensitivity of physcodillo triple mutant 8 to 5 µM ABA. Data are shown for day 14. A Kruskal-Wallis test for differences between genotypes and treatments was significant (P < 0.05). Dunn's test identified differences between groups: a and b, P < 0.001; b and c, P < 0.05; a and c, P < 0.05. Error bars show ± SE of the mean. (e) Transgenic pHSP::PHYSCODILLO2-GFP plants were either grown at 22°C on medium containing either solvent control or 25 µM ABA (left-hand bars), or exposed to daily 1-h heat shock and grown on medium containing either solvent control or ABA (right-hand bars). One-way ANOVA showed significant differences between treatments, with a Tukey …

Discussion The production of dispersal units such as spores and seeds was a critical step enabling plant survival and movement on land.
Germination of such structures is tightly regulated to ensure that plants establish themselves in the right place and at the right time under favourable conditions (Holdsworth et al., 2008). In flowering plants and gymnosperms, seeds are multicellular structures that protect the diploid embryo, which gives rise to the subsequent sporophyte generation. In bryophytes, unicellular spores arise by meiosis and give rise to the haploid gametophyte, and are therefore of different developmental origin from seeds (Pires & Dolan, 2012). Our studies reveal for the first time that the hormone ABA has evolved to regulate functionally equivalent germination processes in spore- and seed-producing plants. ARABIDILLO-family proteins represent a conserved node in a germination-regulatory network that includes ABA, in both seeds and spores. ABA has ancient evolutionary origins, being produced in cyanobacteria and all major eukaryote lineages: shared roles for ABA in stress tolerance have been proposed across these taxa (Takezawa et al., 2011, 2015). ABA-mediated stomatal control has ancient origins (Ruszala et al., 2011; Lind et al., 2015) and ABA has also been implicated in developmental transitions in land plants, green algae and the Apicomplexan Toxoplasma (Takezawa et al., 2011). It is tempting to speculate that ARABIDILLO and PHYSCODILLO proteins have conserved protein interactors and/or transcriptional targets in spore and seed germination, especially given their highly conserved sequences, domain structure and proteasomal regulation (Nibau et al., 2011; Fig. 1). Interestingly, in charophytes, ABA can inhibit germination in light-treated oospores (Takatori & Imahori, 1971). Whether charophyte ARABIDILLO homologues (which our searches suggest exist; Timme & Delwiche, 2010; Hori et al., 2014; Matasci et al., 2014; Wickett et al., 2014) are part of this regulation is currently unknown.
We also propose that ARABIDILLOs acquired a novel ABA-independent function in flowering plants, regulating LR development, via interaction with flowering plant-specific MYB transcription factors (Gibbs & Coates, 2014). Our work reveals a retention of ancient functions for plant ARABIDILLO proteins, despite them evolving to have novel additional roles in flowering plants. Furthermore, we have added a new dimension to the land plant ABA story, demonstrating the first conservation of molecular mechanisms between spore and seed germination. Supporting Information Additional Supporting Information may be found online in the supporting information tab for this article: Methods S1 Generation of PHYSCODILLO-GFP transgenic plants, construction of arabidillo mutant rescue lines, construction of physcodillo1a/1b/2 triple deletion mutants and screening procedure.
Effectiveness Of Learning Media Using Contextual-Based Macromedia Flash for Junior School Students Al Hikmah Medan Macromedia Flash is software for building interactive learning media that can improve student learning achievement. 21st-century learning is a result of the Industry 4.0 revolution in education, in which students are expected to participate actively in learning. This is addressed by the 2013 curriculum, which places students at the center of learning so that learning can run effectively. The objectives of this study are: (1) to determine whether 75% of students achieve learning-outcome mastery with a KKM of 70; (2) to examine the improvement of student learning outcomes; (3) to assess the timeliness of learning. The study population numbered 280 students in 8 classes, while the sample, class VIII-5, comprised 35 students. This research is a quasi-experimental study. The research hypotheses were tested using a one-tailed test, the cumulative frequency test, and a paired t-test. The results showed (1) classical completeness of student learning outcomes of 80%, indicating that mastery has been achieved; (2) a value of sig < α, so the students' posttest learning outcomes are better than the pretest; (3) use of time in accordance with the plan. Introduction Mathematics is one of the subjects that occupies an important role in education. In the field of education, mathematics plays an important part in the development of science and technology. Given its importance, mathematics is taught from elementary to secondary education. However, mathematics learning still tends to use lecture or conventional methods, so students find it difficult to understand the material and concepts provided. This also makes students less interested in learning mathematics and leads them to regard mathematics as a difficult and boring subject.
Based on observations made by the researchers at Al-Hikmah Medan Private Middle School, student learning outcomes are still low: on the daily tests of class VIII, only 18 of 50 students received scores in the range 70-100, while 32 students scored below the KKM of 70. This concerned the researchers, as the low results of class VIII indicate that students do not understand the material and concepts taught. In addition, interviews with the mathematics subject teachers showed that students were less interested in learning. To make students interested in learning, the learning process needs to use engaging learning media so that learning objectives can be achieved. This is in line with Irvan's view [4] that the role of teachers in the 21st century is to create independent and enjoyable learning, one alternative being learning media. The 21st century is known as the age of the information society, as can be seen from the emergence of the digital society. The Industry 4.0 revolution has had an impact on education, visible in technology-based (ICT) learning media. Agustina, Akrim, Nasrudin, Ahmar, and Rahim [1] found that applying learning media can improve students' motivation and ability to learn, because learning media provide text, images, video, audio, and animation. A medium suited to this problem is learning media built with the Macromedia Flash software. According to Sukamto [7], Macromedia Flash is a multimedia platform and software used for animation, games, and internet applications that can be viewed, played, and run in Adobe Flash Player. Macromedia Flash is software used to create animated images. Masykur [5] notes that the use of Macromedia Flash as a learning medium is useful for teachers as a tool in preparing teaching materials and organizing learning.
According to Yudi [9], this medium can also stimulate students to manipulate concepts and to recognize the real form of abstract mathematical concepts. Macromedia Flash can improve the quality of mathematics learning because it can visualize the concepts of the material being taught and explain the material with an attractive appearance, so that students are encouraged to learn mathematics. Mudlofir [2] stated that "multimedia" comes from "multi", meaning many or varied, and "media", meaning a tool to convey a message; multimedia therefore means a combination of various media, such as text, graphics, audio, and visuals, in one tool. A device can be called a multimedia system if it meets the following requirements: a) the device must be able to convert analog form into digital form; b) it has interactive features, so that users can change the display as desired and enter data according to their needs; c) it is self-contained, providing convenience and completeness of content such that users can use it without the guidance of others. The problems studied are: (1) has the Macromedia Flash learning medium reached mastery of learning outcomes? (2) is the Macromedia Flash learning medium effective for student learning achievement? (3) is the use of time according to plan? Method and Material This research was conducted in the even semester of academic year 2017/2018 at Al Hikmah Junior School Medan. The study population numbered 280 students in 8 classes, while the sample, class VIII-5, comprised 35 students. This research is a quasi-experimental study.
The steps in the data analysis are as follows: (1) descriptive test, to obtain the mean and standard deviation; (2) prerequisite test, to check the prerequisite for hypothesis testing, namely normality; (3) hypothesis test, to assess the completeness of student learning achievement using the cumulative frequency test, and to assess the difference and effectiveness using a paired t-test. Research Result This research consists of two variables, namely X1 (before) and X2 (after), and one class, class VIII-5. Descriptive test results are shown in Table 1. Based on the table, the data have a normal distribution: the Kolmogorov-Smirnov significance value for the pretest is greater than α, and likewise for the posttest. The completeness of learning outcomes is shown in Tables 3 and 4, and the correlation test in Table 5 (Paired Samples Correlation). The correlation between the two variables has Sig. (0.00) < α (0.05), and the correlation value of 0.866 indicates a strong, positive relationship. The paired t statistical test is given in Table 6 (Paired Samples t-test). The effectiveness of Macromedia Flash is seen in Sig. 2-tailed (0.00) < α (0.05), meaning there is a difference between before and after treatment: posttest student learning outcomes are better than pretest. Research Discussion Moore D. Kenneth [8] defines effectiveness as a measure that states how far the target (quantity, quality, and time) has been achieved.
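The three-step procedure described above can be sketched with SciPy. The pretest/posttest scores below are hypothetical placeholders (the paper's raw class data are not reproduced here), so only the procedure, not the numbers, mirrors the study.

```python
# A minimal sketch of the analysis pipeline: descriptive statistics,
# a normality prerequisite check, and a paired t-test.
# All score values are hypothetical, not the paper's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pretest = rng.normal(60, 8, size=35)             # hypothetical X1 scores (n = 35)
posttest = pretest + rng.normal(15, 5, size=35)  # hypothetical X2 scores after treatment

# (1) Descriptive test: mean and standard deviation.
print(pretest.mean(), pretest.std(ddof=1))

# (2) Prerequisite: Kolmogorov-Smirnov test against a fitted normal.
ks = stats.kstest(pretest, "norm", args=(pretest.mean(), pretest.std(ddof=1)))

# (3) Hypothesis test: paired t-test (posttest vs. pretest) and the
# paired-samples correlation reported alongside it.
t = stats.ttest_rel(posttest, pretest)
r, _ = stats.pearsonr(pretest, posttest)
print(ks.pvalue, t.pvalue, r)
```

A significance value above α in step (2) supports normality; in step (3), Sig. < α indicates the posttest outcomes differ from the pretest.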
Based on the results of the study, completeness of learning outcomes has been met effectively in terms of quantity: the classical completeness criterion of 75% has been fulfilled, with 80.0% of students completing. There is a difference between before and after treatment, meaning that posttest student learning outcomes are better than pretest; this shows that the quality criterion has also been met. The learning time achieved in the experimental class using Macromedia Flash was four meetings, or 8 x 45 minutes. Compared with the usual learning done so far, there is no difference between the learning time achieved in the experimental class and that of normal learning. Thus, the learning time in the experimental class using Macromedia Flash is the same as that of ordinary learning with traditional methods, namely four meetings of 8 x 40 minutes. This meets the criterion for learning time, namely that the learning time achieved is at most the same as that of ordinary learning; thus the learning-time criterion in the experimental class using Macromedia Flash has been achieved. From the above data it can be concluded that Macromedia Flash multimedia has a good influence on learning outcomes. This is strengthened by several theories. According to Wati [6], in the teaching and learning process multimedia functions as a messenger conveying knowledge, skills, and attitudes to students. Multimedia learning can motivate students' thoughts, feelings, attention, and willingness to learn. Multimedia has interactive capabilities, so it can be a good alternative as a tool in learning. Computers as multimedia can be used as learning media, and many computer programs can be used in learning mathematics; one of them is the Macromedia Flash program.
One of the benefits of using instructional media in the teaching and learning process is that it can clarify the presentation of messages and information, thereby expediting and improving the learning process and its outcomes. Putri [6] states that the use of multimedia learning is considered very important for teachers to improve student learning achievement, one example being the Macromedia Flash program. Thus it can be concluded that learning using Macromedia Flash multimedia affects mathematics learning outcomes and is better suited to the learning process than traditional learning models.
2020-01-16T09:05:38.974Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "57ffde55589db555241c373339adfdfe2a4f43e2", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1429/1/012002", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "9a6118c357b8e80321f2fdc532e865671d152be2", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
86613344
pes2o/s2orc
v3-fos-license
THE PHOTOPERIODIC CONTROL OF GROWTH AND DEVELOPMENT OF CHENOPODIUM RUBRUM L. PLANTS IN VITRO The influence of the photoperiod on growth, flowering, and seed development in vitro of Chenopodium rubrum L., a short-day annual, was examined. Chenopodium rubrum plants modify their growth and reproductive development in accordance with the photoperiod. With an increase of day length, growth was stimulated, flowering was delayed, seed development occurred earlier, and the plants produced more seeds. By altering photoperiods during induction and evocation of flowering, it is shown that the photoperiod experienced by seedlings during early reproductive development determines the pattern of plant growth to the end of ontogenesis, the time to flowering, and the course of seed development. It is therefore concluded that growth and reproductive development of C. rubrum are photoperiod-sensitive during a precise, short part of its life cycle. INTRODUCTION Being sessile organisms, plants cannot choose their surroundings and have to modify their growth and development according to the environment. They often respond to environmental variation with phenotypic plasticity (Galloway, 2004). The developmental cycle of annuals, starting with seed germination and ending with seed maturation, is well synchronized with seasonal changes and is completed before the start of the growth-limiting season. Light is one of the most important environmental signals regulating plant development. Plants register the quantity, quality, periodicity, and direction of light, according to which they modify many physiological processes, from germination to the architecture of adult plants and reproductive development (Franklin and Whitelam, 2004). For example, in 22 desert annuals from Israel, great differences in the number of leaves, leaf shape, type of branching, seed size, and seed coat color resulted from exposure to different day lengths (Gutterman and Evenari, 1972). Previous results (Mitrović et al., 2002) showed that the photoperiod affects C. rubrum growth (Živanović et al., 1995) and seed size. Chenopodium rubrum L. sel. 184 is a qualitatively short-day weedy annual sensitive to small changes in day length, with a defined critical night length of 8 h (Tsuchiya and Ishiguri, 1981). It is sensitive to photoperiodic stimulation of flowering as early as the cotyledonary stage (Seidlová and Opatrná, 1978), when six adequate photoperiodic cycles are sufficient for photoperiodic flower induction. As an early-flowering species (Cumming, 1967), it is a suitable model plant for studying ontogenesis in vitro. The in vitro culture method was used because of the precise control it provides over environmental conditions (Scorza, 1982). The aim of this study was to evaluate the effects of different photoperiodic conditions on C. rubrum growth, flowering, and seed development. Seed propagation Plants for seed propagation were grown in a greenhouse of the Siniša Stanković Institute for Biological Research in Belgrade (44º 49´ N, 20º 29´ E) under conditions of natural day length from February (10 h of ...) Plants in vitro Experiments were carried out with intact C. rubrum plants derived from seeds sown in vitro. Seeds were surface-sterilized with 4% Na-hypochlorite for 2 min, washed with sterile distilled water, and aseptically sown on moistened filter paper in Petri dishes. Uniform germination was attained by exposure to temperature and dark/light cycles (24 h of darkness at 32°C, 24 h of darkness at 10°C, and 48 h of white light at 32°C). Four-day-old seedlings were transferred to glass jars containing 100 ml of MS (Murashige and Skoog, 1962) mineral solution supplemented with sucrose (5%) and gelled with agar (0.7%).
Measurements and statistics Every 7 days during the 10 weeks, plant height was measured and the number of leaves, number of fully developed flowers, and number of plants with matured seeds were determined. At the end of the 10th week, matured seeds were collected, dried for one month, and weighed (four replicates of 100 seeds). Treatment effects were determined using analysis of variance (ANOVA) combined with multiple range tests (significance level of p<0.05). Effect of the photoperiod on growth With an increase of day length, growth is stimulated (Fig. 1a). At the end of their life cycle (after 10 weeks of culturing in vitro), plants grown under a 24 h long-day photoperiod (noninductive for flowering of C. rubrum) were approximately twice as high as plants grown under a 16 h/8 h photoperiod and even seven times higher than plants grown under a 14 h/10 h photoperiod, both of the latter being inductive for flowering of C. rubrum. This is in agreement with the results of Cook (1975), showing that C. rubrum plants grown under a 15 h/9 h photoperiod were higher and had more nodes than plants grown under a 12 h/12 h photoperiod. The only exception to the trend between day length and plant height (Fig. 1a) was exhibited by plants grown under continuous darkness (inductive for flowering of C. rubrum), due to etiolation. The most intensive growth during the first 8 weeks was noticed under a 16 h/8 h photoperiod (Fig. 1a), which corresponds to natural photoperiods for C. rubrum. As already stated, six adequate photoperiodic cycles at the cotyledonary stage of development are sufficient for photoperiodic flower induction of C. rubrum (Opatrná et al., 1980). Fig. 1. Effect of different photoperiods (10 weeks of continuous light - 24 h/0 h, 10 weeks of continuous darkness - 0 h/24 h, 10 weeks of an 8 h/16 h photoperiod, 10 weeks of a 14 h/10 h photoperiod, 10 weeks of a 16 h/8 h photoperiod, 6 days of an 8 h/16 h + 9 weeks of a 16 h/8 h photoperiod, or 6 days of a 14 h/10 h + 9 weeks of a 16 h/8 h photoperiod) on C. rubrum growth during ontogenesis in vitro: plant height (a) was measured and the number of leaves (b) was determined every 7 days for 10 weeks; means ± SE, n = 48. The pattern of C. rubrum growth to the end of ontogenesis is also affected by the photoperiod applied during flowering induction, as well as by the photoperiod following shortly after (evocation of flowering) (Fig. 1a). A significant difference in height is noticeable among plants grown under different photoperiods for only the first 6 days of the total of 10 weeks (and under the same photoperiod for the following 9 weeks), while plants grown under different photoperiods for 9 of the 10 weeks (and under the same photoperiod only during the first 6 days) are similar in height (Fig. 1a). This points to the importance of the photoperiod during induction and evocation of flowering for the determination of final plant height. Plants grown for the first 6 days under an 8 h/16 h photoperiod and then transferred to a 16 h/8 h photoperiod for the remaining 9 weeks were about two times shorter than plants grown under a 16 h/8 h photoperiod for all 10 weeks and three times higher than plants grown continuously under an 8 h/16 h photoperiod (Fig. 1a, Fig. 2). Similarly, plants grown for the first 6 days under a 14 h/10 h photoperiod and transferred to a 16 h/8 h photoperiod for the remaining 9 weeks were about two times shorter than plants grown under a 16 h/8 h photoperiod for all 10 weeks and at the same time twice as high as those grown continuously under a 14 h/10 h photoperiod (Fig. 1a, Fig. 2).
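The treatment comparison described under Measurements and statistics (one-way ANOVA followed by multiple range tests at p<0.05) can be sketched with SciPy. The plant-height values below are hypothetical placeholders, not the paper's measurements, and the Bonferroni-corrected pairwise t-tests stand in for the unspecified multiple range test.

```python
# One-way ANOVA across three photoperiod treatments, followed by a simple
# pairwise follow-up. Heights are illustrative values only.
import numpy as np
from scipy import stats

h_24 = np.array([30.1, 28.5, 31.2, 29.8, 30.5])  # hypothetical heights, 24 h/0 h (cm)
h_16 = np.array([15.2, 14.8, 16.1, 15.5, 14.9])  # hypothetical heights, 16 h/8 h
h_14 = np.array([4.1, 4.5, 3.9, 4.3, 4.0])       # hypothetical heights, 14 h/10 h

# One-way ANOVA: do the treatment means differ at all?
f, p = stats.f_oneway(h_24, h_16, h_14)

# Follow-up in the spirit of a multiple range test: pairwise t-tests
# with a Bonferroni correction for the three comparisons.
pairs = [(h_24, h_16), (h_24, h_14), (h_16, h_14)]
p_adj = [min(1.0, stats.ttest_ind(a, b).pvalue * len(pairs)) for a, b in pairs]
print(f, p, p_adj)
```

With clearly separated groups such as these, both the overall ANOVA and every pairwise comparison come out significant at p<0.05.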
Transient inhibition of growth at the time of flowering (Opatrná et al., 1980; Ulmann et al., 1980; Mitrović, 1998) was noticeable (Fig. 1a). The longer the inductive day length was, the more delayed were both flowering (Fig. 3a) and the flowering-related transient inhibition of growth (Fig. 1a). Under 8 h days, growth stopped at the time of flowering (Fig. 1a, Fig. 3a) and plant tips started to dry up as early as the 7th to 8th week of culturing (Fig. 3b). Plants grown under noninductive continuous light did not flower (Fig. 3a) until the end of the 10th week of culturing, and stem elongation was relatively linear (Fig. 1a). Fig. 2. Chenopodium rubrum plants grown in vitro under different photoperiodic conditions (10 weeks of an 8 h/16 h photoperiod, 10 weeks of a 14 h/10 h photoperiod, 10 weeks of a 16 h/8 h photoperiod, 6 days of an 8 h/16 h photoperiod + 9 weeks of a 16 h/8 h photoperiod, 6 days of a 14 h/10 h photoperiod + 9 weeks of a 16 h/8 h photoperiod, or 10 weeks of continuous light - 24 h/0 h), after 10 weeks of culturing. Different photoperiodic conditions also affected the number of leaves (Fig. 1b). Extension of day length (shortening of night length) stimulated leaf development. Cook (1975) noticed an increase in the number of leaves of C. rubrum plants grown under 15-h days compared to ones grown under 12-h days. Like stem elongation (Fig. 1a), leaf development (Fig. 1b) was also affected by the photoperiod to which the seedlings were exposed during the first 6 days (flowering induction) and the photoperiod following immediately after (evocation of flowering). Leaf development was inhibited (Fig. 1b) with the start of seed development (Fig. 3b). In plants grown in continuous darkness, leaf development stopped with flowering (around the 3rd week), and plant tips started to dry after the 7th to 8th week of culturing. In plants grown under noninductive continuous light, there was no flowering and the number of leaves increased linearly throughout all 10 weeks (Fig. 1b). A high correlation was found between the number of leaves and flowering percentage in C. rubrum grown in continuous darkness and C. murale grown under inductive photoperiodic conditions (Mitrović, 1998; Mitrović et al., 2000). Effect of the photoperiod on flowering The shorter the inductive day was, the earlier flowering occurred (Fig. 3a). In plants grown in continuous darkness, flowering occurred as early as after the 1st week of culturing, while in plants grown under a 16 h/8 h photoperiod, flowering did not start until the 4th week. Adequate day length during the first 6 days is sufficient for flowering induction. However, the time to flowering (and growth, as already shown) is also determined by the photoperiod applied during evocation of flowering (Fig. 3a). Plants grown continuously under a 14 h/10 h photoperiod flowered during the 2nd week of culturing. Transferring plants grown under the same photoperiod after 6 days to a 16 h/8 h photoperiod delayed flowering to the 3rd week, while plants grown continuously under a 16 h/8 h photoperiod flowered after 4 weeks. A similar trend in flowering is evident in comparing three groups of plants: 1 - grown continuously under an 8 h/16 h photoperiod, 2 - grown for 6 days under an 8 h/16 h photoperiod and transferred to a 16 h/8 h photoperiod, and 3 - grown continuously under a 16 h/8 h photoperiod (Fig. 3a). Effect of the photoperiod on seed development The photoperiod affected C. rubrum seed development in vitro with regard to both the time of seed maturation (Fig. 3b) and the number of matured seeds per plant (Fig. 4a). The longer the day, the earlier seed maturation occurred (Fig. 3b) and the higher the number of matured seeds (Fig. 4a). Seed maturation started 4-5 weeks after flowering and, like flowering, responded to day length during flowering induction and evocation. Plants grown for the first 6 days under a 14 h/10 h photoperiod and then transferred to a 16 h/8 h photoperiod for the remaining 9 weeks produced about three times more seeds than plants grown continuously under a 14 h/10 h photoperiod and about four times fewer seeds than plants grown continuously under a 16 h/8 h photoperiod (Fig. 4a). Fig. 4. Effect of different photoperiods (10 weeks of a 14 h/10 h photoperiod, 6 days of an 8 h/16 h photoperiod + 9 weeks of a 16 h/8 h photoperiod, 6 days of a 14 h/10 h photoperiod + 9 weeks of a 16 h/8 h photoperiod, or 10 weeks of a 16 h/8 h photoperiod) in vitro on the number of produced seeds and seed weight: matured seeds were collected from each plant and their number (a) was counted at the end of the 10th week of culturing, and seeds were weighed (b) after one month of drying (four replicates of 100 seeds); means ± SE. Plants grown continuously under an 8 h/16 h photoperiod flowered (Fig. 3a) but did not produce seeds (Fig. 3b). Plants grown under an 8 h/16 h photoperiod during the first 6 days and then transferred to a 16 h/8 h photoperiod for the remaining 9 weeks produced approximately half the total number of seeds produced by plants grown continuously under a 16 h/8 h photoperiod (Fig. 4a). The highest number of seeds matured under the longest inductive photoperiod (16 h/8 h), while approximately 12 times fewer seeds matured under a 14 h/10 h photoperiod (Fig. 4). This is in agreement with Cook's (1975) finding that C. rubrum plants produced a significantly greater number of seeds under a 15 h/9 h photoperiod than under a 12 h/12 h photoperiod. Chenopodium rubrum plants grown in continuous darkness (conditions inductive for flowering) flowered (Fig. 3a) but did not produce seeds (Fig. 3b). Centaurium pulchellum plants grown under continuous darkness in vitro did produce seeds, but those seeds were not viable (Cvetić et al., 2004). Seeds collected from 10-week-old plants were dried for one month at room temperature and examined. Like seed number (Fig. 4a), seed weight (Fig. 4b) was affected by the photoperiod applied during flowering induction and evocation. This is in agreement with Cook (1975), who showed that in 10-day-old C. rubrum seedlings, seed weight is determined by the photoperiod from the 4th to 8th day of reproductive development. He also showed that plants grown under a 15 h/9 h photoperiod produced a greater number of small seeds than plants grown under a 12 h/12 h photoperiod. Similar results were obtained with the quantitatively short-day plant C. quinoa (Bertero et al., 1999) and with different species of the genus Chenopodium (Bewley and Black, 1982): plants grown under short days produced larger seeds than those grown under long days. In our in vitro conditions, seeds collected from plants grown under a 16 h/8 h photoperiod were significantly heavier than those collected from plants grown under a 14 h/10 h photoperiod (Fig. 4b). As in previous work (Mitrović et al., 2002), we obtained the opposite result in C. rubrum plants grown in a greenhouse and outdoors (under the shorter winter photoperiods, plants produced a small number of 4.3-times-heavier seeds compared to plants grown during the summer under long photoperiods). We believe that temperature plays a significant role in determining the weight of seeds produced under the same photoperiod. In vitro culture provides an optimal supply of mineral nutrients, optimal humidity, and a temperature of 25ºC, while in the greenhouse a 15 h/9 h photoperiod was accompanied by high temperature and low humidity. We showed that C. rubrum plants modify their growth and development in accordance with the photoperiod. With an increase of day length, plant height increased, flowering was delayed, seed development occurred earlier, and the plants produced more seeds. The obtained results showed that the growth pattern to the end of ontogenesis, flowering, and seed development are all determined by the photoperiod experienced by seedlings during the early phases of reproductive development - induction and evocation of flowering. Chenopodium rubrum shows crucial sensitivity to the photoperiod during a short, precise, and very early period of its life cycle, when key processes in its development are determined. Fig. 3. Effect of different photoperiods (10 weeks of continuous light - 24 h/0 h, 10 weeks of continuous darkness - 0 h/24 h, 10 weeks of an 8 h/16 h photoperiod, 10 weeks of a 14 h/10 h photoperiod, 10 weeks of a 16 h/8 h photoperiod, 6 days of an 8 h/16 h photoperiod + 9 weeks of a 16 h/8 h photoperiod, or 6 days of a 14 h/10 h photoperiod + 9 weeks of a 16 h/8 h photoperiod) on C. rubrum flowering and seed development in vitro: the number of fully developed flowers (a) and the number of plants with matured seeds (b) were determined every 7 days during 10 weeks; means ± SE, n = 48. In C. quinoa (Bertero et al., 1999), the effect of the photoperiod on seed weight was strongly influenced by temperature, while growth temperature affected seed coat weight in Plantago lanceolata (Lacey et al., 1997). In agreement with Cook (1975), our results indicate that C. rubrum seed weight is determined early during reproductive development. In nature, C. rubrum plants receive photoperiodic induction for flowering when summer days become shorter. Since the size and number of seeds is determined during early reproductive development, natural flowering induction works toward minimizing seed weight and maximizing seed number, favoring physiological mechanisms that work under suboptimal photoperiods and thereby maximizing the probability of survival (Cook, 1975).
2018-12-05T20:24:24.354Z
2007-01-01T00:00:00.000
{ "year": 2007, "sha1": "ef65f3677f1108767050118af16fd86a9c22d612", "oa_license": "CCBY", "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0354-46640703203M", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ef65f3677f1108767050118af16fd86a9c22d612", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
16637808
pes2o/s2orc
v3-fos-license
Sperm DNA Fragmentation and Standard Semen Parameters in Algerian Infertile Male Partners Purpose To date, standard semen parameters have been the only parameters investigated in sperm samples of infertile men in Algeria. We investigated, for the first time, semen parameters according to sperm DNA fragmentation (SDF) in these subjects. Materials and Methods SDF was determined by a validated sperm chromatin dispersion test in 26 infertile men. Patients were split into two groups according to the SDF level estimated by the DNA fragmentation index (DFI): a low fragmentation group (LFG; DFI ≤18%) and a high fragmentation group (HFG; DFI >18%). The standard semen parameters were measured in both groups. Results We found that sperm concentration and motility were negatively correlated with DFI (r=-0.65 and r=-0.45, respectively; p<0.05), while morphology and semen volume were not (r=0.24 and r=-0.18, respectively; p>0.05). Sperm concentration was significantly higher in the LFG than in the HFG (37.57%±13.16% vs. 7.32%±3.59%, respectively; p<0.05), whereas no significant difference was observed for sperm motility and morphology. Conclusions Our findings suggest that SDF correlates well with both sperm motility and concentration but not with morphology. Thus, we conclude that SDF evaluation provides additional information regarding sperm quality and should be used as a complementary test for assessing semen characteristics in infertile males. INTRODUCTION Semen quality determination is of great importance both for the management of couple infertility and for reproductive toxicology.
Since 1980, the World Health Organization (WHO) has published five editions of the "Laboratory manual for the examination and processing of human semen" in order to achieve greater standardization of semen examination procedures and reference values [1]. These reference values were controversial before the latest edition of the manual: authors from different centers found that the cut-off limits for sperm concentration, morphology, or motility were either excessively high or excessively low in previous editions [2,3], leading to over- or under-diagnosis of infertility. However, the WHO criteria remain contentious even after publication of the fifth edition. For example, the sperm concentration cut-off shifted from 20 million sperm/mL to the 5th centile of sperm density (i.e., 15 million sperm/mL), which resulted in some men who were previously considered infertile being reclassified as fertile [4]. Some authors have sought to overcome the problem of a unique threshold by setting two thresholds for every standard parameter (concentration, morphology, and motility), so that men can be classified as fertile, sub-fertile, or of intermediate fertility [5]. Other researchers have sought additional tests beyond those recommended in the WHO manuals in order to better assess male fertility, among them sperm function tests, oxidative stress tests, and sperm chromatin and DNA fragmentation tests [6]. The latter seem to be the most widely adopted in routine clinical practice. Indeed, DNA integrity is crucial to ensuring that the fertilizing spermatozoon (SPZ) can sustain normal embryonic development of the zygote [7], and it correlates with reproductive success [8]. In this study, we used a sperm chromatin dispersion (SCD) test to evaluate sperm DNA fragmentation (SDF) in male partners of infertile couples, and explored the relationship of SDF with the WHO standard semen parameters.
Geographic variation in semen quality has been observed [9,10], and this is the first evaluation of SDF, alongside standard semen parameters, in an Algerian population. Study population Twenty-six male partners of couples who were candidates for intracytoplasmic sperm injection (ICSI) were involved in this prospective study at the Ibn Rushd Center of Reproductive Medicine of Constantine, Eastern Algeria. The necessary precautions were taken to protect the participants, according to the principles of the Declaration of Helsinki. Informed written consent was obtained from all of the patients. The men's average age was 37.50±0.88 years, and infertility dated back at least one year (6.04±0.54 years on average). Infertility was primary in 88.46% of cases and secondary in the remaining 11.54%. Its etiology varied among oligospermia, asthenospermia, and teratospermia. The patients had no history of radiotherapy, chemotherapy, chronic illness, or varicocele. Only four smokers were recorded. Semen sampling and preparation by density gradient centrifugation Semen samples were collected in sterile containers by masturbation after 3 to 5 days of sexual abstinence. After semen liquefaction at room temperature for at least one hour, density gradient centrifugation (DGC) was carried out. Briefly, 100 μL of PureSperm TM buffer (Nidacon International AB, Molndal, Sweden) were added to 900 μL of 100% PureSperm TM medium to obtain 1,000 μL of 90% PureSperm TM, and 550 μL of PureSperm TM buffer were added to 450 μL of 100% PureSperm TM medium to obtain 1,000 μL of 45% PureSperm TM. One milliliter of the liquefied semen was then layered on top of the two gradient layers (90% and 45%), the tube was centrifuged at 300 g for 25 minutes, and the supernatant was removed before adding 200 μL of washing solution, FertiCult TM Flushing medium (FertiPro N.V, Beernem, Belgium).
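The gradient dilutions above follow simple C1·V1 = C2·V2 arithmetic, which can be checked directly; the helper function name is ours, introduced only for illustration.

```python
# Final concentration after mixing a stock with buffer: C_final = C_stock * V_stock / V_total.
def diluted_concentration(stock_pct: float, stock_ul: float, buffer_ul: float) -> float:
    """Concentration (%) of the mixture of `stock_ul` of stock with `buffer_ul` of buffer."""
    return stock_pct * stock_ul / (stock_ul + buffer_ul)

layer_90 = diluted_concentration(100, 900, 100)  # 900 uL of 100% medium + 100 uL buffer
layer_45 = diluted_concentration(100, 450, 550)  # 450 uL of 100% medium + 550 uL buffer
print(layer_90, layer_45)  # 90.0 45.0 -> the two gradient layers
```

This reproduces the 90% and 45% layers described in the protocol from the stated volumes.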
After centrifugation at 500 g for 10 minutes, the washing solution was aspirated and a culture medium (FertiCult™ IVF medium) was added to the pellet. An aliquot of unprocessed semen was kept in order to be used fresh for SCD assessment in a subsequent stage. Standard semen parameter analysis Standard semen parameters (volume, concentration, motility, and morphology) were evaluated according to the WHO guidelines [11]. The concentration was assessed by examining samples using phase-contrast microscopy at a final magnification of 200× or 400× in a Makler counting chamber. A drop of 10 or 20 μL of diluted semen was introduced into the Makler chamber and covered with a coverslip. The patient was considered oligospermic if the sperm count was below 20 million/mL. Motility was evaluated using a simple grading system. At least five microscopic fields were assessed systematically to classify 200 SPZ. The motility of each SPZ was graded according to whether it showed rapid progressive motility (denoted as a); slow or sluggish progressive motility (b); non-progressive motility (c); or immobility (d). The percentage of a+b was calculated, and asthenozoospermia was diagnosed if this percentage was less than 50%. Smears were prepared for morphological evaluation, Papanicolaou stained, and finally assessed according to David's classification [12]. We used the cut-off of 30% to assess normality of sperm morphology [12]. DNA fragmentation assessment DNA fragmentation was assessed by the SCD test [13]. In the absence of massive sperm DNA breakage and following acid denaturation and removal of nuclear proteins, dispersed DNA loops produce a characteristic halo. Sperm with fragmented DNA do not develop such a halo, or it is small. A Halosperm® kit (Halotech DNA SL, Madrid, Spain) was used following these steps: Semen was diluted in culture medium to obtain a maximum concentration of 20 million SPZ per milliliter. 
Agarose (100 μL) was fluidified and mixed with the diluted semen to prepare the slide, which was covered with a coverslip. After removing the coverslip, the slide was immersed in the acid denaturing solution for 7 minutes, then incubated in a lysis solution for 25 minutes. Next, the slide was rinsed in abundant distilled water for 5 minutes and fixed with 70% ethanol, then with 100% ethanol, each time for 2 minutes. After drying, the slide was stained with the Diff-Quik reagent. Visualization was done under a bright-field light microscope (MOTIC B1 Series) with ×20 and ×40 objectives. We counted 300 to 500 SPZ while identifying those with DNA fragmentation according to the manufacturer's instructions [14] and then calculated the DNA fragmentation index (DFI) as: DFI (%) = (number of SPZ with fragmented DNA/total number of SPZ counted)×100. We chose to fix the DFI threshold at 18% to distinguish between two groups of patients: a high fragmentation group (HFG: DFI>18%) and a low fragmentation group (LFG: DFI≤18%). This threshold was used by other authors, who have indicated that SDF levels above 18%, as measured by SCD, are not compatible with the initiation and maintenance of a term pregnancy [15]. The mean ages of patients in the HFG and LFG were 37.84±1.13 years and 36.57±1.17 years, respectively, without a significant difference. Data analysis GraphPad Prism ver. 6.00 for Windows (GraphPad Software, La Jolla, CA, USA) was used for statistical analysis. To assess the relationships between DNA fragmentation and semen parameters, we calculated Spearman's correlation coefficient. The Mann-Whitney test was used to compare quantitative parameters between the HFG and LFG. Fisher's exact test was used to detect the difference in the frequency of morphologically normal SPZ between the two study groups. The data are presented as mean±standard error with a 95% confidence interval (95% CI), and statistical significance was set at 0.05. All reported p-values are from two-sided tests. 
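The DFI calculation and the 18% cut-off used to split patients into the HFG and LFG can be sketched as a small helper; this is only an illustration, and the function names are ours, not from the study:

```python
def dfi_percent(fragmented_count, total_count):
    """DNA fragmentation index: percentage of counted SPZ showing fragmentation."""
    if total_count <= 0:
        raise ValueError("total_count must be positive")
    return 100.0 * fragmented_count / total_count


def fragmentation_group(dfi, threshold=18.0):
    """HFG if DFI is strictly above the threshold, LFG otherwise (DFI <= 18%)."""
    return "HFG" if dfi > threshold else "LFG"
```

For example, 60 fragmented SPZ out of 300 counted gives a DFI of 20%, placing the sample in the HFG, while a DFI of exactly 18% falls in the LFG under the rule above.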
Sperm standard analysis The mean sperm concentration was 15.46±15.02×10⁶/mL (median, 4.5×10⁶/mL) and was distributed as follows: <5×10⁶/mL in 50%; between 5 and 20×10⁶/mL in 27%; and ≥20×10⁶/mL in 23% of the men. The mean sperm motility was 46.14%±3.27%, and 27% of the men had sperm progressive motility ≥50%. We used the cut-off of 30% (David's classification) to assess the normality of sperm morphology. In our series, 50% of the patients had 30% or more SPZ with normal morphology in their semen. Finally, the mean semen volume in all patients was 2.85±0.26 mL. DNA fragmentation index and sperm volume We did not find any correlation between semen volume and DFI (r=0.24; p=0.25) (Table 1), and when comparing the mean semen volume between the LFG and HFG, we did not find a significant difference (2.37±0.51 mL vs. 3.03±0.29 mL, respectively; p=0.22; Table 2). DNA fragmentation index and sperm morphology Plotting the DFI against the percentage of normal SPZ in each sample did not show any correlation between the two parameters (r=−0.18; p=0.68) (Table 1, Fig. 1C). Accordingly, there was also no significant difference in typical SPZ forms between the LFG and HFG (29.50±4.50 vs. 38.14±7.00, respectively; p=0.78; Table 2). It is worth noting that 63% of patients in the HFG had at least 30% morphologically normal SPZ. Even if this proportion was lower than that of the LFG (i.e., 85.71%), belonging to the HFG or LFG did not correlate with the frequency of morphologically normal SPZ, distinguished as either ≥30% or <30% (odds ratio=2.77, p=0.62, 95% CI [0.27-28.41]). DISCUSSION Our study helped elucidate the relationship between SDF and standard semen parameters in a sample of infertile men from Eastern Algeria. We used an SCD test because it shows, like the terminal uridine nick-end labeling (TUNEL) test, a strong relationship with the sperm chromatin structure assay (SCSA) for SDF, both in infertile men and donors of known fertility [16]. 
We noted that the mean sperm concentration and sperm motility in our sample were beneath the WHO limits, demonstrating why ICSI was the appropriate treatment in such cases. We found a negative correlation between the DFI and both sperm motility and concentration, whereas no correlation was observed with sperm morphology, so SDF does not seem to affect sperm morphology. Conflicting data exist in the literature concerning this issue. While some authors found the same results for motility and sperm concentration, whether with an SCD test [15], TUNEL assay [17], or SCSA [18], other authors did not find this correlation for any of the standard parameters [19] or found it only for sperm concentration [20]. The same observations have been made with regard to the correlation between sperm morphology and SDF [15,20]. These discrepancies could be explained either by differences in the SDF assays that have been used or by heterogeneity in the proportion of apoptotic bodies in the sperm samples used. These are factors that vary in size and density, occurring with great prevalence in men with poor-quality semen [21], and explaining why some authors found, in the same sperm sample, two types of SDF, one dependent on and the other independent of semen quality [22]. Sperm selection by swim-up and/or migration in a discontinuous density gradient should affect SDF determination by eliminating apoptotic bodies and highly fragmented sperm [22]. Such selection could explain the discrepancies between the studies. It could also explain the fact that we did not find a significant difference in motility between the LFG and HFG, since this parameter could be improved by DGC. Additionally, the most frequent abnormality we found in the LFG (92%) was asthenospermia, and this could contribute to explaining the non-significant difference in motility between the LFG and HFG. We used 18% DFI as the threshold value to distinguish between the LFG and HFG. 
To make this choice, we were guided by other publications using the same DNA fragmentation kit as ours [15]. Threshold values for infertility depend on the SDF assay type: studies using an SCD assay use a threshold varying from 17% to 22.75% [15,23,24], whereas those using SCSA [25] or a TUNEL assay [24,26] set the cut-off at 27% to 30% or 12% to 20%, respectively. We should note that, in our study, all patients with standard semen parameter abnormalities had a DFI>18%, and that 97.73% of men in the HFG, but only 30% in the LFG, suffered from at least one standard sperm parameter abnormality. This could indirectly demonstrate that impairment of semen parameters is associated with an increase in SDF. Additionally, it is well known that the cause of sperm DNA damage can be either testicular or extratesticular; thus, if SDF is due, for example, to a failure in DNA break repair, other aspects of spermatogenesis can be affected, resulting in abnormalities of sperm number, motility, or morphology. It can also be speculated that, if SDF is due to reactive oxygen species (ROS), then a relationship to sperm motility could be expected, because ROS cause lipid peroxidation of sperm membranes, which are rich in unsaturated fatty acids [27]. Furthermore, in highly differentiated elongated spermatids or mature spermatozoa, apoptotic events may be modified [28], so that SPZ mitochondrial activity, motility, and morphology can be normal although DNA is fragmented. Indeed, SPZ displaying translocation of membrane phosphatidylserine, as diagnosed by annexin V-positive staining, were found in sperm fractions with both high and low motility [29], as well as in morphologically normal SPZ [30]. Finally, we did not find any correlation between semen volume and DFI, and this could be explained by the fact that semen is mainly composed of seminal fluid secreted by the accessory glands, which makes volume independent of SDF. 
CONCLUSIONS Our results show that sperm concentration and motility were inversely correlated with SDF, whereas no correlation was observed between SDF and the other parameters. Taken together with the results of other authors, who have even claimed that sperm DNA integrity measurement is more reproducible and more objective than conventional parameters [31,32], our results allow us to propose that SDF screening should be used as a complementary sperm parameter in semen quality evaluation, providing useful information for the diagnosis of male infertility.
2016-05-14T02:51:02.320Z
2015-04-01T00:00:00.000
{ "year": 2015, "sha1": "c198a5dc5e775459c448d74d258778b031165cc8", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5534/wjmh.2015.33.1.1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c198a5dc5e775459c448d74d258778b031165cc8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
12957617
pes2o/s2orc
v3-fos-license
Identification of Baicalin as an Immunoregulatory Compound by Controlling TH17 Cell Differentiation TH17 cells have been implicated in a growing list of inflammatory disorders. Antagonism of TH17 cells can be used for the treatment of inflammatory injury. Currently, very little is known about natural compounds controlling the differentiation of TH17 cells. Here, we showed that Baicalin, a compound isolated from a Chinese herb, inhibited TH17 cell differentiation both in vitro and in vivo. Baicalin might inhibit newly generated TH17 cells via reducing RORγt expression, and together with up-regulating Foxp3 expression suppress RORγt-mediated IL-17 expression in established TH17 cells. In vivo treatment with Baicalin could inhibit TH17 cell differentiation, restrain TH17 cell infiltration into the kidney, and protect MRL/lpr mice against nephritis. Our findings not only demonstrate that Baicalin could control TH17 cell differentiation but also suggest that Baicalin might be a promising therapeutic agent for the treatment of TH17 cell-mediated inflammatory diseases. Introduction The T helper 17 (TH17) lineage, a lineage of effector CD4+ T cells characterized by production of interleukin (IL)-17, has been described based on developmental and functional features distinct from the classical TH1 and TH2 lineages [1,2]. TH17 cells are associated with the development and pathogenesis of a growing list of chronic inflammatory diseases, including rheumatoid arthritis, psoriasis, atopic dermatitis, and asthma [3,4,5]. Our studies, as well as others, have shown that TH17 cells also play a key role in the pathogenesis of systemic lupus erythematosus (SLE) [6,7,8,9,10,11]. Several studies have advocated that TH17 cells might be a promising therapeutic target for chronic inflammatory injury [12,13]. The differentiation of TH17 cells is initiated by transforming growth factor-β (TGF-β) and interleukin-6 (IL-6) in mice, and interleukin-23 (IL-23) is also required [14]. 
Signal transducer and activator of transcription 3 (STAT3), the aryl hydrocarbon receptor (AHR), and the retinoic-acid-receptor-related orphan receptor-γt (RORγt) mediate TH17 lineage commitment [15,16,17]. Several studies have indicated that 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), halofuginone, and retinoic acid could suppress the expression of these transcription factors, and subsequently inhibit the differentiation of TH17 cells [18,19,20]. However, few natural compounds restraining TH17 cells are known. Moreover, it is important to explore not only effective but also safe therapeutic agents for the treatment of TH17 cell-mediated inflammatory injuries. Baicalin, which is a main active ingredient originally isolated from the root of Huangqin (Scutellaria baicalensis Georgi), has a safety record in the clinic and has been used as an anti-inflammatory drug in traditional Chinese medicine [21,22]. Previous studies have shown that Baicalin could inhibit the proliferation of mononuclear cells, inhibit macrophage activation, and inhibit the production of TH1-related cytokines in different murine disease models [23,24]. Baicalin was shown to reduce the severity of experimental autoimmune encephalomyelitis (EAE) [23]. Since TH17 cells are important inducers of EAE, we hypothesized that Baicalin might inhibit inflammatory injuries by suppressing effector TH17 cells. Furthermore, previously published data confirmed that Baicalin inhibited the activation of AHR [25], which might have relevance to the proposed effect on TH17 cell development. In this study, we observed that Baicalin inhibited TH17 cell differentiation in vitro. Detailed studies showed that Baicalin might inhibit newly generated TH17 cells via suppressing RORγt expression, and together with up-regulating Foxp3 expression suppress RORγt-mediated IL-17 expression in established TH17 cells. 
Baicalin could inhibit the generation of TH17 cells in vivo, reduce TH17 cell infiltration into the kidney via inhibition of the CCL20-CCR6 signaling pathway, and could protect lupus-prone MRL/lpr mice against nephritis. Taken together, these findings suggest that Baicalin might be a promising therapeutic agent for the treatment of TH17 cell-mediated inflammatory diseases. Baicalin inhibits TH17 cell differentiation in vitro Baicalin (7-glucuronic acid, 5,6-dihydroxyflavone, molecular weight = 446.36; Figure S1A) is a flavonoid compound originally isolated from the Chinese herb Huangqin (Scutellaria baicalensis Georgi). First, using 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and flow cytometry, we observed that treatment with 20 μM Baicalin did not result in generalized inhibition of T cell proliferation or the cell cycle (Figure S1B and C); thus 20 μM Baicalin was used in most in vitro experiments. To determine whether Baicalin controls the differentiation of TH17 cells, CD4+CD25− T cells from B6 mice were isolated. Under TH17 culture conditions (TGF-β plus IL-6 stimulation), IL-17 mRNA expression was increased 2.9-fold compared to control cells on day 2 and 10.7-fold compared to control cells on day 3. Following addition of 20 μM Baicalin to the culture, IL-17 mRNA expression was inhibited. In fact, IL-17 expression was decreased to 1.2-fold on day 2 and 2.5-fold on day 3 compared to controls (Figure 1A). In addition, 20 μM Baicalin measurably inhibited IL-17 protein secretion (Figure 1B). We further showed that the suppression of TH17 cell differentiation was dependent on the dose of Baicalin (Figure 1C). These results provide evidence that Baicalin can suppress the development of TH17 cells. Baicalin inhibits IL-6 receptor and RORγt mRNA expression IL-6, an acute-phase protein induced during inflammation, may ''dictate'' TH17 cell differentiation [26]. 
Thus, we next determined whether Baicalin-mediated inhibition of TH17 cell differentiation is IL-6-dependent. CD4+CD25− T cells from B6 mice were stimulated with anti-CD3, anti-CD28, and the indicated cytokines in the presence or absence of Baicalin. IL-6 receptor (IL-6R) mRNA expression was analyzed by real-time RT-PCR at the indicated times. As expected, IL-6R mRNA was suppressed by Baicalin (Figure 2A). Further study confirmed that Baicalin could reduce IL-6R protein expression during TH17 cell differentiation (Figure 2B). RORγt, which is a key transcription factor involved in TH17 cell differentiation, is elicited by IL-6 and TGF-β [17]. During TH17 cell differentiation in vitro, addition of Baicalin reduced RORγt expression (Figure 2C). IL-23 expands the pool of TH17 cells [27], but Baicalin failed to affect the expression of the IL-23 receptor during TH17 differentiation (Figure S2A). TGF-β and IL-21 can induce STAT3-mediated IL-17 expression during TH17 differentiation [16], while Baicalin did not restrain IL-21-induced STAT3 and IL-17 mRNA expression during TH17 cell differentiation (Figure S2B and 2C). These data suggest that Baicalin could reduce IL-6R and RORγt expression during TH17 cell differentiation, which implies that Baicalin might suppress de novo TH17 cell differentiation via inhibition of IL-6-mediated RORγt expression. cells [19,28]. CD4+CD25− T cells from B6 mice were stimulated with anti-CD3, anti-CD28, and the indicated cytokines; after 2 days of stimulation, 20 μM Baicalin was added for an additional 2 days. Foxp3 and IL-17 intracellular expression in CD4+ T cells was determined by flow cytometry. Surprisingly, T cells cultured with Baicalin under conditions that otherwise promoted IL-6-dependent TH17 cell differentiation converted to Foxp3+ T cells with a concomitant decrease in TH17 cell differentiation (Figure 3A). 
In addition, Baicalin could inhibit RORγt-mediated IL-17 mRNA expression in established TH17 cells (Figure S3A). Thus, we hypothesized that Baicalin could inhibit RORγt transcriptional activity partly via up-regulation of endogenous Foxp3 expression, because a previous report showed that Foxp3 could inhibit RORγt-mediated IL-17 expression and TH17 cell differentiation [29]. To support this hypothesis, we further showed that Baicalin in synergy with TGF-β could up-regulate endogenous Foxp3 expression in CD4+CD25− T cells (Figure S3B), that Baicalin could promote endogenous Foxp3 expression (Figure 3B, middle panel) and reduce RORγt expression during TH17 cell differentiation (Figure 3B, upper panel), and that Baicalin together with forced expression of Foxp3 could inhibit RORγt and IL-17 mRNA expression during TH17 cell differentiation (Figure 3C). Collectively, these data imply that Baicalin could inhibit RORγt expression in established TH17 cells and, together with up-regulated Foxp3, inhibit RORγt-mediated IL-17 expression. Baicalin inhibits TH17 cell differentiation in vivo To determine whether Baicalin controls the development of TH17 cells in vivo, lupus-prone MRL/lpr mice were treated with Baicalin or vehicle for nine weeks. Notably, mice without Baicalin treatment developed severe nephritis with increased urine protein, while mice receiving Baicalin were protected against nephritis, with decreased urine protein (Figure 4A-C). Baicalin also protected the survival and liver function of MRL/lpr mice (Table 1 and Figure S4). Furthermore, Baicalin reduced the spleen index and inhibited differentiation of TH17 cells in spleens (Figure 4D-F). Interestingly, Baicalin only slightly affected the frequency of Treg cells in vivo (Figure 4F). Further study showed that Baicalin reduced the infiltration of TH17 cells into the kidneys (Figure 5A). 
Inflamed tissue produces CCL20 to facilitate the migration of CCR6-expressing TH17 cells to the inflamed tissues [30,31]. Baicalin treatment inhibited CCL20 mRNA expression in kidneys and CCR6 expression in TH17 cells (Figure 5B and C), which indicated that Baicalin might interfere with TH17 cell infiltration into the kidneys via inhibition of CCL20-CCR6 expression. Baicalin inhibits IL-17-mediated gene expression of inflammatory molecules IL-17 acts as a potent inflammatory cytokine, and mediates leukocyte infiltration and tissue destruction [2,32]. Baicalin inhibited the expression of genes encoding inflammatory molecules (ICAM-1, VCAM-1, and IL-17) in HUVEC that were induced by exogenous IL-17 (Figure 6A). In support of these results, Baicalin reduced IL-17-induced adhesion of T cells to HUVEC (Figure 6B). In addition, Baicalin also suppressed gene expression of inflammatory mediators in MRL/lpr mouse kidney (Figure S5). Together, these data indicate that Baicalin could partially inhibit IL-17-induced inflammation. Discussion Baicalin, which is a main active ingredient originally isolated from the root of Huangqin (Scutellaria baicalensis Georgi), has a safety record in the clinic and has been used as an anti-inflammatory drug in traditional Chinese medicine [21]. Baicalin has been found to possess anti-inflammatory, antioxidant, and anti-allergic properties, and appears to contribute to the treatment of chronic inflammatory diseases, including hepatitis, allergic diseases, and EAE [22,23,33]. The binding of IL-6 to IL-6R plays a key role in the transcription of RORγt during the development of TH17 cells, and IL-6 blockade by treatment with an anti-IL-6R monoclonal antibody might inhibit the development of TH17 cells [34,35]. IL-6-deficient mice do not express RORγt and IL-17 [17]. Together, these data suggest that IL-6 is a key cytokine inducing the expression of RORγt and the development of TH17 cells. 
Our data showed that Baicalin treatment inhibited the expression of IL-6R and RORγt under culture conditions promoting TH17 cell differentiation. However, the down-regulation of IL-6R was not accompanied by decreased expression of STAT3. IL-21 is also a key cytokine for STAT3-mediated TH17 cell differentiation [16], and Baicalin did not suppress IL-21R and STAT3 mRNA expression induced by TGF-β and IL-6 (Figure S2A), which might explain why the reduced expression of IL-6R was not accompanied by decreased mRNA expression of STAT3. Baicalin also hardly restrained IL-21-induced STAT3 and IL-17 mRNA expression during TH17 cell differentiation (Figure S2B and C). But Baicalin could affect the STAT3 phosphorylation induced by TGF-β and IL-6 (Figure S2D). Thus, these data implied that transcript levels of STAT3 did not mirror protein levels, and that Baicalin might regulate TH17 cell differentiation by affecting STAT3 phosphorylation but not the expression of STAT3. Furthermore, cytokine receptors such as IL-4R and IL-12Rβ2 have negative impacts on TH17 cell differentiation; our supplemental data showed that 20 μM Baicalin did not affect the mRNA expression of IL-4R and IL-12Rβ2 during TH17 cell differentiation (Figure S2A). These data implied that Baicalin might restrain de novo TH17 cell differentiation by abrogating IL-6-mediated RORγt transcription, and Baicalin-mediated inhibition of STAT3 activation might contribute to reduced STAT3-mediated gene expression, such as RORγt and IL-17A [36]. A previous study has proved that Foxp3 could interact with RORγt and inhibit RORγt-directed IL-17 expression during TH17 cell differentiation [29]. To support this hypothesis, we showed that Baicalin together with TGF-β could up-regulate endogenous Foxp3 mRNA and down-regulate RORγt mRNA expression (Figure 3A, B, and Figure S3B). 
In addition, exogenously overexpressed Foxp3 could inhibit RORγt-mediated IL-17 mRNA expression in TH17 cells, and Baicalin together with Foxp3 might augment the inhibition of IL-17 mRNA expression (Figure 3C). Interestingly, we also noticed that RORγt mRNA expression was inhibited by forced expression of Foxp3, and that Baicalin together with exogenous Foxp3 had an additive inhibitory effect on RORγt expression (Figure 3C), although further study should be performed to clarify the mechanism of Foxp3-mediated inhibition of RORγt expression. Altogether, these data implied that Baicalin could up-regulate Foxp3 expression and suppress RORγt-mediated IL-17 expression in established TH17 cells. In our study, we unexpectedly found that Baicalin not only inhibited the differentiation of TH17 cells but also promoted TGF-β-mediated differentiation of Treg cells in vitro. Thus, Baicalin appeared to play a dual role in T-cell differentiation by mediating a reciprocal balance of Foxp3 and RORγt. IL-6 is a key cytokine that inhibits Foxp3 expression during Treg cell differentiation [19], and inhibition of IL-6R expression could increase Treg cell differentiation [38]. Thus, Baicalin might induce Foxp3 expression by relieving IL-6-mediated inhibition of Foxp3 expression in vitro. In contrast to the observation that Baicalin enhanced Foxp3 expression in vitro, Baicalin treatment in MRL/lpr mice only slightly affected CD4+Foxp3+ T cells. This minor expansion of Foxp3+ T cells might stem from the strong inhibition of excessive inflammatory cytokines and the lack of TGF-β in vivo [38,39,40]. Although we observed that 20 μM Baicalin did not affect cytokine expression during the differentiation of TH1 and TH2 cells in vitro (Figure S6), further study should be done to explore the effects of different concentrations of Baicalin on the differentiation of TH1 and TH2 cells. 
The number of TH17 cells was found to be increased in murine models of SLE, including BXD2 [41], SNF1 [42], NZB×NZW F1 [43,44], and Ro52-knockout mice [45]. Our previous studies, as well as others, showed that there was an expansion of TH17 cells in MRL/lpr mice [9,46,47]. Our unpublished data also showed that treatment with an anti-IL-17 antibody could protect MRL/lpr mice against disease onset. Together, these data suggested that TH17 cells might play a key role in the pathogenesis of MRL/lpr mice. Here we observed that Baicalin could reduce IL-6R and RORγt mRNA expression in the spleens of MRL/lpr mice (Figure 4E), and Baicalin could accordingly inhibit TH17 cell differentiation in vivo (Figure 4F). These results were consistent with the in vitro study of Baicalin on the differentiation of TH17 cells. Interestingly, we noticed that a high percentage of IL-17 producers were CD4− cells in MRL/lpr mice (Figure 4F). Actually, TH17 cells (CD4+IL-17+ T cells) are the main source of IL-17 during chronic inflammatory responses. However, in mice other subsets can also express IL-17, including CD8+ T cells, invariant natural killer T (NKT) cells, and γδ T cells [48,49,50,51]. Thus, we hypothesized that CD4−IL-17+ cells were also expanded in MRL/lpr mice due to severe inflammatory responses, but further study should be performed to dissect the specific source and function of these CD4−IL-17+ cells. Our data showed that Baicalin not only inhibited the differentiation of TH17 cells in the spleen but also reduced the infiltration of TH17 cells into the kidney, which might result from inhibition of the CCL20-CCR6 signaling pathway, since expression of CCL20 and CCR6 mRNA was found to be down-regulated by Baicalin. IL-17 is a key TH17 cell-derived cytokine, which is implicated in leukocyte recruitment [32]. Treatment with Baicalin could suppress IL-17-mediated adhesion of T cells to HUVEC. 
Our in vivo studies further confirmed that Baicalin could inhibit IL-17-related inflammatory mediators, such as IL-22, IL-1, TNF-α, VCAM-1, and ICAM-1 (Figure S5A). Through the potent inhibition of these adhesion molecules and inflammatory mediators, Baicalin might further impede the recruitment of TH1, TH2, or other effector cells in vivo. Thus, we do not conclude that the observed inhibition of TH17 cells is the only function of Baicalin in vivo. Further study should be done to dissect the other specific subsets of effector cells affected by Baicalin in MRL/lpr mice. In addition, our unpublished data also showed that Baicalin could inhibit TH17 cell differentiation in mice with complete Freund's adjuvant-induced inflammatory arthropathy and in ovalbumin-immunized mice. Together, these data suggest that Baicalin could inhibit TH17 cell differentiation in vivo and exert therapeutic effects via inhibition of IL-17-mediated inflammation. Our findings define a role of Baicalin in TH17 lineage commitment, thereby linking this natural compound to adaptive immunity in a way that has important implications for immune homeostasis and inflammatory diseases. Taken together, these findings suggest that Baicalin might be a promising therapeutic agent for the treatment of TH17 cell-mediated inflammatory diseases. Mice and histopathology C57BL/6 (B6) and lupus-prone MRL/lpr mice were purchased from the Shanghai Laboratory Animal Center (Chinese Academy of Sciences). The animal study was approved by the institutional animal care and use committee of Zhongshan Hospital, Fudan University (ZS0862701). All mice were maintained under pathogen-free conditions. The onset of autoimmune disease in MRL/lpr mice was monitored by the assessment of proteinuria. 
After clinical onset of disease, Baicalin (100 mg/kg; purity >98%, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China; Baicalin was dissolved in phosphate-buffered saline prior to experimentation) or PBS vehicle was given intraperitoneally every day for 9 weeks. For detection of urine protein, the total urine of 24 h was first collected, and the assay was performed according to the manufacturer's directions (Roche). Relative urine protein increase = urine protein (mg/L) at the indicated time point − urine protein (mg/L) at week 0. At the time of sacrifice (9 weeks after treatment), the kidneys were fixed with formaldehyde, embedded in paraffin, and stained with hematoxylin and eosin (H&E) and for IL-17 (Santa Cruz Biotechnology, CA). The slides were read and interpreted in a blinded fashion, grading the kidneys for glomerular inflammation, proliferation, crescent formation, and necrosis. Interstitial changes and vasculitis were also noted. Scores from 0 to 3 were assigned for each of these features and then added together to yield a final renal score. For example, glomerular inflammation was graded: 0, normal; 1, few inflammatory cells; 2, moderate inflammation; and 3, severe inflammation. Detailed pathological assessment was performed as described previously [52]. The spleens of MRL/lpr mice were collected to calculate the spleen index. Spleen index = spleen weight (g) divided by body weight (g). CD4+ T cell isolation, culture conditions, and western blot Intracellular cytokine staining and flow cytometry analysis For detection of TH17 cells, cells obtained from in vitro cultures or spleen cells from mice were incubated for 5 hours with 50 ng/ml phorbol myristate acetate (PMA) and 750 ng/ml ionomycin in the presence of 20 μg/ml brefeldin A (Sigma-Aldrich) in a tissue culture incubator at 37°C. 
Surface staining with FITC-conjugated anti-CD4 (eBioscience) was first performed for 15 min; cells were then re-suspended in Fixation/Permeabilization solution according to the manufacturer's instructions (Invitrogen), and intracellular staining with PE-conjugated anti-IL-17 or an isotype control was performed according to the manufacturer's protocol (eBioscience). After staining, we first gated on CD4+ T cells; CD4+IL-17+ cells were then analyzed in the CD4+ gate on a FACSCalibur (BD Biosciences, San Jose, CA), followed by analysis with FlowJo software (Tree Star, San Carlos, CA). For detection of Treg cells, cells were treated according to the Foxp3-staining kit protocol (eBioscience). Gating was on CD4+ T cells first, and then Foxp3+ cells were analyzed in the CD4+ gate. Cytokine production Sorted CD4+CD25− T cells from B6 mice were cultured under neutral conditions or in the presence of 5 ng/ml TGF-β plus 20 ng/ml IL-6, with or without 20 μM Baicalin, for 3 days. IL-17 concentrations were determined by ELISA (R&D Systems, Minneapolis, MN). HUVEC and T cell co-culture HUVEC were seeded into 12-well plates (10,000 cells/well) and allowed to adhere for 24 hours. HUVEC were then stimulated with 50 ng/ml IL-17 (eBioscience) for 24 hours with or without 20 μM Baicalin. After stimulation, Jurkat cells were added to the HUVEC cultures at a 1:5 ratio, and the co-culture was extended for an additional 24 hours. The HUVEC were then washed twice to eliminate the non-adherent Jurkat cells. The adhesion of T cells was counted under ×200 magnification. RNA isolation and real-time RT-PCR Total RNA was prepared with the use of the Trizol reagent (Invitrogen). cDNA was synthesized with a first-strand cDNA synthesis kit and oligo(dT) primers (Fermentas, Hanover, MD), and gene expression was examined with a Bio-Rad iCycler Optical System (Bio-Rad, Richmond, CA) using a SYBR Green real-time PCR Master Mix (Toyobo, Osaka, Japan). 
The 2^−ΔΔCt method was used to normalize transcription to human 18S or murine β-actin and to calculate fold induction relative to controls. The primer pairs are listed in Table 2. Statistical analysis Quantitative data were expressed as means ± standard deviation (SD). Statistical significance was determined by ANOVA followed by the Bonferroni post-hoc test for multiple comparisons, or by Student's t-test. A paired t-test was also used in some cases. All p values ≤ 0.05 were considered significant.
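The 2^−ΔΔCt normalization described above is straightforward to reproduce. The following is a minimal, hypothetical Python sketch (not the authors' code; all Ct values are invented for illustration):

```python
# Illustrative sketch of the 2^-DeltaDeltaCt method: the Ct of the target
# gene is first normalized to a reference gene (e.g., 18S or beta-actin),
# then expressed as fold induction relative to the control condition.

def fold_change_ddct(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Fold induction relative to control via the 2^-DeltaDeltaCt method."""
    dct_sample = ct_target_sample - ct_ref_sample      # delta-Ct, treated
    dct_control = ct_target_control - ct_ref_control   # delta-Ct, control
    ddct = dct_sample - dct_control                    # delta-delta-Ct
    return 2.0 ** (-ddct)

# A target amplifying 2 cycles earlier than control (after normalization)
# corresponds to a 4-fold induction:
print(fold_change_ddct(24.0, 15.0, 26.0, 15.0))  # -> 4.0
```

Each earlier cycle of the normalized Ct corresponds to a doubling of template, which is why the fold change is exponential in −ΔΔCt.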
Pathological changes of liver one year later in CHB patients with negative HBV DNA Background In this study, we aim to determine the hepatic pathological changes in HBV DNA-negative chronic hepatitis B (CHB) patients after 12 months of antiviral therapy. Methods Blood routine indicators, including platelet count (PLT) and white blood cell (WBC) count, were determined. Coagulation function was evaluated by determining the prothrombin time (PT) and prothrombin time activity (PTA), together with HBV DNA quantification and alpha-fetoprotein (AFP). Virological data, including hepatitis B surface antigen (HBsAg)/antibodies against hepatitis B surface antigen (anti-HBs), hepatitis B e antigen (HBeAg)/antibodies against hepatitis B e antigen (anti-HBe), and antibodies against hepatitis B core antigen (anti-HBc), were tested. Pathological examination was performed on the liver puncture tissues. Based on the HBV DNA data in the 12-month follow-up of the cases that received antiviral therapy during this time, the cohort was divided into group A (HBV DNA negative at baseline, HBV DNA negative after 12 months, N = 79) and group B (HBV DNA negative at baseline, HBV DNA turning positive after 12 months, N = 13). Statistical analysis was performed on each test index of the two groups. Results The inflammation grade of group A showed significant improvement after 12 months of treatment (P < 0.05). The pathological inflammation grade of group B increased after one year, and the liver function indices and PTA (P < 0.05) levels were all increased. Pathological results indicated that the proportion of disease progression in group A decreased after the 12-month follow-up, while that proportion increased in group B. Significant differences were noted in AFP levels between the patients with progression in group A and those with progression in group B. Conclusion Negative HBV DNA does not mean controlled hepatitis B.
Hepatitis B patients who turn HBV DNA positive during antiviral therapy are prone to disease progression, and thus special attention should be paid to HBV DNA monitoring. Meanwhile, close monitoring of changes in liver function, PTA and AFP levels may help to detect changes in the disease in a timely manner. Background Hepatitis B virus (HBV)-induced chronic hepatitis B (CHB) is mainly characterized by liver inflammation that causes multi-organ damage. Despite the extensive application of the hepatitis B vaccine, a large number of patients suffer from hepatitis B infection in mainland China, and the cure rate is extremely low. HBV, a member of the hepadnavirus family, gradually triggers hepatic inflammation, resulting in fibrosis and cirrhosis, and it accounts for at least 50% of the cases of primary hepatocellular carcinoma worldwide [1-4]. Chronic HBV infection is the most important risk factor for HCC [4,5]. It has been well acknowledged that persistent HBV replication is associated with cirrhosis development; as a consequence, it can lead to liver decompensation and the occurrence of HCC [6]. In clinical settings, nucleoside analogues (NAs) have been commonly utilized for treating CHB; however, only a functional cure is achieved in most cases. In China, the subsequent hazard caused by HBV infection remains a severe threat to individuals and poses a great challenge to public health programs. With the advances in medical technology and laboratory testing techniques, extensive efforts have been made to identify indicators that are more specific and sensitive for the diagnosis of hepatitis B. HBV DNA, an important indicator of HBV replication, is the most direct, specific and sensitive index for the determination of HBV infection [6].
HBV itself causes no direct damage to hepatocytes, but the level of HBV DNA can be utilized to evaluate the replication, infectivity and treatment efficiency of HBV. To date, treatment efficiency in patients with HBV infection is mainly determined by the decline rate, magnitude or negative conversion of the HBV DNA titer, no matter which kind of antiviral drug treatment is used [7,8]. HBV DNA titer is an important predictor of the efficacy of antiviral therapy, which contributes to monitoring and efficacy evaluation after antiviral therapy. Studies have confirmed that HBV DNA-positive hepatitis B patients show a better prognosis after active treatment [9,10]. Nowadays, there are still some controversies about the treatment efficiency of antiviral regimens for HBV DNA-negative hepatitis B patients. To understand the correlation between the change of HBV DNA after antiviral therapy and liver disease prognosis in patients with HBV DNA-negative hepatitis B, we designed this retrospective analysis involving HBV DNA-negative hepatitis B patients who had been receiving antiviral therapy for over 6 months. We hope to provide a theoretical basis for a deeper understanding of the clinical treatment of HBV infection and the development of liver diseases. Patients Ninety-two HBV DNA-negative hepatitis B patients (male: 66; female: 26; mean age: 43.50 ± 10.81 yrs) who received antiviral therapy for more than 6 months in Beilun People's Hospital (Ningbo, China) from January 2011 to December 2015 were enrolled in this study. Upon enrollment, the patients continued to receive antiviral therapy. CHB diagnosis was performed based on the consensus recommendations of the Asian Pacific Association for the Study of the Liver (APASL) [11]. HBV DNA negativity was defined as a level below the detection limit of 1000 IU/ml.
The exclusion criteria were as follows: (i) those with other liver diseases, such as drug-induced hepatitis, alcoholic hepatitis, autoimmune hepatitis, or those with liver injuries caused by toxic substances or other causes; (ii) those infected with other hepatitis viruses (e.g. hepatitis A, C, D, E virus) or with HIV infection; (iii) patients who received medications that may affect immune function within 6 months; (iv) those who received radiation or chemotherapy. All participants signed the informed consent. This study was approved by the Ethics Committee of Beilun People's Hospital (Ningbo, China). Grouping In total, 92 CHB patients who had undergone antiviral therapy with Entecavir (Zhengda Tianqing Pharmaceutical, Jiangsu, China) or Lamivudine (GlaxoSmithKline Pharmaceuticals, Jiangsu, China) via oral administration (0.5 mg or 0.1 g, respectively, once per day) every morning prior to food and/or drink for more than 6 months were enrolled in this retrospective study. Persistent antiviral therapy was given during the 12-month follow-up. HBV DNA quantification was performed following the manufacturer's instructions (Zhijiang Biotech, Shanghai, China). Real-time quantitative PCR was used to determine serum HBV DNA using an ABI 7500 system. The detection limit was 1000 IU/ml. Based on the HBV DNA concentration after 12 months of antiviral therapy, the patients were classified into group A (HBV DNA negative at baseline, HBV DNA negative after 12 months, N = 79) and group B (HBV DNA negative at baseline, HBV DNA turning positive after 12 months, N = 13), respectively. Observation indices Venous blood samples were obtained from each patient and immediately stored at − 80°C for subsequent analysis.
Liver function was evaluated by determining the levels of aspartate aminotransferase (AST), alanine aminotransferase (ALT), direct bilirubin (DBIL), total bilirubin (TBIL), albumin (ALB), alkaline phosphatase (ALP), globulin (GLB), and glutamyl transpeptidase (γ-GT), using a Hitachi automatic biochemical analyzer (Hitachi 7600-110, Japan). Coagulation function was evaluated by determining the prothrombin time (PT) and prothrombin activity (PTA), measured using an automatic coagulation analyzer (Sysmex CS5100, Japan). The blood routine test included platelet (PLT) count and white blood cell (WBC) count, measured by an automatic blood cell analyzer (Sysmex XN, Japan). The hepatic tumor marker determined was alpha-fetoprotein (AFP); a chemiluminescence microparticle immunoassay (Abbott i2000SR, USA) was used to detect serum AFP in hepatitis B patients. Qualitative detection of virological data, including hepatitis B surface antigen (HBsAg), antibodies against hepatitis B surface antigen (anti-HBs), hepatitis B e antigen (HBeAg), antibodies against hepatitis B e antigen (anti-HBe), and antibodies against hepatitis B core antigen (anti-HBc), was performed by enzyme-linked immunosorbent assay (ELISA; Xinchuang, Xiamen, China). Twelve months after Lamivudine or Entecavir administration, serum samples were obtained again for the determination of the same indices. Pathological analysis Hepatic tissues were obtained by puncture using 16 G disposable needles (C. R. Bard, NJ, USA). Subsequently, the hepatic biopsy specimens were immediately fixed in 4% paraformaldehyde and embedded in paraffin. The sections were conventionally stained with hematoxylin and eosin (HE). The images were observed under a microscope (Olympus BX51, Tokyo, Japan) for pathological and histological analysis. All liver biopsies in the study were interpreted by the same pathologist.
The inflammatory activity or fibrosis in hepatic tissues of chronic hepatitis was divided into 4 grades or stages (i.e. G1-4 or S1-4) in clinical pathological diagnosis, with G0 (no hepatic necroinflammation) and S0 (no fibrosis), according to the Scheuer scoring system. A score of 1-4 was used in the statistical analysis to indicate the corresponding grade or stage. For the diagnostic criteria of pathological transformation, a stable condition was defined as stable symptoms and physical signs, stability of the various serum targets utilized for liver disease diagnosis, and a stable pathological inflammation or fibrosis grade after the 12-month follow-up, especially a similar pathological inflammation grade and inflammatory staging. An improved condition was defined as significant remission of symptoms and physical signs and improvement of the various serum targets and the pathological inflammation or fibrosis grade at the 12-month follow-up compared with the baseline levels, especially improvement in the pathological inflammation grade and inflammatory staging. Progression was defined as no significant improvement, or even deterioration, of clinical symptoms, physical signs, the various test findings, and the pathological inflammation or fibrosis grade at the 12-month follow-up compared with the baseline levels, especially deterioration in the pathological inflammation grade or inflammatory staging. Statistical analysis SPSS 19.0 software was used for the data analysis. The data were presented as mean ± standard deviation (SD). Differences between two groups were assessed using the paired t-test or the signed-rank test. Analysis of variance was carried out for comparisons of more than two groups. The Mann-Whitney U test was utilized to evaluate treatment effects. The chi-square test or Fisher's exact test was utilized for rate comparisons. A P value of less than 0.05 was considered statistically significant.
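As a hedged illustration of the statistical workflow just described (our own sketch using scipy with invented data, not the study's SPSS analysis), the three main comparisons can be run as follows:

```python
# Hypothetical data illustrating the tests described above:
# a paired within-group comparison (baseline vs. 12-month follow-up),
# a non-parametric between-group comparison, and a chi-square rate test.
import numpy as np
from scipy import stats

# Paired comparison within one group (e.g., ALT at baseline vs. month 12)
baseline = np.array([45.0, 38.0, 52.0, 41.0, 47.0, 39.0])
follow_up = np.array([36.0, 33.0, 44.0, 35.0, 40.0, 31.0])
t_stat, p_paired = stats.ttest_rel(baseline, follow_up)

# Non-parametric comparison of ordinal grades between two groups
group_a = np.array([1, 1, 2, 2, 1, 3])
group_b = np.array([2, 3, 3, 2, 3, 3])
u_stat, p_mwu = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Rate comparison via chi-square on a 2x2 table
# (rows: groups; columns: [progressed, not progressed] counts)
table = np.array([[12, 67], [5, 8]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(p_paired < 0.05, 0 < p_mwu <= 1, dof)
```

With small-cell counts such as these, Fisher's exact test (`stats.fisher_exact`) would be the usual substitute for the chi-square test, matching the criterion stated in the text.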
Patients characteristics Among the 92 HBV DNA-negative hepatitis B patients, 79 (85.87%) were still HBV DNA negative 12 months after antiviral treatment (termed group A; mean age: 43.57 ± 11.32 years). The remaining 13 (14.13%) were HBV DNA positive after 12 months of treatment with Entecavir or Lamivudine (termed group B; average age: 43.08 ± 7.27 years). No statistically significant differences were observed in age and gender between the two groups (P > 0.05, Table 1). Comparison of serum targets before and after one year Twelve months after antiviral treatment, the serum targets in group A were stable at the same level as at baseline. The average level of ALT and the viral indices (especially the proportion of HBeAg positivity) decreased, although no statistical difference was observed (P > 0.05). AFP, ALB, and PT levels in group A showed a significant decrease compared with the baseline levels (P < 0.05). PTA showed a significant increase (P < 0.05) compared with the baseline level. The serological results of group B indicated that the indicators of liver function were all increased about 12 months after treatment, especially ALT and AST. Additionally, PTA and AFP increased compared with the baseline levels. There was a significant difference in the PTA level at the 12-month follow-up compared with the baseline level (P < 0.05). The TBIL and DBIL levels of group A were significantly different from those of group B at baseline (P < 0.05). The ALT and DBIL levels of group A and group B showed significant differences (P < 0.05) about 12 months after treatment. Taken together, inflammatory activity in the liver tissues of hepatitis B patients whose HBV DNA turned positive 12 months after antiviral therapy was more prominent than in those who remained HBV DNA negative (Table 2).
Pathological outcome Pathological analysis indicated that inflammation in patients of group B was lower than that in group A at baseline. However, the inflammation grading in group A declined after 12 months of antiviral treatment. In group B, a significant increase was observed in the inflammation grading after the 12-month follow-up compared with that of group A (P < 0.05). Fibrosis in cases of both group A and group B showed progression; additionally, no statistical differences were observed between the two groups (P > 0.05, Table 2). For the cases with stable conditions, there were no significant differences in the stability rate between group A and group B (P > 0.05). However, the percentage of cases with improvement in disease condition in group A was higher than that in group B. In group A, 19 cases (24.05%) showed improvement in inflammation and 18 (22.78%) showed improvement in fibrosis. In group B, 2 cases (15.39%) showed no improvement in the inflammation grade or the fibrosis stage. The disease progression rate was lower in group A than in group B. The progression rate of the inflammation grade in group A was significantly lower than that of group B (15.19% vs. 38.46%, P < 0.05). In contrast, the progression rate of the fibrosis stage in group A showed no statistical difference compared with that of group B (37.97% vs. 46.15%, P > 0.05). These results suggested that patients in group A showed a better response to antiviral therapy (Table 3). Statistical analysis of the two groups before and after the one-year follow-up and the indices of disease progression For the comparison of progression indicators of inflammation grade, significant differences were observed in the baseline AFP levels of group A compared with those of group B (P < 0.05). Additionally, we compared the indices of aggravation of fibrosis stage between the two groups. The HBV DNA levels between the two groups showed significant differences (P < 0.01).
No significant differences were observed among the other indicators within group A or group B between baseline and the 12-month follow-up. There was also no statistically significant difference in the analysis of the differences between the indicators of group A and group B at the 12-month follow-up. Discussion HBV DNA serves as an indicator of HBV replication. Quantitative detection of HBV DNA is used as the main method for monitoring and prognosis evaluation of antiviral efficacy. Generally, virus activity in HBV DNA-negative patients is usually under control, and their conditions are stable for a certain period. However, there are indeed possibilities of relapse under low immunity or stress due to the presence of cccDNA in liver tissues [12]. Nowadays, there are also disputes on the recommendation of antiviral therapy for HBV DNA-negative patients. Besides, most HBV DNA-negative patients are not appropriately treated, which may lead to progression several years after treatment [13]. Thus, special attention should be paid to monitoring alterations of the liver in HBV DNA-negative patients by serology in clinical settings. In this study, 92 HBV DNA-negative hepatitis B patients who had been receiving antiviral therapy for over 6 months were included. We determined the effects of antiviral treatment on the prevention and prognosis of HBV reactivation in these patients and how to monitor the treatment response of patients with CHB under Chinese conditions. Extensive studies have confirmed that antiviral therapy contributes to the prognosis of HBV DNA-positive hepatocellular carcinoma (HCC) patients [9,10]. Besides, patients with latent HBV infection, in whom HBV DNA rather than HBV surface antigen is detected in serum, are apt to develop HCC [14]. This suggests that there might be a certain relationship between HBV infection and liver cancer [15].
Thus, antiviral therapy is necessary to control the progression of hepatitis. Meanwhile, HBV DNA examination can also provide an accurate basis for therapy. In our study, pathological analysis indicated that the inflammation grades and fibrosis stages of group B patients were lower than those of group A at baseline. At the 12-month follow-up, the inflammation grade and fibrosis stage in cases of group B showed deterioration, especially the inflammation grade (P < 0.05). In group A, the inflammation grade and fibrosis stage showed improvement. These results demonstrated that HBV DNA-negative hepatitis B patients showed a better response to antiviral therapy than their counterparts who turned HBV DNA positive. Some HBV DNA-negative hepatitis B patients did show a poor response after antiviral treatment. This may be related to HBV still replicating at a high level, with the condition still progressing. Additionally, it may be related to a weak response of the body to liver inflammation and poor liver reserve. ALT and AST were selected as liver function indicators in this study as they reflect liver damage specifically. An AST level exceeding the ALT level indicates deterioration of hepatic parenchymal damage, which serves as a sign of chronic aggravation. Our results showed that AST and ALT levels in group B were higher than those in group A at baseline and one year after treatment, especially the ALT levels, with statistical differences. In group A, the ALT level 12 months after treatment was lower than the baseline level. About 12 months after treatment, the liver function indices in group B were higher than the baseline levels. These indicators may have a warning effect on disease progression. In the presence of liver inflammation, there might be aberrant changes in liver synthesis and metabolism [16].
Patients with hepatitis B showed impaired liver function, and thus the conversion between indirect bilirubin and direct bilirubin was hampered, manifested as elevation of indirect bilirubin and decline of direct bilirubin. In this study, TBIL and DBIL levels in group B were significantly lower than those in group A at baseline (P < 0.05). Meanwhile, the DBIL level in cases of group A was significantly higher than that of group B at baseline. Furthermore, there were statistical differences in group A in terms of ALB, PT and PTA at the 12-month follow-up compared with the baseline levels (P < 0.05). The PTA level of group B was significantly increased at the 12-month follow-up compared with the baseline level (P < 0.05). These findings suggested that liver function in cases of group A was better than that of group B before the follow-up. At the 12-month follow-up, liver function showed improvement in group A, while liver inflammation in group B was more obvious. AFP is a glycoprotein belonging to the albumin family that is not expressed, or is expressed at low levels, in normal liver tissues; its level is significantly increased in hepatitis or cirrhosis tissues and is highest in liver cancer tissues [17]. It is mainly utilized as a serum marker for the diagnosis and efficacy monitoring of primary liver cancer. HBV infection is closely related to high expression of AFP [18]. Our data showed that the AFP level in group A was significantly lower at the 12-month follow-up than at baseline (P < 0.05). The AFP level in patients of group B was increased at month 12; however, no statistical difference was identified. The AFP level of group A was higher at baseline than that of group B, and the AFP level of group A was lower at month 12 than that of group B. These results indicated that the improvement of liver inflammation in group A was better than that of group B.
On this basis, we infer that the use of antiviral drugs and a sustained low level of HBV DNA may contribute to the improvement of hepatic function and the prevention of disease progression. Taken together, combined monitoring of liver function, PTA and AFP may be helpful to predict the progression of the disease. The improvement of inflammation in group A was superior to that of group B. This indicated that the application of antiviral agents and a persistently low level of HBV DNA may contribute to the improvement of liver function, thereby delaying disease progression. To the best of our knowledge, serum indicators alone are not adequate to establish the stage and extent of liver damage. For example, in cases with poor liver condition, hepatocyte necrosis may be severe while HBV is inactive, so the inflammatory response is weak and the serological indicators suggest that liver damage is mild. Therefore, many hepatitis B patients are not diagnosed until the end stage of hepatitis, especially HBV DNA-negative patients, which may mean missing the best window for therapy. The most direct evidence for clinical diagnosis of hepatitis is liver biopsy [19,20]. HBV DNA is an important factor in the progression of HBV-associated liver disease, but there are indeed many limitations on the study of liver tissues, such as ethical controversy and the complexity of sample collection [21]. Therefore, few reports have investigated the relationship between antiviral effects and the pathological grade, prognosis and disease progression of liver tissue in HBV DNA-negative hepatitis B patients. In this study, the degree of disease stability between group A and group B showed no statistical difference after 12 months of antiviral therapy. Nevertheless, the rate of disease improvement in group B was lower than that in group A, and the rate of disease progression in group B was higher than that of group A. This suggested that HBV DNA negativity cannot be used as evidence of the termination of HBV replication.
A high HBV DNA level indicates that the course of chronic hepatitis B may further progress to liver cirrhosis or even liver cancer. On the one hand, this may be related to mutations in the pre-C region of HBV, which result in decreased or eliminated HBeAg expression; such viral gene mutations do not affect HBV replication, which then results in persistent HBV infection [22]. Our results showed that the proportion of HBeAg positivity in the case group was also reduced after continued antiviral therapy. On the other hand, it may also be related to continuous antiviral therapy in hepatitis B patients, which may cause resistance to certain drugs and subsequent recurrence of viral replication. These findings confirmed that there is a close relationship between HBV replication and disease progression. In the future, we will focus on the specific mechanism of this process. In summary, antiviral therapy is effective for treating HBV DNA-negative hepatitis B patients and can improve the antiviral response. It contributes to the improvement of hepatic function and a delay of disease progression. Great attention should be paid to hepatitis B patients with significant fluctuation in HBV DNA viral load after antiviral therapy, as their disease conditions are still progressing. Close monitoring of liver function, PTA and AFP may help to detect changes in the disease condition in a timely manner. In addition, it is necessary to suspect whether drug-resistant mutations have occurred in these patients. Besides, attention should be paid to determining the genotype and drug-resistant mutations of hepatitis B virus. This also provides a new perspective for investigating the pathogenesis and progression of HBV-related disease. Further studies on the relationship between HBV DNA and HBV-related liver disease progression will be beneficial to the treatment and prevention of HBV-related liver diseases.
Statement on human and animal rights All human and animal studies have been approved by the appropriate ethics committee and have therefore been performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments.
Overfishing and habitat loss drive range contraction of iconic marine fishes to near extinction We used a novel approach to estimate the spatial contraction of sawfish populations and guide recovery efforts. INTRODUCTION The ocean comprises 99.8% of the habitable volume of our planet, yet its resources are not inexhaustible (1). Around the world, the intensity, spatial reach, and technical capacity of fisheries have expanded enormously over the past half-century (2,3). As a consequence, overfishing is unquestionably the primary threat to ocean biodiversity (4,5). Even as the stock health of some managed and monitored fisheries improves (6), many exploited species go unmonitored, making it difficult to track the extinction of marine fish species (7-9). The statistical approaches used to infer extinctions are typically based on time series of sightings data, which are difficult to obtain for wide-ranging species, particularly marine fishes (10,11). As a consequence, marine extinctions have been overlooked, as many marine populations have been exploited to the point of collapse long before monitoring began (9,12,13).
There is a rich body of theory describing how population abundance drives spatial patterns of site occupancy, whereby habitat occupancy and geographic range contract as numerical abundance declines (14-19). This dynamic geography theory posits that high-quality habitat is preferentially occupied first by individuals until, as density increases, they are forced to occupy poorer-quality habitats (20). Thus, a key insight of dynamic geography is that habitat quality can be effectively defined by the population growth rate r and can be derived from the abundance-occupancy relationship [Fig. 1A; (17,18,21)]. Specifically, the shape of the relationship may directly reflect habitat quality through population-specific parameters [e.g., death rates (17)]. Consequently, increasing death rates by overfishing may drive the local population growth rate below zero more quickly when habitat is lost; specifically, effective habitat quality is reduced per unit of available habitat [Fig. 1, A and B; (18)]. Thus, habitat loss can encompass the reduction in combined habitat availability and quality. This theory yields the prediction that the probability that populations are extinct is greater in locations with higher fishing pressure and lower ecological carrying capacity. Increasing fishing pressure has the ability to reduce the carrying capacity of a given population, leading to decreased levels of maximum population size [Fig. 1A; (22)]. Even in the absence of large-scale, long-term population and abundance time series, we can draw on dynamic geography theory to track and attribute causality of local marine extinctions using occurrence records, complemented by indices of ecological carrying capacity and fishing pressure.
Fig. 1. Linking dynamic geography to abundance-occupancy. (A) Changes in abundance-occupancy with varying levels of fishing, shown in different shades of red. The slope tangent to the line represents r, the population growth rate, which is synonymous with, and indeed a definition of, habitat quality. Increasing fishing pressure causes the occupancy curve to approach its asymptotic limit at a smaller given abundance compared to no fishing pressure. The maximum abundance of a given population shrinks under stronger fishing regimes, as shown with the point and the dashed line. (B) Curves derived from (A) showing changes in habitat quality (= r) with habitat availability under varying levels of fishing. When fishing pressure is high, the abundance-occupancy curve approaches its asymptotic limit (r = 0) at a lower given occupancy (A), resulting in a steeper decline in population growth rate and habitat quality for a given available habitat, resulting in a geographic range contraction.
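This qualitative prediction can be reproduced with a toy model. The following is our own illustrative sketch under simple logistic assumptions (not the paper's analysis): adding a fishing mortality term F to the per-capita growth rate lowers r everywhere, so r crosses zero at a smaller abundance and, via abundance-occupancy scaling, the occupied range contracts.

```python
# Toy logistic model with added fishing mortality F (hypothetical parameters):
#   r_eff(N) = r0 * (1 - N/K) - F
# Setting r_eff = 0 gives the equilibrium abundance, which shrinks as F rises.

R0 = 0.5     # hypothetical intrinsic per-capita growth rate
K = 1000.0   # hypothetical carrying capacity

def r_effective(n: float, fishing_mortality: float) -> float:
    """Per-capita growth rate under logistic density dependence minus fishing."""
    return R0 * (1.0 - n / K) - fishing_mortality

def equilibrium_abundance(fishing_mortality: float) -> float:
    """Abundance at which r_eff = 0 (zero when fishing exceeds r0)."""
    return max(0.0, K * (1.0 - fishing_mortality / R0))

# Equilibrium abundance (a proxy for occupied range under abundance-occupancy
# scaling) shrinks monotonically as fishing pressure rises:
for f in (0.0, 0.2, 0.4):
    print(f, equilibrium_abundance(f))
```

The monotone decline of the equilibrium with F mirrors the paper's Fig. 1: stronger fishing pushes the r = 0 crossing to lower occupancy, i.e., a contracted geographic range.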
The sawfishes (family Pristidae) are among the world's most threatened families of marine fishes (23). The imperiled status of many populations was only recently recognized, long after major declines and local extinctions had occurred (24,25). Three of the five species are Critically Endangered according to the International Union for Conservation of Nature (IUCN) Red List of Threatened Species: Largetooth Sawfish (Pristis pristis), Smalltooth Sawfish (Pristis pectinata), and Green Sawfish (Pristis zijsron). The other two species, Dwarf Sawfish (Pristis clavata) and Narrow Sawfish (Anoxypristis cuspidata), are Endangered (26). Sawfishes are highly vulnerable to population depletion: their tooth-studded rostra are easily entangled in nets, they live primarily nearshore in heavily exploited tropical and subtropical regions, and they have low reproductive outputs, yielding some of the largest ova in the animal kingdom (23,27). Sawfish fins, valued as a celebratory dish in some Asian cultures, are among the most valuable in the global shark fin trade (23). Sawfish rostra are sold for curios and medicine, while rostral teeth are prized as spurs for cockfighting (28). In the absence of adequate fishing restrictions, intensely exploited populations collapsed rapidly in the early 20th century. Today, sawfishes remain among the world's most valuable internationally traded wildlife, although most commercial international trade has been prohibited since 2007 under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (4,27). In places where they persist without enforced protections, sawfishes that are caught are often retained for consumption or sale, while others are killed in an effort to recover fishing gear or protect fishers (23,25). Sawfishes are an exemplar of one of the fundamental challenges of tracking biodiversity change: discerning severe population declines from local extinction for a group that is scarce, relatively infrequently encountered, and for
which there is little systematic monitoring [e.g., (29)].

We combine national-level sawfish occurrence surveys with ecological, socioeconomic, and political drivers in a space-for-time substitution framework (i.e., using spatial occupancy as a substitute for time series of abundances) to attribute causality of local extinctions. Here, we (i) report on the current status of sawfish occurrence, (ii) attribute causality with underlying mechanisms of local extinction using the listed threats from the IUCN Red List (fig. S1), and (iii) predict the probability of extinction in data-deficient nations (i.e., Presence Uncertain status) using 13 indices of ecological carrying capacity, fishing pressure, and management capacity (Table 1 and fig. S2). Note that the use of the terms "extinct" or "near extinct" refers to the local extinction, or the increasing risk of extinction, of a population and does not reflect the IUCN Red List category of Extinct (26).

RESULTS AND DISCUSSION

The 2014 publication of the IUCN Species Survival Commission Shark Specialist Group's (SSC SSG) Global Sawfish Conservation Strategy helped reveal the plight of sawfishes to the world (23, 28). We reviewed sawfish research activity from personal correspondence, published literature, and gray literature and documented 251 activities from 64 nations from 2014 to 2019. These activities were a combination of ongoing research efforts and targeted sawfish searches to determine the current status of sawfish occurrence (Fig. 2A).

All five sawfish species were historically found in the coastal waters of 90 nations, with the greatest species richness occurring in the Indo-West Pacific nations of India, Indonesia, Malaysia, Papua New Guinea, and Australia (Fig. 2B). Now, sawfishes are presumed extinct from more than half (n = 46) of these nations: 18 nations are missing at least one species, and 28 nations are missing two species (Fig.
2C). At least one species of sawfish still persists in 38 nations (fig. S3). Nevertheless, the uncertainty that remains due to the challenge of detecting rare marine species (30) means that the presence of sawfishes remains uncertain (Presence Uncertain) in 42 nations because either (i) the current presence of all sawfishes is unknown or (ii) although the presence of some species can be confirmed, the presence of others is unknown (Fig. 2D). We used the presence-absence data generated from these geographic distribution maps to predict the probability of extinction of sawfishes in the remaining Presence Uncertain nations (see Materials and Methods).

We find that most of the variation in sawfish occurrence in each nation is explained by ecological carrying capacity, fishing pressure, and management capacity indices, which accounted for 46.5, 35.8, and 16.5% of the summed variable importance from 1000 bootstrapped Boosted Regression Trees (BRTs), respectively (cross-validated pseudo r² = 0.57 and bootstrapped range = 0.44 to 0.66; Fig. 3 and table S1). We found that species was not a substantial contributor to the model (sum of variable importance of all five species <2%; Fig. 3, N to R, and table S1). This suggests that, although the ecologies of the five species differ, the pathways of extinction are similar when assessed at the national level. The probability of local extinction of sawfish populations was higher in nations with low habitat availability and quality (Fig. 3, A to E), high fishing pressure (Fig. 3, F to J), and low management capacity (Fig. 3, K to M). Although all 13 indices together best explain spatial patterns of extinction in sawfishes, the three indices with the highest variable importance values were shelf area (a measure of habitat availability; Fig. 3A; median variable importance = 25.0%, range = 22.8 to 28.6%), mangrove area (a proxy for habitat quality; Fig.
3B; 14.8%, 13.0 to 17.0%), and gear-specific landings, measured as the total tonnage of all fishes caught using gears that have high sawfish catchability (a direct measure of fishing pressure; see Table 1 and Fig. 3F; 14.5%, 13.0 to 17.3%).

The prediction that localized extinction occurs through range contractions and changes in the dynamic geography relationships among a species' populations holds true (18). The probability of occupancy by sawfishes increased with habitat availability (shelf area); however, this relationship was strongly mediated by fishing pressure (gear-specific landings; Fig. 4, A and B) and weakly mediated by habitat quality (mangrove area; Fig. 4B and fig. S4). As fishing pressure increased, the probability of occupancy for a given habitat size decreased. This is particularly apparent when comparing the median habitat required for 5% occupancy across levels of fishing pressure (Fig. 4C). When fishing pressure (i.e., gear-specific landings) is set to zero, the median habitat required to ensure 5% occupancy of sawfish populations is 110 km²; however, the median shelf area required increases to 493 km² when fishing pressure is low, to 1410 km² when fishing pressure is moderate, to 7332 km² when fishing pressure is high, and to 26,903 km² when fishing pressure is maximum (Fig.
5C). This latter amount is roughly equivalent to the shelf area of Venezuela or Oman. Although a similar pattern exists for mangrove area, where decreasing the total area of mangroves increases the habitat required to achieve 5% occupancy, this pattern is much weaker than that of fishing pressure (fig. S4). Dynamic geography theory predicts that species extinctions are most likely to occur at range edges, where immigration is likely to be lower, which has been borne out by the well-documented disappearance of sawfishes from the range edges of the United States, South America, South Africa, eastern Australia, and the Mediterranean Sea (24, 31). The listed threats from the IUCN Red List assessments highlight that the major drivers endangering sawfishes are direct biological resource use [i.e., fishing (26)] and habitat loss (e.g., due to residential and commercial development, natural system modifications, etc.; fig. S1). Consequently, although local extinction of sawfishes can be attributed to the combination of ecological carrying capacity, fishing pressure, and management capacity, extinction is ultimately driven by overfishing and habitat loss.

To illustrate the conservation potential of reducing fishing mortality and increasing habitat quality, we tested two hypothetical scenarios for the probability of extinction in sawfishes: eliminating all forms of sawfish fishing pressure (i.e., setting gear-specific landings, marine protein consumption, chondrichthyan landings, and fishing effort to zero) or increasing habitat quality by 100% (i.e., doubling the mangrove area). In nations where extinction probability is high, eliminating sawfish fishing mortality yields the greatest reduction in extinction risk. For example, eliminating all fishing pressures in Venezuela is predicted to result in a fourfold reduction in the probability of extinction in sawfishes, from 37.5 to 13.4% (Fig.
5B). Similarly, doubling the mangrove forest area in certain nations, especially those where restoration measures are already in place [e.g., Vietnam (32)], is predicted to also reduce extinction risk in sawfishes (Fig. 5C). Globally, reducing all fishing pressure to zero would decrease the probability of extinction of sawfishes by 20.7%, whereas doubling the mangrove area would decrease global extinction risk by 10.1% (Fig. 5, B and C). Although minimizing sawfish fishing mortality or increasing habitat quality in most nations can yield a large decline in extinction risk, the magnitude of this effect is not uniform across the globe. In many nations, even eliminating sawfish fishing pressure completely at this point may still be insufficient, leaving a relatively high probability of extinction compared to other nations (e.g., the probability of extinction in the Dominican Republic is only reduced from 76.3 to 62.8%; Fig. 5B). As such, targeted conservation efforts tailored to each nation's unique combination of fishing pressure, management capacity, and ecological carrying capacity are required to minimize the probability of extinction of sawfishes.

Despite several international treaty mandates, most of the nations where sawfish presence is uncertain and extinction probability is very low have yet to establish legal protections for sawfishes (e.g., Madagascar, Colombia, and Panama). It is important to stress that sawfish protections are still urgently warranted in nations with low predicted probabilities of extinction; abundance in these nations is still relatively low and likely declining as a myriad of threats remain. The actual status of sawfishes in all nations lacking adequate protection may be much worse than predicted. Conservation action should be informed by the long-term knowledge of experts with insight into relevant local conditions that we have not accounted for (e.g., the cultural importance of sawfishes).

Our analysis can guide sawfish conservation efforts around the world, including targeted assistance, particularly in nations with high likelihoods of sawfish presence and relatively high management capacity. We recommend the following eight nations, which have very low predicted extinction probabilities (median extinction probability <20.0%) but currently Presence Uncertain status(es), as priorities for initiating or continuing specialized surveys to determine sawfish status and for enacting protections: Cuba (median extinction probability = 9.4%), Tanzania (12.2%), Colombia (12.6%), Madagascar (13.3%), Panama (15.5%), Brazil (18.0%), Mexico (18.6%), and Sri Lanka (18.6%) (Fig.
5A and table S2). Achieving prescribed sawfish conservation policies in these eight nations, added to the 38 nations where the presence of sawfishes has been confirmed, would amount to protection across 71.5% of their historical global distribution (Fig. 5).

Currently, Australia and the United States can be considered "lifeboat" nations: sawfishes are relatively well protected and are still present in these nations. Largetooth Sawfish are locally extinct in the United States, but the Smalltooth Sawfish population is considered to be increasing, owing to strict prohibitions, public outreach, habitat protection, and coastal gillnet bans (33, 34). Removal or relaxation of any key safeguards would pose an immediate threat to this lifeboat population. Australia's protections have ensured that the four Indo-West Pacific species persist, although further mitigation is required to reduce ongoing bycatch in commercial trawl and gillnet fisheries. Adjacent to both lifeboat nations are "beacon of hope" nations where sawfishes are present but remain largely unprotected (i.e., the Bahamas and Papua New Guinea). In these two nations, scientists and conservationists are working to understand the viability of the species and to secure protections.
There are other beacon-of-hope nations with varying geopolitical and macroeconomic barriers to conservation. Although a number of nations have implemented legal protections for sawfishes (35), there is a mismatch between conservation implementation and probability of extinction. For example, despite the adoption of national sawfish protections in South Africa in 1997, it cannot currently act as a beacon-of-hope nation because sawfishes are considered locally extinct there (35). Conversely, the combination of fishing pressure, management capacity, and ecological carrying capacity in Cuba results in a very low predicted probability of extinction of sawfishes (9.4%). Given its relatively strong capacity to implement and enforce marine conservation actions (36), Cuba has the potential to transform from a beacon-of-hope to a lifeboat nation. To do so, sawfish-specific legal protections would need to be implemented, strictly enforced, and complemented by educational programs. Other beacon-of-hope nations include Brazil and several nations bordering the Red Sea and the Arabian/Persian Gulf. Sawfish recovery ideally warrants strict, species-specific prohibitions (on killing, retention, and trade), complementary educational and enforcement programs, bycatch mitigation measures, and habitat conservation, supported by strategic research. Ongoing public, political, and financial support, as well as, in many cases, capacity building, bolster the effectiveness of these measures (23, 28). Such programs can also benefit similar species of threatened elasmobranchs, particularly wedgefishes and giant guitarfishes (37).
Here, we have presented the first well-documented near-extinction of an iconic family of marine fishes due to overfishing and habitat limitations. Our approach offers an opportunity to improve the detection of the disappearance of wide-ranging species, attribute causal factors, and identify the relative benefits of different conservation solutions. There are key, yet fleeting, opportunities to prevent further nation-scale extinctions and reverse population declines through immediate and stringent conservation action in remaining range states. Without such action, the repeated losses of populations of these extraordinary species are likely to serve as stepping stones toward the first global extinction of a marine fish species. Our space-for-time approach offers the capability to track spatial declines and probabilities of extinction of widespread, rare species as well as to identify threatening processes and priority nations for action.

MATERIALS AND METHODS

Geographic distribution maps
The IUCN SSC SSG convened 26 experts to reassess the geographic status of sawfishes, building upon a review of the recently published, gray, and online literature and email communication with 174 members of the IUCN SSC SSG network (38). We reviewed all sawfish-related activities, projects, and literature between 2014 and 2019, and these activities were classified as (i) local ecological knowledge surveys (interviews) of primarily subsistence fishing communities, (ii) ecological research (field-based ecology, habitat use, movements, life history, and molecular ecology), (iii) fisheries-related research (including examination of bycatch and analyses of bather protection program data), (iv) distribution mapping (synthesizing field, encounter, and/or museum records; environmental DNA surveys aiming to map occurrence), (v) historical ecology, and (vi) taxonomy.
We modeled our spatial analyses of geographic distribution following the methods of a previous status report on sawfishes [i.e., 100 m maximum bathymetry and exclusion of distant offshore islands; see (23) for further details]. For each species and historical range state (see Fig. 2B), nations were scored as either Extant (1), Possibly Extinct (0), or Presence Uncertain using the IUCN spatial presence codes (39). Note that the use of Possibly Extant was ignored, as this dataset is an update of a previous status report on sawfishes, where Possibly Extant was only used for P. clavata for uncertain point estimates within the Australian Coral Sea (23). The use of expert opinion, extensive search protocols, and bathymetry considerations has vastly improved the resolution of sawfish occurrence compared to other model-based distribution maps [e.g., AquaMaps are created using distribution models with relative probabilities of occurrence based on the species' environmental envelopes (40)]. As such, this new analysis yields a comprehensive assessment of sawfish populations through local status surveys throughout much of their historical range.

Data collation
We focused our analysis at the national level (including countries and their territories where data were available) owing to the resolution of our data, but also because species protection typically occurs at the national or subnational scale. Note that we could not use traditional species distribution models because of the absence of point records for sawfishes outside the United States and Australia. Furthermore, because of the scale of our analysis, we could not reliably use indicators of successful management protocols that achieve fisheries management objectives, as they are limited in the target fisheries and the spatial scale considered [e.g., Melnychuk et al.
(41) only considered 10 directed fisheries for 28 countries (42)]. To both reconcile these spatial gaps and build upon previous climate and conservation vulnerability work [e.g., (43)], we focused on general indicators of fishing pressures specific to sawfishes [table S1; (42, 44-46)]. Ideally, we would include sawfish-specific data (e.g., stock assessments, total sawfish landings, etc.); however, owing to the scarcity and high economic value of sawfishes, accurate data do not exist at a global scale. Attempts at establishing and enforcing strict national protection are lagging or otherwise inadequate for sawfishes (35) and only apply to a subset of nations; thus, we omitted direct measurements of conservation action from our analyses and focused on general indicators of management capacity (Table 1). We excluded nations that either (i) never harbored sawfish due to unfavorable physical conditions [i.e., Namibia was excluded from all sawfish distribution maps because it is predominantly an upwelling ecosystem and has a steep bathymetric shelf; (23)] or (ii) were otherwise lacking adequate governmental or fisheries data (e.g., Western Sahara and a number of European territories in the Caribbean).
We modeled the occurrence of sawfishes using (i) five indices of indirect and direct fishing pressure, (ii) three indicators of the capacity at which a nation can implement effective fisheries management processes, and (iii) five indicators of the ecological carrying capacity of the available habitat (Table 1). First, we separated fishing pressures into indirect and direct fishing pressures, where we used coastal human population size (https://sedac.ciesin.columbia.edu/data/set/nagdc-population-landscape-climate-estimates-v3) and marine protein consumption (47) as indirect measures of fishing pressure (42). Because sawfishes have high catchability in specific fishing gears, we used total landings from specific gear types (Food and Agriculture Organization gear types: bottom trawls, otter trawls, shrimp trawls, gillnets, small-scale gillnets, small-scale longlines, small-scale trammel nets, and trammel nets), the total chondrichthyan landings, and the fishing effort from subsistence and artisanal sectors as direct fishing pressures (47, 48). Second, we used three measures of governance and literacy to reflect the capacity of management to undertake conservation: World Governance Index (WGI; https://databank.worldbank.org/source/worldwide-governance-indicators), Human Development Index (HDI; http://hdr.undp.org/en/content/human-development-index-hdi), and Gross Domestic Product (GDP in USD; https://data.worldbank.org/indicator/NY.GDP.MKTP.CD). Last, to characterize ecological carrying capacity, we used the continental shelf area (km²) as a measurement of the total habitat available (see Supplementary text). We only measured shelf area as the area found within the geographic distribution maps for each species clipped to the maximum depth bathymetry for each species [see Supplementary text; (23, 26)]. We also used marine primary productivity [mg m⁻³; (49)], mangrove cover [km²; (50)], total freshwater estuarine discharge rate [m³ s⁻¹; (51)], and sea surface
temperature [°C; (52)] to characterize ecological carrying capacity. We selected these indicators because of the habitat preferences of sawfishes for shallow, inshore waters in tropical regions [Table 1; (53)].

Analysis
We used BRTs to model the occurrence of sawfishes and to predict the probability of extinction in the Presence Uncertain nations (n = 42; table S2). We also predicted the probability of extinction under two hypothetical scenarios: (i) if all fishing pressure were zero [i.e., marine protein consumption, chondrichthyan landings, fishing effort, and gear-specific landings (not coastal population) all set to zero] or (ii) if mangrove area increased by 100% in Presence Uncertain nations (Fig. 5, B and C). Using the geographic distribution maps for each species (excluding the Presence Uncertain nations), we used a Bernoulli loss function to predict the probability of extinction as the difference between one and the probability of occurrence [P(extinct) = 1 − P(occurrence)]. BRTs are a powerful statistical method with high predictive accuracy because they combine many decision trees with a boosting algorithm and are not restricted by nonlinear relationships, complex interactions, or missing data (54). To improve model performance, we ln-transformed chondrichthyan landings, gear-specific landings, coastal human population, marine protein consumption, GDP, fishing effort, continental shelf area, primary productivity, mangrove cover, and total estuarine discharge rate; species was coded as a dummy variable. Although BRTs can handle collinear variables, removing highly collinear variables may sometimes improve model fit (54). We initially considered the total landings of marine fisheries production and the catches of illegal, unreported, and unregulated fishing as indicators of fishing pressure, but removed them from further analyses due to their high collinearity with multiple variables; both were classified as indicators of fishing pressure
(|r| > 0.8; fig. S2). Although coastal human population size and GDP were also highly correlated (r = 0.84), we chose to keep both variables in the model because they do not belong to the same index group (classified under fishing pressure and management capacity, respectively; Table 1 and fig. S2).

We randomly separated our data into a training set (80%; n = 102 species-nation combinations) and a test set (20%; n = 25). We selected hyperparameters by varying the learning rate and tree complexity and assessed model fit based on minimizing the root mean squared error. Our final model used a learning rate of 0.005, a tree complexity of 10, and a bag fraction of 0.5 and was optimized using 10-fold cross-validation. Because of the stochastic building process of BRTs, we used the median of the variable importance and partial dependence values of each variable from 1000 bootstrapped samples. Our final BRT model explained 67.6% (range = 49.0 to 81.5%) of the deviance in the training dataset and 27.1% (range = 15.6 to 31.0%) of the deviance in the test set across 1000 bootstrapped samples. Despite the low percentage of deviance explained in the test set, which could be due to variability arising from the small sample size (n = 25), the model had high predictive accuracy: our bootstrapped cross-validated AUC (area under the curve) of the receiver operating characteristic curve was 0.83 (range = 0.73 to 0.88), and our evaluation AUC on the test set was 0.84 (range = 0.81 to 0.88). We performed all BRT analyses using the gbm v.2.1.4 (55) and dismo v.1.1-4 (56) packages in R v.3.5.2 (57).
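The BRT workflow described above (80/20 split, Bernoulli loss, P(extinct) = 1 − P(occurrence), AUC evaluation) can be sketched in miniature. The paper's analysis used R's gbm and dismo packages; the sketch below substitutes scikit-learn's GradientBoostingClassifier as a rough stand-in, on entirely synthetic data, so the predictor names, sample sizes, and effect sizes are illustrative assumptions, not the paper's data. Note that sklearn's max_depth is only an approximate analog of gbm's tree complexity.

```python
# Rough stand-in for the paper's R gbm/dismo BRT workflow, using scikit-learn
# on synthetic data. Hyperparameters echo those reported (learning rate 0.005,
# bag fraction 0.5); everything else here is an illustrative assumption.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 127  # species-nation combinations, as in the paper
X = rng.normal(size=(n, 5))                     # e.g., ln shelf area, ln landings, ...
logit = 2.0 * X[:, 0] - 1.0 * X[:, 1]           # assumed true effects
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = occupied

# 80/20 train/test split, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

brt = GradientBoostingClassifier(
    learning_rate=0.005,  # reported learning rate
    subsample=0.5,        # reported bag fraction
    max_depth=10,         # approximate analog of tree complexity = 10
    n_estimators=2000,
    random_state=0,
)
brt.fit(X_tr, y_tr)

p_occurrence = brt.predict_proba(X_te)[:, 1]
p_extinct = 1.0 - p_occurrence      # P(extinct) = 1 - P(occurrence)
auc = roc_auc_score(y_te, p_occurrence)
print(brt.feature_importances_)     # analog of BRT variable importance
```

In the paper this core fit is wrapped in 1000 bootstrap resamples, with the median variable importance and partial dependence reported across resamples.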
To test the dynamic geography of sawfishes (14-19), we used a generalized linear mixed model in a Bayesian framework with a logit link and the default non/weakly informative priors in the brms package (58). We modeled occupancy as a function of habitat availability (ln shelf area), habitat quality (ln mangrove area), and fishing pressure (ln gear-specific landings) and used nation as a grouping factor by specifying random intercepts. For this model, we ran four Markov chain Monte Carlo chains simultaneously, each with 2000 iterations and 1000 warm-up iterations. We achieved convergence on all four chains (Rhat = 1.00 for all coefficient estimates). We ran all dynamic geography analyses in R v.3.5.2 (57), using Stan through the brms v.2.10.0 package (58).

Fig. 2. The historical presence, extinction, and uncertainty of the presence of sawfishes. (A) Global sawfish search effort, with each color representing a different activity type and the size of the point representing the number of activities in each nation, where the smallest point represents one activity and the largest point represents 14 activities. (B) The historical distribution of sawfish species richness across 90 nations. (C) The number of sawfish species extinct in each nation. (D) The number of sawfish species with Presence Uncertain status; no color means the presence status is known. For (B) to (D), statuses are colored in the exclusive economic zone (EEZ) of each nation's coastal waters, and greater species richness is denoted by warmer colors.

Fig. 4.
Dynamic geography of sawfish populations. The effects of increasing fishing pressure (gear-specific landings), habitat quality (mangrove area), and habitat availability (shelf area) on occupancy in sawfishes. (A) Logistic regression where the thin curves show draws from the posterior distribution and the thick colored curves are the mean posterior estimates. Curves are colored and predicted by levels of fishing pressure (with mangrove area held at its mean): zero fishing shown in the lightest orange, low fishing in orange, moderate fishing in red, high fishing in dark red, and maximum fishing in darkest red. The thick gray line shows the intersection where 5% occupancy occurs. Light gray rugs show the data. (B) Posterior distributions of the coefficient estimates from the logistic regression for shelf area (blue), mangrove area (blue), and fishing pressure (i.e., gear-specific landings; red), where the majority of the posterior is darker. Shelf area had a strong positive effect on the occupancy of sawfishes [mean estimate = 4.08, 95% credible interval (CI) = 1.52 to 8.05; 100% of the posterior > 0], whereas mangrove area had a small positive effect (0.48, 95% CI = −0.99 to 2.54; 72.1% of the posterior distribution > 0), and fishing pressure had a strong negative effect on the occupancy of sawfishes (−1.17, 95% CI = −3.05 to 0.03; 97.2% of the posterior distribution < 0). (C) Estimated habitat required for 5% occupancy drawn from the posterior distribution across different levels of fishing. Violin plots and points show the spread of the predicted draws, and thick lines show the median value. Points have been jittered for ease of interpretation.

Fig. 5.
Sawfish extinction risk and national conservation potential. (A) Predicted probability of extinction from 1000 bootstrapped BRTs combined with current nations of occurrence, represented in the EEZ. (B and C) Changes in predicted probability of extinction (current risk as dark colored points, predicted risk as transparent points) for Presence Uncertain nations where (B) fishing mortality (except coastal human population) is zero and (C) mangrove area is doubled. Dark blue, nations where sawfishes are present or have the lowest probability of extinction (<0.2); light blue, low probability of extinction (0.2 to 0.4); lightest blue, moderate probability of extinction (0.4 to 0.6); red, high probability of extinction (0.6 to 0.8); dark red, extremely high probability of extinction (0.8 to 1.0) or already recorded as locally extinct.

Table 1. Variables considered in the BRT model. Only variables included in the model are shown.
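The "habitat required for 5% occupancy" estimates in Fig. 4C follow from inverting the occupancy logistic regression. A minimal sketch of that inversion is below: the slope coefficients are the posterior means reported in the Fig. 4 caption (shelf 4.08, mangrove 0.48, fishing −1.17, all on ln scales), but the intercept and the covariate values plugged in are hypothetical, since they are not reported, so the resulting areas are not the paper's numbers and only the direction of the effect is meaningful.

```python
# Sketch: invert logit(p) = B0 + B_SHELF*ln(shelf) + B_MANGROVE*ln(mangrove)
# + B_FISHING*ln(fishing) to solve for the shelf area giving occupancy p.
# Slopes are reported posterior means; B0 and covariate values are HYPOTHETICAL.
import math

B0 = -10.0                                           # hypothetical intercept
B_SHELF, B_MANGROVE, B_FISHING = 4.08, 0.48, -1.17   # reported posterior means

def shelf_area_for_occupancy(p, ln_mangrove, ln_fishing):
    """Shelf area (km^2) at which occupancy probability equals p."""
    logit_p = math.log(p / (1 - p))
    ln_shelf = (logit_p - B0 - B_MANGROVE * ln_mangrove
                - B_FISHING * ln_fishing) / B_SHELF
    return math.exp(ln_shelf)

# Holding mangrove at an assumed mean, more fishing -> more shelf area needed
# to sustain 5% occupancy, matching the pattern in Fig. 4C:
low = shelf_area_for_occupancy(0.05, ln_mangrove=5.0, ln_fishing=2.0)
high = shelf_area_for_occupancy(0.05, ln_mangrove=5.0, ln_fishing=8.0)
assert high > low  # the negative fishing coefficient raises the threshold
```

Because the fishing coefficient is negative, each increase in ln(gear-specific landings) must be offset by more habitat to hold occupancy at 5%, which is why the median shelf area required grows from hundreds to tens of thousands of km² across the fishing levels in the Results.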
Screening of Key Proteins Affecting Floral Initiation of Saffron Under Cold Stress Using iTRAQ-Based Proteomics

Background: Saffron crocus (Crocus sativus) is an expensive and valuable species that has preventive and curative effects. This study aimed to screen the key proteins affecting the floral initiation of saffron under cold stress, with the goal of increasing yield by regulating the temperature.

Results: Protein expression profiles in flowering and non-flowering saffron buds were established using isobaric tags for relative and absolute quantitation (iTRAQ). A total of 5,624 proteins were identified, and 201 differentially abundant protein species (DAPs) were further obtained between the flowering and non-flowering groups. The most important functions of the upregulated DAPs were "sucrose metabolic process," "lipid transport," "glutathione metabolic process," and "gene silencing by RNA." Downregulated DAPs were significantly enriched in "starch biosynthetic process" and several oxidative stress response pathways. Three new flower-related proteins, CsFLK, CseIF4a, and CsHUA1, were identified in this study. The following eight key genes were validated by real-time qPCR in flowering and non-flowering top buds from five different growth phases: the floral induction- and floral organ development-related genes CsFLK, CseIF4A, CsHUA1, and CsGSTU7; the sucrose synthase activity-related genes CsSUS1 and CsSUS2; and the starch synthase activity-related genes CsGBSS1 and CsPU1. These findings demonstrate the important roles played by sucrose/starch biosynthesis pathways in floral development at the mRNA level. During normal floral organ development, the sucrose contents in the top buds of saffron increased, and the starch contents decreased. In contrast, non-flowering buds showed significantly decreased sucrose contents under cold stress and no significant changes in starch contents compared with those in the dormancy stage.
Conclusion: In this report, the protein profiles of saffron under cold stress and a normal environment were revealed for the first time by iTRAQ. A possible "reactive oxygen species–antioxidant system–starch/sugar interconversion" flowering pathway is proposed to explain why saffron does not bloom after low-temperature treatment.

INTRODUCTION

Crocus sativus L., commonly known as saffron, is a flowering plant in the family Iridaceae that consists of a bulb, white roots, dark green leaves, and flowers with six petals and a sole stigma of three threads with an intense and unique red color (Rubert et al., 2016). It is a male-sterile triploid lineage that has been propagated vegetatively since its origin and has a relatively stable genotype (Busconi et al., 2015). Iran is one of the most important saffron-producing countries in the world, and saffron is also widely cultivated in Europe, Asia, Mediterranean countries (especially the area of Kozani in Greece), and India (Negbi, 1999; Bathaie and Mousavi, 2010). In the last 30 years, saffron has been successfully introduced and cultivated in some areas of China, such as Shanghai, Zhejiang, and Jiangsu (Liu et al., 2017). Saffron is widely used as a natural dietary spice as well as a popular traditional medicine. Several studies have described the health benefits of this plant, such as anticancer activity, antidepressant activity, and cytotoxic effects (Saeed et al., 2010; Kashani et al., 2018; Moradzadeh et al., 2018). Saffron is cultivated for its red stigmatic lobes and blooms only once a year. It is known as the most expensive spice in the world, called "red gold," and the maximum estimated world production does not exceed 300-400 tons (Kyriakoudi and Tsimidou, 2018). Therefore, increasing the flower number can be an effective way of producing more saffron to meet the ever-increasing market demand (Agayev et al., 2009; Khilare et al., 2019). For many kinds of flowering bulbs, such as saffron, temperature is one of the most critical factors affecting the formation and development of floral organs.
The optimal temperature range for flower initiation in saffron is 23 to 27 °C, and during the stage of flower primordium formation, flower bud formation (but not leaf bud formation) is inhibited when the ambient temperature falls below 16 °C. The molecular mechanisms by which temperature controls saffron flower formation and development can be inferred from model plants such as Arabidopsis, but validation and further study are still needed. Understanding how saffron integrates environmental temperature signals and endogenous signals to ensure the transition from vegetative growth to flowering is of great significance for controlling the number of flowers by regulating the temperature in production practice. Floral initiation is regulated by a complex genetic network that monitors several environmental signals, which has driven the exploration of genes and proteins that regulate flowering in plants. Light and temperature are important environmental factors affecting the flowering process. Several well-known flowering genes have been reported to respond to the induction of environmental factors and drive flower development. For example, the FLOWERING LOCUS T (FT)-like clade induces floral initiation in numerous species by forming regulatory protein complexes with FD-like bZIP transcription factors in the photoperiod pathway (Beinecke et al., 2018). Similarly, the day length-dependent regulation of CONSTANS (CO) protein stability plays a central role in the precise control of flowering time, and it is regulated by GIGANTEA through altering the interaction between F-BOX 1 and ZEITLUPE (Hwang et al., 2019). The vernalization process is closely related to temperature, and the expression of many genes and proteins involved in it is regulated by temperature.
For example, the FLOWERING LOCUS C (FLC) family, which acts as the key regulator of the vernalization process in Arabidopsis, has been shown to cooperate with the VERNALIZATION INSENSITIVE 3 (VIN3) family over the course of evolution to ensure a proper vernalization response through epigenetic changes (Kim and Sung, 2013). Studies carried out so far in model plants, including Arabidopsis and rice, have yielded extensive knowledge of the flower-related genes and proteins influenced by temperature, and they can serve as a foundation for studying the molecular mechanism of temperature-regulated flowering in non-model plants. For saffron, the development of flower buds is influenced greatly by temperature, although the possible signaling pathways remain largely unknown. Recently, Haghighi et al. (2020) isolated and characterized a regulator gene involved in saffron floral induction, the SHORT VEGETATIVE PHASE (SVP) gene, which may repress floral initiation genes in the temperature response pathway. In our previous studies, full-length transcriptomes of flowering and non-flowering saffron were obtained using a combined next-generation sequencing short-read and single-molecule real-time (SMRT) long-read sequencing approach. Although the genome of the Iridaceae family has not been published, several flower-related genes of saffron were identified by transcriptome analysis (Qian et al., 2019). Later, high-quality SMRT sequencing datasets of the full transcriptome of saffron were also presented by Yue et al. (2020). However, the flower development of saffron and the molecular mechanism underlying the flowering response to low temperature at the protein level have not been reported. In addition, correlations between transcriptomes and proteomes are only in the range of 0.3-0.5 (Voelckel et al., 2010). Therefore, the dynamics of the proteome need further exploration to better understand temperature-responsive flowering in saffron.
Isobaric tags for relative or absolute quantitation (iTRAQ) is one of the most robust and easy-to-use techniques and can be applied in quantitative proteomics of many species. Mass spectrometry allows a quantitative comparison of protein abundance by measuring the peak intensities of reporter ions released from iTRAQ-tagged peptides by fragmentation during tandem mass spectrometry (MS/MS) (Karp et al., 2010). It has become a powerful tool in the field of quantitative proteomics and can support the study of comprehensive protein expression profiles in specific biological responses. For example, Dong et al. (2018) showed that 4-coumarate-CoA ligase has a major effect in repressing flavonoid metabolism in chlorotic "Huangjinya" tea leaves by integrating iTRAQ-based quantitative proteomics with phenotypic, biochemical, and transcriptomic data. Here, to understand the underlying mechanism of flowering modulated by temperature, the differentially abundant proteins between flowering plants under normal-temperature treatment and non-flowering plants under cold treatment were investigated by iTRAQ-based proteomics and then validated by real-time qPCR. Because of the absence of genomic resources for saffron, the coding sequences were predicted from the full-length transcriptome sequencing obtained in our laboratory; hence, the accuracy of alignment matches between peptides and assembled transcripts could be improved.

Plant Materials
Saffron plants from the same clone line were cultivated at a research farm at South Tai Lake Agricultural Park, Huzhou (longitude 120.6° E, latitude 30.52° N, elevation 0 m), using a two-stage cultivation method: corms were planted in soil to allow them to grow outdoors, and they were then cultivated indoors without soil (Han et al., 2019).
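The reporter-ion quantitation described above can be illustrated with a minimal sketch: a protein's between-group ratio is taken here as the median of its peptides' reporter-intensity ratios. The channel numbers and intensities are hypothetical, and real iTRAQ software applies additional corrections (isotope impurity, normalization):

```python
from statistics import median

def protein_ratio(peptides, treat_channel, control_channel):
    """Summarize a protein's abundance ratio (treatment/control) as the
    median of its peptides' reporter-ion intensity ratios."""
    ratios = [p[treat_channel] / p[control_channel]
              for p in peptides if p[control_channel] > 0]
    return median(ratios)

# Hypothetical reporter intensities for one protein's three peptides,
# keyed by iTRAQ channel (e.g. 113 = flowering, 117 = non-flowering).
peptides = [
    {113: 1500.0, 117: 1000.0},
    {113: 1320.0, 117: 1100.0},
    {113: 1800.0, 117: 1200.0},
]
print(protein_ratio(peptides, 113, 117))  # 1.5
```

The median is preferred over the mean here because a single poorly fragmented peptide can otherwise distort the protein-level ratio.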
In May 2018, dormant corms (≈25 g) with the same germplasm source and planting method were excavated from the field and divided into two groups, one cultivated at room temperature (20-25 °C, flowering phenotype) and the other exposed to low temperature (16 °C, non-flowering phenotype) for 30 days. All samples were exposed to the same humidity and light conditions. When the top bud length reached 7 mm and floral organs and leaf organs could be clearly observed by stereomicroscopy, the top buds were collected individually from the two groups (Figure 1). Three replicates were prepared in each group. All samples were frozen in liquid nitrogen and stored at −80 °C for iTRAQ and real-time qPCR analysis.

Protein Extraction
Plant tissues (1 to 2 g) with 10% polyvinylpolypyrrolidone were ground into powder in liquid nitrogen and then sonicated on ice for 5 min in five volumes of lysis buffer 3 [8 M urea and 40 mM Tris-HCl containing 1 mM phenylmethylsulfonyl fluoride (PMSF), 2 mM ethylenediaminetetraacetic acid (EDTA), and 10 mM dithiothreitol (DTT), pH 8.5]. After centrifugation at 25,000 × g at 4 °C for 20 min, the supernatant was treated by adding five volumes of 10% TCA/acetone with 10 mM DTT to precipitate proteins at −20 °C for 2 h. The precipitation step was repeated with acetone alone until there was no color in the supernatant. The proteins were air-dried and resuspended in lysis buffer 3 (8 M urea and 40 mM Tris-HCl containing 10 mM DTT, 1 mM PMSF, and 2 mM EDTA, pH 8.5). Ultrasonication on ice was performed for 5 min (2/3 s) to improve protein dissolution. After centrifugation, the supernatant was incubated at 56 °C for 1 h for reduction and then alkylated with 55 mM iodoacetamide in the dark at room temperature for 45 min. Five volumes of acetone were added to the samples to precipitate proteins at −20 °C overnight. Lysis buffer 3 was used to dissolve the proteins with the help of sonication on ice for 5 min.
Protein Digestion
The protein solution (100 µg) in 8 M urea was diluted four times with 100 mM tetraethylammonium bromide (TEAB). Trypsin Gold (Promega, Madison, WI, United States) was used to digest the proteins at a protein/trypsin ratio of 40:1 at 37 °C overnight. After trypsin digestion, peptides were desalted with a Strata X C18 column (Phenomenex, Torrance, CA, United States) and vacuum-dried according to the manufacturer's protocol.

Peptide Labeling
The peptides were dissolved in 30 µl of 0.5 M TEAB with vortexing. After the iTRAQ labeling reagents had recovered to ambient temperature, they were transferred and combined with the proper samples. Peptide labeling was performed with an iTRAQ Reagent 8-plex Kit according to the manufacturer's protocol. The peptides labeled with different reagents were combined, desalted with a Strata X C18 column (Phenomenex, Torrance, CA, United States), and vacuum-dried according to the manufacturer's protocol.

HPLC Fractionation and LC-MS/MS Analysis
The peptides were separated on a Shimadzu LC-20AB HPLC pump system coupled with a high-pH RP column. The peptides were reconstituted to 2 ml with buffer A (5% ACN, 95% H2O, adjusted to pH 9.8 with ammonia) and loaded onto a column containing 5-µm particles (Phenomenex, Torrance, CA, United States). The peptides were separated at a flow rate of 1 ml/min with a gradient of 5% buffer B (5% H2O, 95% ACN, adjusted to pH 9.8 with ammonia) for 10 min, 5-35% buffer B for 40 min, and 35-95% buffer B for 1 min. The system was then maintained at 95% buffer B for 3 min and decreased to 5% within 1 min before equilibration with 5% buffer B for 10 min. Elution was monitored by measuring absorbance at 214 nm, and fractions were collected every 1 min. The eluted peptides were pooled into 20 fractions and vacuum-dried. Each fraction was resuspended in buffer A (2% ACN, 0.1% FA) and centrifuged at 20,000 × g for 10 min.
The supernatant was loaded onto a Thermo Scientific™ UltiMate™ 3000 UHPLC system equipped with a trap column and an analytical column. The samples were loaded onto the trap column at 5 µl/min for 8 min and then eluted into a homemade nanocapillary C18 column (ID 75 µm × 25 cm, 3-µm particles) at a flow rate of 300 nl/min. The proportion of buffer B (98% ACN, 0.1% FA) was increased from 5 to 25% in 40 min and then to 35% in 5 min, followed by a 2-min linear gradient to 80%, maintenance at 80% B for 2 min, a return to 5% over 1 min, and equilibration for 6 min. The peptides separated by nanoHPLC were subjected to tandem mass spectrometry on a Q Exactive HF-X instrument (Thermo Fisher Scientific, San Jose, CA, United States) for data-dependent acquisition detection by nanoelectrospray ionization. The parameters for MS analysis were as follows: electrospray voltage, 2.0 kV; precursor scan range, 350-1,500 m/z at a resolution of 60,000 in Orbitrap; MS/MS fragment scan range, >100 m/z at a resolution of 15,000 in HCD mode; normalized collision energy setting, 30%; dynamic exclusion time, 30 s; automatic gain control for the full MS target and the MS2 target, 3e6 and 1e5, respectively; and the number of MS/MS scans following one MS scan, the 20 most abundant precursor ions above a threshold ion count of 10,000.

Bioinformatic Pipeline
The raw MS/MS data were converted into MGF format by the corresponding tool, and the exported MGF files were searched on a local Mascot server against the database described above. In addition, quality control was performed to determine whether a reanalysis step was needed. IQuant automated software (Wen et al., 2014) was applied for the quantification of proteins. All proteins with a false discovery rate (FDR) of less than 1% proceeded to downstream analysis, including Gene Ontology (GO), euKaryotic Orthologous Groups (KOG), and Kyoto Encyclopedia of Genes and Genomes (KEGG) annotation.
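The 1% FDR cut-off applied before downstream analysis can be illustrated with a Benjamini-Hochberg-style adjustment over per-protein p-values (a generic sketch with made-up p-values; the procedure implemented in the actual search and quantification software may differ):

```python
def benjamini_hochberg(pvalues):
    """Return Benjamini-Hochberg adjusted p-values (q-values); entries
    with q < 0.01 would pass a 1% FDR cut-off."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    q = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for offset, i in enumerate(reversed(order)):
        rank = n - offset  # 1-based rank of p-value i
        running_min = min(running_min, pvalues[i] * n / rank)
        q[i] = running_min
    return q

pvals = [0.001, 0.008, 0.039, 0.041, 0.20]
print([round(v, 3) for v in benjamini_hochberg(pvals)])  # [0.005, 0.02, 0.051, 0.051, 0.2]
```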
The differentially abundant protein species (DAPs) mentioned below were identified based on the fold change ratio and Q-value. DAPs with a fold change ratio > 1.2 and Q-value < 0.05 were defined as up-DAPs, while those with a fold change ratio < 0.83 and Q-value < 0.05 were defined as down-DAPs. We also performed deeper analyses based on the DAPs, including GO enrichment analysis, KEGG pathway enrichment analysis, KOG functional annotation, cluster analysis (heat map), protein interaction analysis (STRING), and subcellular localization analysis (WoLF PSORT). Supplementary Figure 1 presents the workflow of the whole iTRAQ data processing.

Phylogenetic Analysis
To further understand the possible functions of CsFLK, CseIF4a, and CsHUA1, their peptide sequences were aligned with Web BLAST (blastx) against the NCBI database for characterization, and the matching protein sequences of other species were downloaded and screened for phylogenetic analysis. The amino acid sequences of CsFLK, CseIF4a, and CsHUA1 were aligned with the selected protein sequences using the multiple sequence alignment program ClustalX, and a phylogenetic tree was constructed with MEGA-X software using the neighbor-joining method with 1,000 bootstrap replicates.

Quantitative Real-Time Analysis
To complement the changes in abundance at the transcriptional level and validate the key flower proteins, we selected eight candidates between flowering and non-flowering samples cultivated at 20-25 and 16 °C. The top buds at 0.1, 0.2, 0.5, 0.7, and 1.1 cm in length from the flowering and non-flowering groups were used to determine the different temporal expression patterns of these genes by qRT-PCR. To determine the expression levels of the genes in different tissues, the leaves, corms, roots, pistils, petals, and stamens from flowering samples (top bud lengths of approximately 0.7 cm) were collected.
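The DAP thresholds defined at the start of this section (fold change > 1.2 or < 0.83, with Q-value < 0.05) amount to a simple two-sided filter; a minimal sketch with hypothetical values:

```python
def classify_dap(fold_change, q_value, up=1.2, down=0.83, q_cut=0.05):
    """Classify one protein using the paper's DAP criteria:
    up-DAP if fold change > 1.2 and Q < 0.05;
    down-DAP if fold change < 0.83 and Q < 0.05."""
    if q_value >= q_cut:
        return "not significant"
    if fold_change > up:
        return "up-DAP"
    if fold_change < down:
        return "down-DAP"
    return "not significant"

print(classify_dap(1.45, 0.01))   # up-DAP
print(classify_dap(0.60, 0.003))  # down-DAP
print(classify_dap(1.45, 0.20))   # not significant
```

Note that the window 0.83-1.2 is treated as unchanged even at small Q-values, which is why 201 DAPs emerge from 5,624 identified proteins.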
All samples were ground in liquid nitrogen, and total RNA was prepared using an RNeasy Plant Mini Kit. The PrimeScript II 1st Strand cDNA Synthesis Kit (TaKaRa, Japan) and SYBR Premix Ex Taq II (TaKaRa, Japan) were used for reverse transcription and qRT-PCR assays. Specific primers for the chosen genes were designed using Primer Premier 5.0 software (Supplementary Table 1). PCR products were verified by dissociation curves, and data were normalized to three endogenous reference genes to obtain Ct values. A larger Ct value indicates lower gene expression. Water was used as a negative and quality control, and each sample was measured in triplicate.

Measurement of ROS, Starch, and Sucrose Content
The reactive oxygen species (ROS), starch, and sucrose contents at two developmental stages were measured with a ROS ELISA kit (Meibiao Biological, Jiangsu, China), a starch content assay kit (Biolab, Beijing, China), and a sucrose content assay kit (Biolab, Beijing, China), respectively. The microtiter plate provided in the ELISA kit was pre-coated with an antibody specific to ROS. Standards and samples were added to the appropriate microtiter plate wells. Next, avidin conjugated to horseradish peroxidase was added to each microplate well and incubated. After the 3,3′,5,5′-tetramethylbenzidine substrate solution was added, the enzyme-substrate reaction was terminated by the addition of sulfuric acid solution, and the color change was measured spectrophotometrically (Bio-Rad, Hercules, CA, United States) at a wavelength of 450 nm. The soluble sugar and starch in the sample were separated with 80% ethanol, and the starch was then decomposed into glucose by acid. Finally, the starch content was determined by the anthrone colorimetric method. To determine the sucrose content, the reducing sugar in the sample was first destroyed by alkali, and the sucrose was then hydrolyzed to glucose and fructose under acidic conditions.
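Returning to the qRT-PCR assays above: Ct values normalized to reference genes are commonly converted into relative expression with the 2^-ΔΔCt (Livak) method, here averaging the three endogenous reference genes. This is a sketch with hypothetical Ct values, not necessarily the exact calculation used in the study:

```python
from statistics import mean

def relative_expression(ct_target, ct_refs, ct_target_cal, ct_refs_cal):
    """2^-ΔΔCt relative expression of a target gene versus a calibrator
    sample; a larger Ct means lower expression."""
    d_ct_sample = ct_target - mean(ct_refs)
    d_ct_calibrator = ct_target_cal - mean(ct_refs_cal)
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical Ct values: target gene vs. three reference genes in a
# flowering sample, against a non-flowering calibrator sample.
fold = relative_expression(22.0, [18.0, 18.5, 17.5], 25.0, [18.2, 18.0, 17.8])
print(round(fold, 2))  # 8.0
```

The inversion through 2^-ΔΔCt is what turns the "larger Ct = lower expression" relationship into an intuitive fold change.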
Fructose further reacted with resorcinol to form colored substances with a characteristic absorption peak at 480 nm. The absorbance values were measured with a Bio-Rad SmartSpec Plus spectrophotometer (Bio-Rad, Hercules, CA, United States). Three biological replicates were obtained for each stage.

Statistics
Student's t-test was used to assess statistical significance with the SPSS 22.0 software package (SPSS, Chicago, IL, United States), and p-values < 0.05 were considered significant.

GO, KEGG, and KOG Functional Classification of All Identified Proteins
Predicting the functions of the identified proteins is an important step in fully evaluating or exploiting these data. Here, the GO, KEGG, and KOG databases were used to annotate and classify all identified proteins. The results of the GO analysis revealed that only a few proteins were located in the extracellular region. In the biological process category, the identified proteins were enriched in "cellular component organization or biogenesis," "reproduction," "reproductive process," and "growth," suggesting that, at the stage of bud development, saffron was abundant in proteins involved in growth and development. In addition, the proteins were largely enriched in "response to stimulus," which demonstrated that these proteins could play a vital role in adaptation to harmful environments (Supplementary Figure 3A). In organisms, biological functions are based on a series of protein interactions. KEGG analysis is an effective method that can help to better understand protein biological functions. In the KEGG pathway analysis, a large number of the predicted proteins were involved in primary metabolism, such as "carbohydrate metabolism," "amino acid metabolism," and "lipid metabolism," which provide critical compounds needed for growth.
Furthermore, the proteins were enriched in "environmental adaptation," which suggested that various proteins function together in the plant's response to external stimuli (Supplementary Figure 3B). Eukaryotic Orthologous Groups were delineated by comparing protein sequences encoded in complete genomes, thus representing major phylogenetic lineages. Each KOG consists of individual orthologous proteins or orthologous sets of paralogs from at least three lineages. Orthologs typically have the same function, thus allowing the transfer of functional information from one member to an entire KOG. This relation automatically yields a number of functional predictions for poorly characterized genomes. KOGs therefore provide a framework for functional and evolutionary genome analysis. The KOG functional classification results demonstrated that proteins were strongly enriched in substance and energy metabolism pathways, such as "energy production and conversion," "carbohydrate transport and metabolism," "amino acid transport and metabolism," and "lipid transport and metabolism"; furthermore, many predicted proteins were associated with substance synthesis pathways, such as "translation, ribosomal structure, and biogenesis," "RNA processing and modification," and "posttranslational modification, protein turnover, chaperones" (Supplementary Figure 3C). A total of 130 pathway terms were annotated across all identified proteins. The metabolic pathway term was annotated most frequently, and the second most common term was the secondary metabolite synthesis pathway. More detailed data are available in Supplementary Table 3. Taken together, the bud at this developmental stage is considered appropriate experimental material because numerous proteins regulating growth and development are found in the bud.
Additionally, the expression changes of resistance-related proteins in cold-treated non-flowering plants further demonstrated the reliability of the determination method used in this study.

Identification and Hierarchical Cluster Analysis of DAPs
In total, 201 proteins, 120 upregulated and 81 downregulated, were identified as DAPs in the present study (Supplementary Table 4). The top 20 most upregulated and downregulated DAPs are shown in Supplementary Table 5. Among them, non-specific lipid transfer protein (nsLTP, PB.74530.1|m.15978) and glutathione S-transferase U7 (GSTU7; PB.62685.1|m.5798) were upregulated dramatically during the floral process. nsLTP has been reported to be implicated in pollen tube tip growth and fertilization (Chae et al., 2009), and GST proteins are involved in the transport of pigments to the vacuoles where they accumulate (Momose et al., 2013). In Arabidopsis, GSTU7 has a role in seed germination mediated by GSH-ROS homeostasis and ABA signaling (Wu et al., 2020). Thus, the nsLTP and GSTU7 proteins play important roles in the development of saffron floral organs, such as the pistil and stamen, where pollen and most pigments are deposited. Compared with the cold-treated non-flowering samples, the two most downregulated proteins in the normal saffron samples were heat stress transcription factor B-4b (HSFB4B; PB.52694.2|m.74526) and wall-associated receptor kinase 5 (WAK5; PB.32779.1|m.19653). Although the function of class B HSFs is less well understood, the expression profiles of the HSF B family indicate that they may be integrated into signaling pathways induced by different abiotic stressors, being particularly prominent in osmotic, salt, and cold stress samples (Scharf et al., 2012), which suggests that cold stress had a significant effect on the detected protein profiles of saffron.
An initial study of Arabidopsis thaliana first identified five WAK genes (WAK1-WAK5) that are thought to physically link the extracellular matrix and the cytoplasm and to serve a signaling function between them (He et al., 1999). Li et al. (2020) reported that resistance mechanisms engaged in the response to WAK signaling include a reconstructed homeostasis of ROS species, such as hydrogen peroxide (H2O2) and superoxide (O2−). In contrast to WAK1, the function of WAK5 is less well characterized; however, based on its special transmembrane structure and its relatively high expression levels in cold-treated samples, it may play a role in the cellular response to cold stress and the production of reactive oxygen species. The cause of the increased expression of HSFB4 and WAK5 in cold-treated non-flowering saffron remains to be investigated.

GO, KEGG, and KOG Analysis of DAPs in Saffron
To further determine the promoting or inhibiting effects of DAPs on different physiological processes, GO enrichment analysis was carried out on the upregulated DAPs (relatively highly expressed during floral and leaf organogenesis under ambient temperature) and the downregulated DAPs (relatively highly expressed during leaf organogenesis under cold treatment). In total, the 120 upregulated DAPs belonged to 142 biological process, 80 molecular function, and 65 cellular component GO terms, of which only 18 biological process, 15 molecular function, and 21 cellular component GO terms were significantly enriched, with a p-value < 0.05. The lower the level of a GO term, the more specific its functional description, and the most specific and significantly enriched functions are easily identified in the directed acyclic graphs. For biological processes, Figure 2A shows that the most important functions of the upregulated DAPs were "lipid transport," "sucrose metabolic process," "glutathione metabolic process," and "gene silencing by RNA."
Similar molecular functions were also enriched, as shown in Figure 2B, for example, "lipid binding," "sucrose synthase activity," "glutathione transferase activity," and "RNA binding and structural constituent of ribosome." Correspondingly, the upregulated DAPs in flowering samples were significantly located in the cellular components "nucleosome," "ribosome," and "microtubule" (Figure 2C). As described above, nsLTP (an important lipid transfer protein) and GSTU7 (with glutathione transferase activity) were the top two upregulated proteins, and several lipid transport- and glutathione metabolism-related proteins may be involved in floral organ differentiation and the biosynthesis of secondary metabolites of saffron, which are unique physiological processes observed only in flowering samples. Sucrose not only provides energy for plant growth and development but also plays an important role in signal transduction. Sucrose may function as a key signaling molecule involved in the floral transition under normal temperature (Cho et al., 2018). In addition, posttranscriptional gene silencing participates in the regulation of endogenous gene expression in a variety of developmental processes, and increasing attention has been given to RNA silencing during flowering events (Herr et al., 2006; Guo et al., 2016). However, no related research has been reported for saffron. Inspired by this result, we screened differentially expressed microRNAs between flowering and non-flowering saffron samples to further reveal the flowering mechanism of saffron. A total of 80 downregulated DAPs belonged to 86 biological process, 76 molecular function, and 44 cellular component GO terms. Among them, 18 biological process, 15 molecular function, and 21 cellular component GO terms were significantly enriched, with a p-value < 0.05.
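The enrichment p-values behind cut-offs such as p < 0.05 are typically obtained from the hypergeometric distribution: the chance of seeing at least k term-annotated proteins in the DAP list when drawing at random from all identified proteins. A stdlib-only sketch with hypothetical counts (real enrichment tools usually add a multiple-testing correction):

```python
from math import comb

def go_enrichment_p(k, n, K, N):
    """Hypergeometric upper tail P(X >= k): k annotated proteins in a
    DAP list of size n, drawn from N identified proteins of which K
    carry the GO term."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / total

# Hypothetical counts: 8 of 120 up-DAPs carry a term annotated on
# 40 of the 5,624 identified proteins.
p = go_enrichment_p(8, 120, 40, 5624)
print(p < 0.05)  # True
```

Under these counts fewer than one annotated protein would be expected by chance (120 × 40 / 5,624 ≈ 0.85), so observing eight yields a very small p-value.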
For biological processes, "starch biosynthetic process" and several of its upstream GO terms were significantly enriched (Figure 3A); the downregulated DAPs included starch biosynthesis enzymes (among them CsPGMP), covering most of the important enzymes in this pathway. A recent study showed that the soluble starch content in flower buds of saffron was lower than that in dormant buds without visible flower organs. Our results further suggest that the starch content of mixed buds under a normal environment may be lower than that of leaf buds under cold stress and that starch biosynthetic processes may be involved in the response to cold stress and floral induction. The production of ROS and the subsequent oxidative stress response may be the main physiological responses of saffron to cold stress, since "cell redox homeostasis," "response to reactive oxygen," "cellular response to oxidative stress," and "hydrogen peroxide catabolic process" were significantly enriched. This conclusion was also confirmed by the ROS measurements of cold-treated non-flowering top buds and normal flowering top buds collected separately at both the dormant stage (top bud length < 2 mm) and the floral/leaf organogenesis stage (top bud length = 6 to 7 mm) (Supplementary Figure 4). PB.58772.3|m.205851 and PB.58772.2|m.185008 were annotated as L-ascorbate peroxidase (CsAPX4), PB.58660.2|m.66725 was annotated as peroxidase 72 (CsPER72), and PB.62862.1|m.30914 was annotated as CsPPX2E. All of them are members of the peroxidase family and can directly catalyze the reduction of hydrogen peroxide to water and alcohol. In addition, several thioredoxin family members (CXXS1, PB.74417.1|m.22999; GRXC2, PB.74995.1|m.182587; and PDIL1, PB.39155.5|m.9557), with reductase or isomerase activity toward disulfide bonds, may also play a role in responding to ROS stress and maintaining cell redox homeostasis.
In the molecular function analysis (Figure 3B), the enriched functions, such as "peroxidase activity," "cytochrome-c peroxidase activity," "antioxidant activity," "protein disulfide oxidoreductase activity," and "glycogen (starch) synthase activity," further supported the conclusion that the DAPs relatively upregulated under cold stress have significant functions in scavenging oxidants and in starch biosynthesis. In addition, the cellular component analysis showed that the downregulated DAPs were significantly enriched in amyloplasts and chloroplasts, two important plastids in the top buds of saffron (Figure 3C). The main plastid proteins among the DAPs (CsGBSS1, CsAGPL1, CsAGPS1, and CsPULA1) were involved in starch biosynthesis, which occurs in amyloplasts and chloroplasts; the other plastid proteins were involved in carbohydrate metabolic processes (PB.23022.16|m.151042, α-glucan phosphorylase) and the response to oxidative stress (CsAPX4), which suggests that the normal function of plastids plays an important role in the process of floral initiation. The KEGG pathway statistics for the upregulated and downregulated differentially expressed proteins between the flowering and non-flowering saffron crocus are shown in Supplementary Figure 5A. The results showed that most of the DAPs were involved in biological metabolism and signal transduction-related pathways, demonstrating that these were the main pathways affected. Specifically, in pathways such as "starch and sucrose metabolism," "carbon fixation in photosynthetic organisms," and the "pentose phosphate pathway," more DAPs were upregulated in cold-treated non-flowering saffron. These results suggest that the primary metabolism of the top buds of saffron was significantly changed under cold stress and that the distribution of carbon sources between starch and sugar may have changed.
In contrast, the upregulated DAPs in flowering saffron were enriched in several pathways, such as "nitrogen metabolism," "isoquinoline alkaloid biosynthesis," "phenylalanine metabolism," and "diterpenoid biosynthesis," suggesting that the biosynthesis and metabolism of nitrogen and secondary metabolites were closely related to flowering. Among all the pathways, "starch and sucrose metabolism," "stilbenoid, diarylheptanoid and gingerol biosynthesis," "glutathione metabolism," "galactose metabolism," and "ribosome biogenesis in eukaryotes" were the most significantly affected (Table 1). More detailed results are available in Supplementary Table 5. In the KOG functional classification, the DAPs were largely enriched in "carbohydrate transport and metabolism," "translation, ribosomal structure and biogenesis," "RNA processing and modification," and "posttranslational modification, protein turnover, chaperones" (Supplementary Figure 5B). Consistent with the other bioinformatic analyses, the KOG analysis showed that saffron had vigorous substance metabolism at the bud development stage. More detailed results are provided in Supplementary Table 6.

Identification of Flower-Related DAPs in Saffron
More than 60 flower-related genes in saffron were reported using PacBio and RNA-sequencing technologies in our previous research. In this study, three new flower-related proteins, CsFLK (flowering locus K homology domain, PB.36509.3|m.13842), CseIF4a (eukaryotic initiation factor 4A-III homolog B, PB.43424.8|m.67369), and CsHUA1 (zinc finger CCCH domain-containing protein 37, PB.42723.2|m.17343), were further identified using iTRAQ technology, and, as expected, all of them were significantly upregulated in the flowering saffron samples. Lim et al.
(2004) demonstrated that FLK encodes a putative RNA-binding protein with KH motifs that serves as a genetic component of the autonomous flowering pathway and positively regulates flowering by repressing FLC expression through posttranscriptional modification. eIF4a is a conserved ATP-dependent RNA helicase and an RNA-dependent ATPase that participates in the initiation of messenger RNA translation. Bush et al. (2016) showed that a pronounced late-flowering phenotype appeared in the eif4a1 mutant because the expression levels of FLC and/or other genes in this pathway were altered, and cell proliferation and growth, which are essential to produce the inflorescence and floral meristems, were retarded. In addition, eIF4A has been implicated as a candidate gene for a flowering time quantitative trait locus in maize (Durand et al., 2012). HUA1 is required for floral determinacy and facilitates pre-mRNA processing of the MADS-box floral homeotic gene AGAMOUS in Arabidopsis (Irish, 2010). Interestingly, FLK was found to be involved in HUA activity during C-function maintenance (Rodríguez-Cazorla et al., 2015). To further understand the possible functions of CsFLK, CseIF4a, and CsHUA1, their peptide sequences were aligned with Web BLAST (blastx) against the NCBI database to characterize them, and the matching protein sequences were downloaded and screened for phylogenetic analysis. The phylogenetic analysis revealed that the three C. sativus proteins clustered closely with those of monocotyledonous plants, such as Phoenix dactylifera, Elaeis guineensis, Asparagus officinalis, Oryza sativa, Dendrobium catenatum, and Phalaenopsis equestris. Moreover, they also displayed high bootstrap values with other dicotyledons, with the CsFLK protein showing the highest value, indicating that CsFLK, CseIF4a, and CsHUA1 are relatively conserved proteins. Furthermore, these three proteins were all located in the branch nearest to A. officinalis, which suggested that C.
sativus is probably closely related to Asparagus officinalis in the evolutionary process (Supplementary Figure 6).

Temporal and Spatial Distribution of Selected DAPs on mRNA Levels

To verify the results of protein quantification at the mRNA expression level and to profile the expression of 13 key genes selected in this study (including floral induction- and floral organ development-related genes CsFLK, CseIF4A, CsHUA1, and CsGSTU7; sucrose synthase activity-related genes CsSUS1 and CsSUS2; and starch synthase activity-related genes CsGBSS1 and CsPU1), another two groups of flowering buds and cold-treated non-flowering buds were collected at five different growth stages: the dormancy stage and top bud lengths of 2, 5, 7, and 11 mm. The mRNA expression levels were analyzed by qPCR, and each data point was represented as the mean of three biological replicates. The results showed that the change trends of mRNA and protein levels of all verified genes were consistent (Figure 4A and Supplementary Figure 7). In Figure 4A, three flower-related genes, CsFLK (Figure 4A-1), CseIF4A (Figure 4A-2), and CsHUA1 (Figure 4A-3), had higher mRNA levels in flowering buds than in non-flowering buds during the whole developmental process. Interestingly, all of them showed sharply increasing expression levels at the germination stage, when the top buds recovered from dormancy status and began to grow.

(Figure 4 legend: Values (mean ± SD) were determined from three independent experiments (n = 3). *0.01 ≤ p < 0.05; **0.001 ≤ p < 0.01; ***0.0001 ≤ p < 0.001; ****p < 0.0001. The three genes (CsSUS2, CsFLK, and CsGSTU7) in other tissues were all compared to those in top buds by Student's t-test.)

CsGSTU7 (Figure 4A-4), which was predicted to regulate pigment synthetase activity, showed continuous increases in expression in both flowering and non-flowering buds, which may be due to the need for pigments in both flower and leaf development.
However, when the buds grew to 1.1 cm, the development of pistils, stamens, and petals required more pigment synthesis, and CsGSTU7 showed a significantly higher accumulation in mixed buds of flowers and leaves. As expected, the expression levels of two key enzyme genes (CsGBSS1: Figure 4A-5 and CsPU1: Figure 4A-8) in the starch biosynthesis pathway of saffron buds under cold stress were relatively higher than those under normal conditions, and both showed continuous declines throughout the whole flower organ and leaf organ developmental stages. In contrast, the expression levels of CsSUS1 (Figure 4A-6) and CsSUS2 (Figure 4A-7) increased continuously from flower bud germination to flower organ formation. Under cold stress, the expression of CsSUS1 and CsSUS2 increased in the early developmental stages (bud length = 0.5 cm) but decreased rapidly after the formation of leaf organs. The results suggested that sucrose metabolism-related genes were continuously activated during floral organ development and that, compared with normal conditions, cold stress may maintain the activity of starch biosynthesis enzymes and inhibit the activity of sucrose synthase enzymes to some extent at the gene expression level. Three genes with gradually accumulating expression during the floral organ formation process (CsFLK, CsSUS2, and CsGSTU7) were further used to detect their distribution profiles at the flowering stage in different tissues, including leaves, corms, roots, pistils, petals, and stamens (Figure 4B). These genes in top buds (length = 7 mm) were detected in the previous section and were also included here as a reference. CsSUS2 showed the highest expression levels in all the tissue organs, followed by CsFLK and CsGSTU7, which was consistent with the findings for different developmental stages. Except for the expression levels of CsSUS2 in roots, the expression levels of the three genes in other mature differentiated tissues were lower than those in top buds.
In addition, the expression levels of these genes in leaves and roots were significantly higher than those in the pistils, petals, and stamens. How their significant enrichment in roots and leaves promotes the progress of flowering, an important physiological process, remains to be studied further from the tissue distribution of protein levels and the mechanism of protein transport between different tissues.

Effect of Cold Stress on the Starch and Sucrose Contents in the Top Buds of Saffron

According to the results from the bioinformatic analysis of DAPs and the expression levels of DAPs and their coding genes, cold stress may affect the biosynthesis or metabolism of sucrose and starch in the top buds of saffron and thereby affect the contents of sucrose and starch. Both cold-treated non-flowering top buds and normal flowering top buds were collected separately at both the dormant stage (top bud length < 2 mm) and the floral/leaf organogenesis stage (top bud length = 6 to 7 mm). During the normal development of floral organs, the sucrose contents in the top buds of saffron increased from 10.18 ± 0.63 mg g−1 (fresh weight) to 12.83 ± 0.69 mg g−1, and as expected, the starch contents decreased from 35.66 ± 2.08 mg g−1 (fresh weight) to 30.76 ± 1.92 mg g−1. In contrast, non-flowering buds showed a significant decrease in sucrose content under cold stress, to 5.49 ± 0.33 mg g−1, which was almost half of the sucrose content at the dormancy stage. There was no significant change in the starch content of non-flowering buds compared with that at the dormancy stage, while a relatively higher accumulation of starch than in the flowering buds was detected, as shown in Figure 5.
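Because only the means, SDs, and n = 3 are reported for the sucrose and starch measurements, the significance of the changes in flowering buds can be sanity-checked with a two-sample Student's t statistic computed from summary statistics. This is a sketch of that arithmetic, using the values quoted above; the paper's own tests were presumably run on the raw replicate values:

```python
import math

def t_from_summary(m1, s1, n1, m2, s2, n2):
    """Two-sample Student's t statistic from summary stats (pooled variance)."""
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (m2 - m1) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Flowering top buds, mg g^-1 fresh weight, n = 3 per group (values from the text):
t_sucrose = t_from_summary(10.18, 0.63, 3, 12.83, 0.69, 3)  # dormancy -> organogenesis
t_starch = t_from_summary(35.66, 2.08, 3, 30.76, 1.92, 3)
pct_sucrose = (12.83 - 10.18) / 10.18 * 100                  # ~26% increase
```

With 4 degrees of freedom, |t| ≈ 4.9 for sucrose and ≈ 3.0 for starch both exceed the two-tailed critical value at p < 0.05, consistent with the changes being called significant.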
The Model of "ROS-Antioxidant Protein-Starch Biosynthesis-Starch/Sugar Homeostasis-Flowering Phenotype" May Explain the Non-flowering Phenotype of Saffron Top Buds Induced by Cold Stress

Cold stress can have a devastating effect on plant metabolism, disrupt cellular homeostasis, and uncouple major physiological processes, causing alterations in chlorophyll fluorescence, electrolyte leakage, ROS, malondialdehyde, and other metabolites. Our data suggested that the major result of cold stress-induced cellular changes in the top buds of saffron is the enhanced accumulation of ROS. Basically, upon cold stress, the fluidity of the plasma membrane is altered, and misfolded proteins and excess ROS are produced (Choudhury et al., 2017). If left unchecked, ROS concentrations will increase in cells and cause oxidative damage to membranes (lipid peroxidation), proteins, and RNA and DNA molecules and can even lead to the oxidative destruction of the cell in a process termed oxidative stress (Mittler, 2002). However, this process is mitigated in cells by a large number of ROS-detoxifying proteins [e.g., superoxide dismutase, ascorbate peroxidase (APX), catalase, glutathione peroxidase, and peroxiredoxin (PRX)]. Indeed, a series of peroxidases and thioredoxin (TRX) superfamily members was found to be upregulated under chilling conditions in this study, which can eliminate hydrogen peroxide and maintain redox homeostasis in cells. For example, the members of the peroxidase family, probable L-ascorbate peroxidase 4 (APX4), peroxidase 72 (PER72), and PRX2E, can directly catalyze the reduction of hydrogen peroxide to H2O. CXXS1, PDIL, and GRXC2, which are members of the TRX superfamily, have disulfide isomerase and reductase activities. Since both GRXC2 and APX are involved in the glutathione-ascorbate cycle, the results suggested that the glutathione-ascorbate cycle may be the major antioxidant system by which saffron responds to ROS induced by cold stress.
Interestingly, in addition to scavenging ROS, the highly expressed antioxidant proteins induced by cold stress could also influence the expression of enzymes involved in carbohydrate metabolism, which may further affect the normal physiological phenotypes of plants. For example, thioredoxin regulates starch and protein breakdown as a scavenger of H2O2 in starchy endosperm under oxidative stress (Wong et al., 2003), and sucrose synthase is also regarded as a potential thioredoxin target protein in the starchy endosperm of rice. Kim Y. J. et al. (2012) reported that PDIL1-1 knockout transgenic rice showed an opaque endosperm and a thick aleurone layer and proved that the opaque phenotype of PDIL1-1 mutant seeds results from the production of irregular starch granules and protein bodies through loss of regulatory activity for various proteins involved in the synthesis of seed components. In our results, the expression of PDIL in top buds under cold stress was 1.8 times higher than that in normal buds, which can potentially affect starch biosynthesis and, in turn, starch-sugar homeostasis. Starch, as a glucose homopolymer, is the major storage carbohydrate deposited in plastids. It can be transported to the cytosol to synthesize sucrose and other disaccharides, acting as a "sugar source" when carbon is needed or as a "sugar sink" in the amyloplast when sugars are in excess, which may permit the optimal use of these carbon reserves. Starch-sugar interconversion between plastids and the cytosol plays a profound physiological role in all plants (Dong and Beckles, 2019). During the different stages of regular plant growth and development, the starch-sugar ratio changes dynamically within a reasonable range. More sugar, especially sucrose, may be needed in the flowering transition stage in some plant species (Bernier et al., 1993; Ohto et al., 2001).
In this study, compared with cold-treated non-flowering samples, several peptides mapping to SUS1 and SUS2 were significantly upregulated in normal flowering plants, consistent with the enrichment of DAPs in the GO pathway "sucrose metabolic process." The results were further verified at the mRNA level using more biological replicates, suggesting that the increasing sucrose signal may positively promote the flowering transition of saffron buds. Hu et al. (2020) reported the upregulation of soluble sugar content during saffron floral initiation, and in this study, the sucrose content in the top buds was further found to increase during the flowering process, which confirms this result. However, when saffron was exposed to cold stress, the balance between starch and sugar was disrupted, as shown by the DAPs enriched in "starch biosynthetic process" in plastids and "sucrose metabolic process" in the cytosol. At the metabolic level, the starch contents in flowering buds were significantly decreased, while those in cold-treated non-flowering samples remained almost unchanged, which further confirmed the excessive starch accumulation in the apical buds after cold treatment compared with that under normal conditions. In contrast, the sucrose content sharply decreased compared with normal samples, indicating the conversion of sugar to starch. Based on the results of our work, it can be speculated that changes in carbon distribution in the top buds after cold treatment may be responsible for or involved in the loss of the flowering phenotype of saffron. Of course, the changes in carbon distribution in the top buds after cold treatment may be caused by other unknown mechanisms. As previously described, sucrose signaling participates in flower induction.
Sugar not only fuels cellular carbon and energy metabolism for the plant but also serves as a signal molecule in coordination with hormone signaling pathways (Rolland et al., 2006) to mediate various plant physiological processes, which likely include flowering and the juvenile-to-adult phase transition (Yu et al., 2013). Research has demonstrated that flower induction in plants, which involves sugar levels and responses, may actually be regulated by alterations in sugar flux or sugar signaling (Smeekens et al., 2010). For example, by transporting sugar signals produced in "source" leaves to the shoot apical meristem, flower induction and development can be determined (Bernier and Périlleux, 2005). Among the various components of sugar, considerable evidence suggests that sucrose promotes flowering in most species that have been examined (Ohto et al., 2001).

FIGURE 6 | Hypothetical model of "ROS-antioxidant protein-starch biosynthesis-starch/sugar homeostasis-flowering phenotype" to explain the phenomenon that saffron does not bloom after low-temperature treatment. Genes marked in red are identified first in this study. Genes marked in gray are important key genes that have not been identified at present. Genes marked in orange have been reported in other studies in saffron. ROS, reactive oxygen species; PER2, period circadian regulator 2; PRX2F, peroxiredoxin-2F; APX4, L-ascorbate peroxidase; DHAR, dehydroascorbate reductase; GRXC2, glutaredoxin-C2; ADH1, alcohol dehydrogenase 1; pCaP1, plasma membrane-associated cation-binding protein 1; VIL1, VIN3-like protein 1; SUS1, sucrose synthase 1; SUS2, sucrose synthase 2; IDD10, indeterminate domain 10; UGPase, UDP-glucose pyrophosphorylase; USPase, UDP-sugar pyrophosphorylase; PGMP, phosphoglucomutase; APL2, AGPase large subunit; GLGs, glucose-1-phosphate adenylyltransferase small subunit; SS, sucrose synthase; GBSS1, granule-bound starch synthase 1; SBE1, 1,4-alpha-glucan branching enzyme; PULA1, pullulanase 1; VIN3, vernalization insensitive 3; FLC, flowering locus C; FLK, flowering locus K homology domain; HUA1, zinc finger CCCH domain-containing protein 37; HUA2, enhancer of AG-4 protein 2; eIF4A1, eukaryotic translation initiation factor 4A1; SVP, short vegetative phase; FT, flowering locus T; FD, flowering locus D; LHY, late elongated hypocotyl; CCA1, circadian clock associated 1; TFL1, terminal flower 1; AP1, apetala 1; SERA1, D-3-phosphoglycerate dehydrogenase 1; nsLTPs, non-specific lipid transfer proteins.

In our results, proteins were highly enriched in the "starch biosynthetic process," while the sucrose content significantly decreased in non-flowering saffron under cold treatment. Therefore, the decreased sugar levels induced by excessive starch accumulation may inhibit the flowering initiation signal. On the other hand, the results indicated that some proteins involved in starch synthesis are also regulatory proteins of flowering genes. For example, granule-bound starch synthase (GBSSI), significantly upregulated in non-flowering saffron under cold temperature treatment, is necessary for the synthesis of amylose, and its mRNA expression is controlled by the CCA1 (CIRCADIAN CLOCK ASSOCIATED 1)/LHY (LATE ELONGATED HYPOCOTYL) transcription factors (Tenorio et al., 2003).
CCA1 and LHY encode molecular components of the circadian oscillator, which also controls the process of flowering. Coupland's group found that, under short days, lhy cca1-1 mutant plants flowered significantly earlier than wild-type plants and further proved that LHY/CCA1 are important transcriptional regulators of the flowering gene TOC1 (Mizoguchi et al., 2002). Therefore, the overexpression of the GBSSI protein in saffron may not only directly cause the accumulation of starch and the subsequent starch-sugar redistribution but also indirectly contribute to the disappearance of the flowering phenotype through the CCA1/LHY-TOC1 axis. Our results provide evidence of the modulation of starch-sucrose dynamics under cold stress and suggest that this redistribution may cause a change in the flowering phenotype. In summary, saffron under cold stress produced an excess amount of ROS. To eliminate it, the activities of antioxidant proteins increased significantly, which can further influence the expression of enzymes involved in starch biosynthesis. Based on the dynamic changes in starch-sugar interconversion, greater starch accumulation in saffron under cold stress may reduce sucrose synthesis. Consequently, low sucrose levels and some proteins involved in starch synthesis may inhibit the function of flower-related proteins, which leads to the non-flowering phenotype of saffron. Therefore, we established a hypothetical model, "ROS-antioxidant protein-starch biosynthesis-starch/sugar homeostasis-flowering phenotype," to explain the phenomenon that saffron does not bloom after low-temperature treatment (Figure 6). It will be very challenging and meaningful to further explore the detailed molecular mechanisms of these pathways.
Other Proteins That Are Associated With Cold Adaptation in Saffron

In addition to antioxidant proteins, a series of other cold-responsive proteins reported in other plants was also highly expressed in the saffron top buds under cold stress, which may help saffron acquire tolerance. Arabidopsis plasma membrane-associated cation-binding protein 2 (PCaP2) is widely expressed in the roots, hypocotyls, cotyledons, root hairs, and pollen tubes (Wang et al., 2007; Kato et al., 2010). Previous studies have demonstrated that PCaP2 is involved in regulating the dynamics of microtubules and F-actin and in Ca2+ binding (Wang et al., 2007; Kato et al., 2010, 2013; Zhu et al., 2013). Interestingly, Kato et al. (2010) found that the mRNA level of PCaP2 increased approximately eightfold under cold treatment. A recent study stated that PCaP2 plays an important and positive role in chilling tolerance and the ABA response and can trigger the CBF and SnRK2 transcriptional networks. In this study, PCaP1 (PB.68957.1| m.21748) showed a similar upregulation trend under chilling conditions, which could improve saffron cold tolerance. Classic alcohol dehydrogenase (ADH) is a Zn-binding enzyme that plays important roles in plant biotic and abiotic stress resistance (Komatsu et al., 2011; Pathuri et al., 2011). ADH1 could enhance cold resistance in plants by maintaining the stability of the membrane structure. Additionally, soluble sugars (e.g., sucrose) and amino acids (e.g., asparagine) were found to change accordingly in the adh1 mutant under cold stress (Song et al., 2016). Indeed, our data showed that the level of ADH1 (PB.46661.1| m.6484) was significantly upregulated in non-flowering plants under cold temperature treatment, suggesting an important role of ADH1 in maintaining the stability of the saffron cell membrane.
More interestingly, VIN3-like protein 1 (VIL1), an important protein involved in the vernalization pathway during the flowering process, was highly expressed in the saffron top buds under cold stress. In most flowering plants that experience the vernalization pathway, VIL1 mediates the cold response and is involved in modifications to FLC and FLM chromatin that are associated with an epigenetically silenced state and with the acquisition of competence to flower (Sung et al., 2006). Our results showed that the level of VIL1 in saffron was upregulated under cold stress but was unable to promote blooming. This may be related to the fact that saffron blooms in autumn (usually in November) in China and does not share the same vernalization pathway as spring-flowering plants.

Non-specific Lipid Transfer Proteins Are Involved in Flower Induction and Development in Saffron

Plant nsLTPs usually have low molecular mass and are known to play roles in many biological processes, such as cuticular wax synthesis, abiotic stress (Guo et al., 2013), disease resistance (Maldonado et al., 2002; Jung et al., 2009), anther development (Zhang et al., 2010), and pollen tube tip growth (Chae et al., 2009; Chen et al., 2011). Our research showed that at least four peptides (PB.33284.5| m.123278, PB.68442.1| m.210074, PB.8322.7| m.32545, and PB.74530.1| m.15978) that were highly expressed in normal flowering top buds of saffron were annotated as nsLTPs, which suggested that, when flower primordia were formed, the proteins involved in flower organ development began to accumulate, and that nsLTPs may play an important role in the differentiation of anthers and pollen tubes of saffron. In conclusion, we discussed how changes in starch biosynthesis and sucrose metabolism can facilitate adaptive changes in the carbon allocation of saffron buds for protection against cold stress. Stress resistance and flowering are two complex biological processes in plants.
The interaction between them is not clear to date, and this study suggested possible interaction pathways between them. In addition, the protein profiles of saffron under cold stress and a normal environment were revealed for the first time. A series of proteins related to flowering and abiotic stress was screened in saffron, and the temporal/spatial distribution of newly identified saffron flower-related genes, including CsSUS2, CsFLK, and CsGSTU7, was explored in this study, which further improved the regulatory network of saffron flowering genes.

DATA AVAILABILITY STATEMENT

The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE (Perez-Riverol et al., 2019) partner repository with the dataset identifier PXD021020.

AUTHOR CONTRIBUTIONS

LL designed the experiments. JC and GZ wrote the manuscript. JC, GZ, JL, and XX performed the experiments. HH and LX contributed to the data analysis. YD and XQ contributed to the material planting and sample collection. All the authors read and approved the final manuscript.

Supplementary Figure 4 | Reactive oxygen species contents of apical buds during the flowering transition process between flowering (normal) and non-flowering groups (cold-treated). Values (mean ± SD) were determined from three independent experiments (n = 3). *0.01 ≤ p < 0.05; **0.001 ≤ p < 0.01; ***0.0001 ≤ p < 0.001; ****p < 0.0001.
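The star thresholds quoted in the figure legends above define a fixed p-value-to-annotation mapping. A small helper makes the convention explicit (the "ns" label for unmarked comparisons with p ≥ 0.05 is our assumption, as the legends do not define one):

```python
def significance_stars(p: float) -> str:
    """Map a p-value to the star notation used in the figure legends."""
    if p < 0.0001:
        return "****"
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "ns"  # assumption: no label defined for p >= 0.05
```

Note that the thresholds are half-open intervals, so a boundary value such as p = 0.01 falls into the less significant bin ("*"), matching the legend's "0.01 ≤ p < 0.05".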