198783977 | pes2o/s2orc | v3-fos-license | Numerical Simulation of Electrodynamic Structure of Penning Discharge in Large Volume Discharge Chamber with Annular Anode
In this study, a numerical simulation of the electrodynamic structure of a Penning gas discharge formed in a large-volume discharge chamber with an annular anode is performed. For the numerical modelling, a computer code implementing a two-dimensional axisymmetric electrostatic particle-in-cell method on unstructured grids is used. Computer simulation results for the Penning gas discharge in He at pressures of 3·10⁻⁴-9·10⁻⁴ mbar and anode voltages of 1-2 kV in the presence of an external magnetic field B ∼ 0.1 T are provided. A comparison of numerical and experimental results is performed.
Introduction
Typically, depending on the application, Penning gas discharge plasma is formed either in molecular hydrogen [1-3] or in helium [4,5]. However, other buffer gases, such as neon or argon, are also used [6-8]. Molecular hydrogen is used when the Penning ion source is part of a neutron generator [1-3]. Noble gases (helium, argon, neon) are used when the plasma of the Penning gas discharge is utilized as a light-emitting source [4-8].
Most numerical simulations are aimed at investigating the spatial electrodynamic structure of the Penning gas discharge in molecular hydrogen [9-14]. To further develop and validate numerical models of the Penning gas discharge, this study presents a numerical analysis, using the PIC-MCC method, of the large-volume Penning gas discharge in helium that was studied experimentally in [4].
The structure of the paper is as follows. Section 2 gives a brief description of the experiments reported in [4]. Section 3 describes the numerical model of the considered Penning discharge based on the PIC-MCC method. Section 4 presents the results of the numerical analysis of selected experimental conditions.
Description of experiments
The study [4] is dedicated to the development of a light-emitting source based on the large-volume Penning discharge with single and double annular anodes. A schematic view of the experimental setup is shown in figure 1. The developed discharge chamber consists of two cylindrical cathodes made of stainless steel. The radius of the cathodes is 6 cm, and the distance between them is 5.5 cm. The cylindrical cathodes are connected to the top and bottom flanges of the vacuum chamber. Two different configurations of the gas discharge chamber were studied in [4]: with a single annular anode and with a double annular anode (figures 2 and 3). In the first case, an anode ring with a circular cross section of radius 0.4 cm, inner diameter 6.45 cm and outer diameter 7.25 cm is placed at a height of 2.25 cm above the bottom cathode. In the second case, two similar rings are located between the cathodes. The distance between the two rings is 2 cm, and the bottom ring is placed at a height of 0.75 cm above the bottom cathode. Teflon rods are used to keep the anodes in the given position. The magnetic field is created by means of neodymium magnets placed behind the cathodes. The authors of [4] state that the magnetic field is almost uniform, with an induction of 0.1 T in the center.
The buffer gas is helium. In [4], dependences of the discharge current on the applied anode voltage were measured over a wide range of buffer gas pressures for the two anode configurations. Results of the measurements are shown in figure 4 (single anode) and figure 5 (double anode). Estimates of the electron plasma density for the single and double anode configurations at a working gas pressure of 9·10⁻⁴ mbar are given in [4]: n_e^(sa) = 2·10¹⁰ cm⁻³ and n_e^(da) = 2·10¹¹ cm⁻³, respectively.
For the numerical simulation it is necessary to estimate various plasma parameters relevant to the problem, such as the mean free path and the Debye length.
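As an illustration, the following minimal sketch computes order-of-magnitude estimates of these parameters from values quoted in this paper (n_e ≈ 2·10¹⁰ cm⁻³, bulk electron temperature ≈ 10 eV, He pressure 9·10⁻⁴ mbar at 300 K); the effective electron-He cross-section is an assumed round number, not a value taken from the databases [19-21]:

```python
import numpy as np

# Physical constants (SI)
e    = 1.602e-19   # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
kB   = 1.381e-23   # Boltzmann constant, J/K

# Illustrative values from the paper's conditions (assumptions of this sketch)
n_e   = 2e10 * 1e6        # electron density, m^-3
Te_eV = 10.0              # electron temperature, eV
p     = 9e-4 * 100.0      # pressure, Pa (1 mbar = 100 Pa)
T_gas = 300.0             # gas temperature, K
sigma = 5e-20             # assumed effective e-He cross-section, m^2

n_gas = p / (kB * T_gas)                              # ideal-gas neutral density
debye = np.sqrt(eps0 * Te_eV * e / (n_e * e**2))      # Debye length, m
mfp   = 1.0 / (n_gas * sigma)                         # electron mean free path, m

print(f"neutral density : {n_gas:.3e} m^-3")
print(f"Debye length    : {debye*1e3:.3f} mm")
print(f"mean free path  : {mfp*1e2:.1f} cm")
```

At these conditions the mean free path is comparable to or larger than the chamber size, which is why the magnetic electron trapping of the Penning configuration is essential for sustaining the discharge.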
Description of the model of Penning discharge in helium
The numerical model of the Penning gas discharge is based on the 2D/3V axisymmetric electrostatic particle-in-cell Monte Carlo collision (PIC-MCC) method. In the PIC-MCC method, at each time step macroparticles, which represent the behavior of a large number of real plasma particles (electrons or ions), move in the self-consistent electromagnetic field and participate in collision processes. Each time step consists of several stages, shown in figure 6. A detailed description of the numerical algorithms and methods at the core of the computer implementation of each stage is given in [15,16]. Here we discuss only several topics connected with the modeling of elementary processes using the Monte Carlo collision technique in the framework of the particle-in-cell method. Figure 6. PIC-MCC computational cycle.
Monte-Carlo collision method
The Monte Carlo collision method, as used in particle-in-cell simulations, can generally be described by the following formulas [17,18]. For each particle i in the simulation one determines its kinetic energy

ε_i = m v_i²/2, (1)

and the total cross-section

σ_tot(ε_i) = Σ_k σ_k(ε_i), (2)

where the sum in (2) runs only over the processes that are specific for the given particle kind. The collision probability is then computed according to

P_i = 1 − exp(−n(x_i) σ_tot(ε_i) v_i Δt), (3)

where n(x_i) is the local density of the target species at the position of the i-th particle and Δt is the time step. Let us focus our attention on the last value. One can choose a common time step for the computation of particle motion and for the computation of collision processes. However, this might lead to problems: 1. If Δt is small, the expression in the exponent tends to zero, and so does P_i, which means that practically no collision processes occur on a given step.
2. The Monte Carlo collision procedure is quite time consuming (even in the case of the null-collision modification [18]). If one were to compute it at each time step, it would significantly increase the computational time needed to solve the problem.
Another choice is to set Δt independently of the time step used for the particle motion. In this case one needs a criterion to impose on Δt in order to obtain valid results. The physical restriction on this value is that the probability of more than one collision for a given particle must be negligible within the chosen time step. In [18] the probability of occurrence of more than one collision for a given particle within Δt was estimated; for Poisson-distributed collisions it is

P_(>1) = 1 − e^(−νΔt)(1 + νΔt) ≈ (νΔt)²/2, (4)

where ν = n σ_tot v is the collision frequency. In the present model two elementary processes are taken into account: elastic scattering of electrons on He atoms and ionization of He atoms in collisions with electrons. Cross-sections for these processes can be found in [19-21]. In [19] a 1D kinetic simulation of a direct-current glow discharge in helium is performed; that model includes the following elementary processes for the helium plasma: elastic momentum transfer, full excitation, direct ionization, stepwise ionization, superelastic collisions and excitation of He atoms. In [20] a cross-section database for excitation and ionization processes of helium atoms is presented. In [21] cross-sections for elastic scattering and ionization of He atoms were calculated based on convergent close-coupling theory.
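A minimal sketch of this logic, assuming a maximum collision frequency ν_max = n·max(σ_tot v) estimated over the expected energy range; the numerical values are illustrative placeholders, not parameters of the code used in this study:

```python
import numpy as np

rng = np.random.default_rng(0)

def collision_probability(n_gas, sigma_tot, v, dt):
    # P_i = 1 - exp(-n * sigma_tot(eps_i) * v_i * dt), cf. Eq. (3)
    return 1.0 - np.exp(-n_gas * sigma_tot * v * dt)

def choose_collision_dt(nu_max, p_max=0.1):
    # Keep nu_max*dt small so that the probability of more than one
    # collision per particle per step is negligible, cf. Eq. (4).
    return p_max / nu_max

n_gas       = 2.2e19           # He density at ~9e-4 mbar, 300 K (m^-3)
sigma_v_max = 5e-20 * 2.0e7    # assumed max of sigma_tot(eps)*v (m^3/s)
dt_coll = choose_collision_dt(n_gas * sigma_v_max)

v = np.abs(rng.normal(1.0e6, 3.0e5, size=10_000))    # sample speeds (m/s)
P = collision_probability(n_gas, 5.0e-20, v, dt_coll)
collides = rng.random(v.size) < P                    # particles colliding this step
print(f"dt_coll = {dt_coll:.2e} s, <P> = {P.mean():.4f}")
```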
Models of elementary processes
The cross-section for elastic scattering of electrons on He atoms was taken from [21]; the cross-section for ionization of He atoms in collisions with electrons was taken from [20]. In figure 7 the cross-sections for both of these processes are shown.
As for the processes at the boundaries, ion-induced electron emission from the cathode is accounted for in the model. The data on the dependence of the electron yield per ion on energy were taken from [22]. In the rest of this section, the models of the scattering and ionization processes [18,23] that are used to create new particles in the simulation are described.
The velocity of an electron changes in direction and absolute value after elastic scattering on a neutral particle [18]. To account for the change in direction, two angles are computed: the polar angle χ and the azimuthal angle φ. Both angles are computed by means of random numbers R uniformly distributed in the range [0,1]. The formula for the azimuthal angle is

φ = 2πR. (5)

An approximate expression for the differential cross-section is used for the calculation of the polar angle [18]:

cos χ = (2 + ε − 2(1 + ε)^R)/ε, (6)

where ε is the initial kinetic energy of the electron (in eV). Knowing both of these angles, one can compute the direction of the electron velocity after scattering. The corresponding expressions are presented in [18] and [22]. In [22] expressions for the Cartesian components of the velocity vector after scattering are given. In order to use the formula from [18], one has to take into account the correspondence between the angle conventions of the two references. After algebraic manipulation it can be shown that the formulas of [18] and [22] coincide.
It is worth noting that, according to (6), high-energy electrons scatter predominantly in the forward direction, while low-energy electrons scatter isotropically.
To determine the kinetic energy of the electron after scattering, one can use the following formula [18,24]:

ε_scat = ε [1 − (2m/M)(1 − cos χ)], (7)

where m is the mass of the electron and M is the mass of the neutral particle. Using the kinetic energy calculated according to (7), one can fully define the velocity of the scattered electron.
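The following sketch implements the elastic-scattering update of Eqs. (5)-(7). The construction of the local frame around the incident direction and the use of the energy in eV in Eq. (6) follow the usual reading of [18]; the routine is an illustration under these assumptions, not the implementation used in this study's code:

```python
import numpy as np

rng = np.random.default_rng(1)

def elastic_scatter(v, eps, m_over_M):
    """Elastically scatter one electron off a neutral particle.

    v        : electron velocity vector (3,) in Cartesian coordinates, m/s
    eps      : electron kinetic energy in eV (enters Eq. (6))
    m_over_M : electron-to-atom mass ratio (~1.37e-4 for He)
    """
    R1, R2 = rng.random(2)
    phi = 2.0 * np.pi * R1                                      # Eq. (5)
    if eps < 1e-12:
        cos_chi = 1.0 - 2.0 * R2        # eps -> 0 limit: isotropic scattering
    else:
        cos_chi = (2.0 + eps - 2.0 * (1.0 + eps)**R2) / eps     # Eq. (6)
    sin_chi = np.sqrt(max(0.0, 1.0 - cos_chi**2))

    # Orthonormal frame (e1, e2, e3) with e1 along the incident direction.
    e1 = v / np.linalg.norm(v)
    tmp = np.array([1.0, 0.0, 0.0]) if abs(e1[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e2 = np.cross(e1, tmp); e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)
    new_dir = cos_chi * e1 + sin_chi * (np.cos(phi) * e2 + np.sin(phi) * e3)

    # Post-collision energy, Eq. (7), and the rescaled speed (eps ~ v^2).
    eps_new = eps * (1.0 - 2.0 * m_over_M * (1.0 - cos_chi))
    return np.linalg.norm(v) * np.sqrt(eps_new / eps) * new_dir, eps_new

# Usage: a ~10 eV electron (speed ~1.9e6 m/s) scattering off He.
v_new, eps_new = elastic_scatter(np.array([1.9e6, 0.0, 0.0]), eps=10.0,
                                 m_over_M=1.37e-4)
```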
It is worth noting that the particle-in-cell method used in this study is axisymmetric. This means that one has to transform the electron velocity from the cylindrical coordinate system to Cartesian coordinates before applying the formulas presented above; the concluding step is to convert the Cartesian velocity of the scattered electron back to the cylindrical coordinate system. The formulation of the model of ionization of neutral particles in the framework of the PIC-MCC method starts with the energy conservation equation [18,23]:

ε = ε_iz + ε_ej + ε_scat, (8)

where ε is the energy of the incident electron, ε_iz is the ionization threshold energy, and ε_ej and ε_scat are the energies of the ejected (newly created) and scattered electrons. One thus has to determine how the energy of the incident electron is divided between the created electron and the scattered electron; a formula for the calculation of the energy of the created electron is given in [18,23]. Once the energies of the scattered and ejected electrons are determined, the polar and azimuthal angles are calculated according to (5)-(6). The velocity of the newly created ion is sampled from a Maxwell distribution at the temperature of the neutral gas, which is a parameter of the model (for the results presented in this study, T = 300 K).
Here, again, the velocities must be transformed from the cylindrical to the Cartesian coordinate system and back.
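For reference, a minimal sketch of the velocity transformations between the cylindrical components (v_r, v_θ, v_z) and the Cartesian components used around the collision step; θ is the azimuthal position angle of the particle:

```python
import numpy as np

def cyl_to_cart_velocity(vr, vtheta, vz, theta):
    """(v_r, v_theta, v_z) at azimuthal position angle theta -> (v_x, v_y, v_z)."""
    vx = vr * np.cos(theta) - vtheta * np.sin(theta)
    vy = vr * np.sin(theta) + vtheta * np.cos(theta)
    return np.array([vx, vy, vz])

def cart_to_cyl_velocity(v, theta):
    """(v_x, v_y, v_z) -> (v_r, v_theta, v_z) at azimuthal position angle theta."""
    vr = v[0] * np.cos(theta) + v[1] * np.sin(theta)
    vtheta = -v[0] * np.sin(theta) + v[1] * np.cos(theta)
    return np.array([vr, vtheta, v[2]])

# Round trip: transforming there and back recovers the original components.
v_cart = cyl_to_cart_velocity(1.0, 2.0, 3.0, theta=0.7)
assert np.allclose(cart_to_cyl_velocity(v_cart, 0.7), [1.0, 2.0, 3.0])
```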
Numerical results of simulation of helium plasma of Penning discharge
In this section, results of the numerical simulation of the helium plasma of the Penning discharge are presented. The numerical modeling was performed on the mesh presented in figure 8. The mesh consists of 4412 points and 8566 elements. The boundary conditions for the Poisson equation fix the potential at the anode voltage V_a on the anode surface and at zero on the cathodes. All presented results are obtained under the assumption that the magnetic field inside the discharge chamber is not uniform. The distributions of the radial B_r and axial B_z components of the magnetic field induction are presented in figures 9 and 10. The magnetic field was calculated according to the method described in [25].
Initially, two electrically neutral clouds were distributed in the computational area: the first above the anode and the second below the anode. Each cloud consists of 120000 electrons and 120000 He⁺ ions. The weight of the particles in the clouds varies (~7·10⁵-4·10⁶) depending on the simulation, in order to maintain a reasonable number of particles in the computational domain. Initial velocities of the macroparticles were sampled from a Maxwell distribution at a temperature of 300 K. For the numerical simulations carried out, the time step is 5·10⁻¹² s.
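A minimal sketch of such an initial loading, sampling each Cartesian velocity component of the electron and He⁺ clouds from a Maxwell distribution at 300 K (each component is then a Gaussian with variance k_B T/m; the particle positions are omitted here):

```python
import numpy as np

kB   = 1.381e-23     # Boltzmann constant, J/K
m_e  = 9.109e-31     # electron mass, kg
m_He = 6.646e-27     # He atom (and, to good accuracy, He+ ion) mass, kg

def load_maxwellian(n_particles, mass, T=300.0, rng=np.random.default_rng(2)):
    """Sample 3 velocity components (m/s) from a Maxwell distribution at T."""
    sigma = np.sqrt(kB * T / mass)       # thermal spread per component
    return rng.normal(0.0, sigma, size=(n_particles, 3))

v_e  = load_maxwellian(120_000, m_e)     # one electron cloud
v_He = load_maxwellian(120_000, m_He)    # matching He+ ion cloud
```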
Numerical modeling of the helium plasma of the Penning discharge was conducted for the four combinations of the following parameters: pressures p = 3·10⁻⁴ and 9·10⁻⁴ mbar and anode voltages V_a = 1000 and 2000 V. One can notice that anode layers exist in the discharge. One can also notice the presence of local maxima of the potential, of about 100-200 V, in the center of the discharge chamber.
In figures 15 and 16 the spatial distributions of the number densities of electrons and He⁺ ions in the computational area are shown for p = 9·10⁻⁴ mbar, V_a = 2000 V. In figures 23 and 24 the distributions of the electron and He⁺ ion temperatures along the line z = 2.75 cm are shown for the four sets of considered discharge conditions. For all conditions the electron temperature reaches two maxima, on both sides of the anode. The electron temperature in these maxima decreases from 85 to 40 eV as the anode voltage decreases. The electron temperature in the bulk of the plasma is ~10 eV for all considered conditions. As for the ion temperature, it is noticeable that the two maxima in T_i vanish as the local ion number density in the vicinity of the anode vanishes (see figure 19).
In figure 25 the energy distribution functions of electrons and ions for different discharge parameters are presented. It can be seen that the energy distribution functions vary mostly with the anode voltage; the dependence on the discharge pressure is not prominent. The electron energy distribution functions can be approximated by two-temperature distributions. According to the results of the numerical simulation, the energy distribution function of the ions resembles a plateau in the range 1000-1700 eV for the cases with anode voltage V_a = 2000 V and in the range 250-750 eV for the cases with anode voltage V_a = 1000 V.
In figure 26 a comparison of experimental and numerical data is presented. Blue symbols are results of the numerical simulation; black symbols and lines are experimental data. It can be seen that at higher pressures the agreement between numerical and experimental data is satisfactory, while at low pressures the discrepancy is large. Nevertheless, the presented numerical model of the Penning gas discharge in helium correctly describes the trend in the discharge current, i.e., the discharge current increases with increasing pressure and anode voltage. If one sums the steady-state currents through the anode and the cathode (taking the sign of the current into account), the resulting value oscillates near zero; the maximum deviation of this value varies in the range 14-30% of the discharge current shown in figure 26.
Conclusion
In this study a numerical model of the Penning gas discharge in helium was described. The model was applied to the simulation of the electrodynamic structure of the helium plasma of a large-volume Penning discharge at experimentally investigated parameters. The electrodynamic structure of the Penning gas discharge in helium at pressures p = 3·10⁻⁴-9·10⁻⁴ mbar and anode voltages V = 1000-2000 V was obtained and analyzed. Distributions of the potential, electric field, charged-particle number densities, temperatures and energy distribution functions are presented. The results of the simulation are compared with the experimental data for validation of the numerical model.
The presented numerical model of the Penning gas discharge in helium correctly describes the trend in the discharge current. The agreement between the numerically predicted and experimental discharge currents is acceptable, although at low pressure the discrepancy is significant. | 2019-07-26T12:37:15.078Z | 2019-06-01T00:00:00.000 | {
"year": 2019,
"sha1": "ee053aa2b35583c951711bbf76df47ea7916926c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1250/1/012038",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "f49445372e967aea79b8b665d145dc5283a73fb1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
221203031 | pes2o/s2orc | v3-fos-license | Topological polarization, dual invariants, and surface flat band in crystalline insulators
We describe a three-dimensional crystalline topological insulator (TI) phase of matter that exhibits spontaneous polarization. This polarization results from the presence of (approximately) flat bands on the surface of such TIs. These flat bands are a consequence of the bulk-boundary correspondence of polarized topological media, and contrary to related nodal line semimetal phases also containing surface flat bands, they span the entire surface Brillouin zone. We also present an example Hamiltonian exhibiting a Lifshitz transition from the nodal line phase to the TI phase with polarization. Utilizing elasticity tetrads, we show a complete classification of 3D crystalline TI phases and invariants. The phase with polarization naturally arises from this classification as a dual to the previously better-known 3D TI phase exhibiting quantum (spin) Hall effect. Besides polarization, another implication of the large surface flat band is the susceptibility to interaction effects such as superconductivity: the mean-field critical temperature is proportional to the size of the flat bands, and this type of systems may hence exhibit superconductivity with a very high critical temperature.
I. INTRODUCTION
The best-known topological insulators [1] in two dimensions are characterized by robust edge states and a (spin) Hall conductivity [2] quantized in units of σ_0 = e²/h. In three dimensions, the conductivity scales like σ_0/[l], where [l] is a length scale characteristic of the system under consideration. In crystalline matter, the relevant length scale is obtained from the lattice vectors [3,4], and the quantum Hall response is different in different directions specified by the reciprocal lattice vectors.
Another type of topological response is the electric polarization. It has been discussed e.g. in Refs. [5-9], and it has recently attracted renewed interest [10-14]. The quantized quantity is the 2D polarization charge density, which scales like 1/[l²]. In crystalline media we may hence expect the relevant length scales to be associated with the crystal lattice vectors.
A natural framework to describe the topological response in crystalline media is in terms of elasticity tetrads E^a_μ = ∂_μ X^a, where X^a counts the number of lattice planes along the crystal direction a. They are a convenient way to discuss semi-classical hydrodynamics, elastic deformations and conserved charges in terms of (continuum) lattice geometry [15]. Different from the dimensionless tetrads in the first-order formulation of general relativity, they have the canonical dimension of inverse length inherited from the underlying lattice. More recently, they have been shown to enter the field-theoretical topological response of crystalline insulators with additional conserved lattice charges, specified by integer-quantized momentum-space invariants [16,17], and they can be extended [18] to relativistic quantum fields and gravity [19-24]. In these cases, the associated crystal lattice is not necessarily due to a periodic real-space structure on which the fermionic system is placed but can be induced by interactions and/or other superstructures in the relevant ground state [25].

Figure 1. The 3D QHE [2,26] is determined by the topological charge N_a integrated over the surface spanned by a pair of tetrads E^a, whereas the polarization jump is described by the dual charge N^a integrated along a single E^a.
Here we show that, using a combination of the elasticity tetrads E^a_μ and the electromagnetic gauge field A_μ, one can in 3+1 dimensions construct four different types of topological terms in the electromagnetic action. They are presented in Eqs. (5), (7), (9) and (13), and correspond to dual responses containing different numbers of elasticity tetrads: three and zero, or one and two, respectively. As we discuss in Sec. II B, the first two describe the trivial band insulator and axion electrodynamics, respectively. The third term describes the 3D quantum Hall effect [16], and the fourth the topological polarization. The latter two phases are schematically described in Fig. 1.
Besides quantized bulk polarization, the TI phase with polarization implies protected boundary modes. There is a marked difference between the boundary modes of the QHE-type topological insulators and those exhibiting surface polarization. Namely, whereas the former form chiral "half-Dirac" surface modes [27], the surface states in the latter case form approximate flat bands spanning the entire surface Brillouin zone (BZ). Similar flat bands are found on the surfaces of nodal line semimetals [28-31], nodal line superconductors [32] and superfluids [33]. However, in those cases the flat bands span only part of the surface BZ, corresponding to the projection of the nodal lines to the surface.
Accordingly, here we discuss the topological polarization and flat band using an extension of a simple model [28] to the range of parameters relevant for crystalline insulators (or superconductors). In this extension, the multiple Dirac points in a layered quasi-2D system evolve into a flat band, which occupies the whole 2D BZ on the boundaries of the 3D system as the number of atomic layers increases. This is accompanied by the formation of a topological crystalline insulator state in the bulk. In the numerical model, we consider the topological response and the corresponding topological invariants for the bulk topological insulator in terms of the elasticity tetrads, and we calculate the generalized polarization, matching it to the polarization carried by the surface flat bands.
II. TOPOLOGICAL POLARIZATION AND DUAL INVARIANTS IN CRYSTALLINE INSULATORS
In this section, we first review the elasticity tetrads, following Ref. [16], representing continuum translational gauge fields in the crystalline system. We then discuss the dual forms of topological responses arising from the elasticity tetrads in crystalline insulators in three dimensions. These are, respectively, the total charge conservation and the theta term, and the three-dimensional quantum Hall effect and the topological polarization. Although charge transport is suppressed by the mobility gap in insulators, strictly speaking the system is not gapped, since e.g. the elasticity tetrads explicitly include symmetry-breaking Goldstone modes. This carries over to the responses, which are quantized in terms of the invariants and combinations of the elasticity tetrads. Throughout the paper, we work mostly in units where ℏ = e = 1.
A. Elasticity tetrads
Let us consider the theory of crystalline elasticity using the approach of Refs. 15 and 16. An arbitrary, weakly deformed crystal structure can be described as a system of three crystallographic surfaces, Bragg planes, of constant phase

X^a(x) = 2π n^a, n^a ∈ Z, a = 1, 2, 3. (1)

The intersections of the surfaces (1) represent the lattice points of the deformed crystal. In the continuum limit, the elasticity tetrads are gradients of the phase functions:

E^a_i(x) = ∂_i X^a(x), i = x, y, z, a = 1, 2, 3, (2)

for a three-dimensional spatial crystal. For simplicity, we work with the orthorhombic unit-cell lattice system, but the generalization to other lattice symmetries and bases is straightforward and does not affect the general results. Generalizing to temporal directions, in an equilibrium (spacetime) crystal lattice the quantities E^a_μ are lattice four-vectors of the reciprocal (four-dimensional) Bravais lattice. Here E^a_t would describe dynamic changes in the lattice, such as phonons, whereas E^0_μ would correspond to a periodicity in time. In what follows, we concentrate on the static case, but the formulas are readily generalizable to the dynamic case as well. In a deformed crystal, but in the absence of dislocations, the tetrads E^a_μ satisfy the integrability condition of vanishing torsion [16]

∂_μ E^a_ν − ∂_ν E^a_μ = 0. (3)

These tetrads have the dimension of inverse length, [E^a_μ] = 1/[l], being gradients of dimensionless functions X^a. This, and the presence of finite lattice symmetries, is the main difference from the dimensionless tetrads used in theories of general relativity. However, also in some theories of gravity the tetrads naturally have dimension 1/[l], see e.g. Refs. 18-24. Moreover, due to the periodicity of the crystal, the functions X^a play the role of continuum U(1) fields, and thus the tetrads play the role of tautological vector potentials representing effective gauge fields corresponding to conserved lattice charges in different directions. In a crystalline topological insulator/superconductor, their products correspond to the approximate low-energy (higher-form) symmetries of charge conservation along lattice lines or surfaces below the mobility/quasiparticle gap [34]. In this way, the higher-dimensional bulk state can still be (weakly) topologically non-trivial, with associated non-zero and quantized momentum-space invariants in the response. For similar ideas, see [35,36]. The remaining elasticity tetrad fields enter in new crystalline topological terms and contain a mixture of the electromagnetic A_μ and elastic E^a_μ gauge fields [34], as we next discuss.
B. Lattice volume and 3D theta term
To set the stage, we first discuss the tautological topological conservation law of lattice charges in an insulator and its dual response. The three-dimensional lattice volume form is

V(x) = (1/3!) ε^{ijk} ε_{abc} E^a_i(x) E^b_j(x) E^c_k(x) = det E^a_i.

Related to this, the insulator has an integer number of filled electronic bands per unit cell (minus the positive background charge) and no free charges below the mobility gap [25]. The charge density

n(x) = N_ω V(x)/(2π)³ (4)

couples to the electric potential A_0 [16,17] via the topological action

S[A_0, E] = −∫ d⁴x A_0 n(x), (5)

where the invariant, in terms of the (semiclassical) Green's function G(ω, p),

N_ω = tr ∫ (dω/2πi) G(ω, p) ∂_ω G⁻¹(ω, p), (6)

counts the number of occupied states in the BZ. The invariant N_ω ≡ N_ω(p) can only change when the gap in the spectrum closes. This makes N_ω the simplest topological invariant possible. Note that the lattice vectors represented by the elasticity tetrads carry spatial indices only, therefore singling out the potential A_0. While the conservation of lattice volume is tautological to charge conservation below the (mobility) gap, and must be compensated by overall charge neutrality over the unit cell, the response (5) can be non-trivial when considered on the surface of a topological state with polarization [17], see below. This arises since the boundary response is dictated by overall conservation laws from non-trivial higher-dimensional bulk terms and is often anomalous as a purely lower-dimensional theory. The response, thus understood, applies to insulators under elastic deformations, i.e. with coordinate dependence of the tetrads E^a. The dual topological response corresponding to the lattice volume is the bulk theta term of axion electrodynamics [37], coupling to the zero-dimensional lattice points, i.e. the response of the original point charges:

S_θ = (1/8π²) ∫ d⁴x θ(x, t) ε^{μνλρ} ∂_μ A_ν ∂_λ A_ρ, θ = π N_θ, (7)

where N_θ
is the invariant corresponding to the whole frequency-momentum space, extended by the periodic adiabatic parameter u [16,37-39]. The invariant is equal to the second Chern number, and therefore reduces to an integral over the physical BZ. Time-reversal invariance and electric charge conservation suffice for a non-trivial N_θ but are not necessary. With some other protecting symmetry the invariant takes an analogous form (8), in which the operator representation K of the symmetry transformation enters the trace. Examples of such topological invariants are provided by superfluid ³He-B and the Standard Model of particle physics [40], when they are considered on the lattice. In other words, whereas in Eq. (6) p is fixed and the integral goes over the frequency, for the dual invariant the frequency is fixed and the integral goes over the momenta. Combined with the protecting symmetry, the theta term mod π implies protected boundary modes on the surfaces of the insulator [37]. The invariant N_θ(x, t) is not quantized in general, however, and can be non-integer mod π for solitonic configurations [39,41], see also [42].
Next we discuss non-trivial topological crystalline responses that are tantamount to extra crystalline topological conservation laws, featuring the elasticity tetrads, in addition to electric charge conservation (gauge invariance). These also imply protected boundary modes in associated crystal directions.
C. Anomalous QHE in 3D topological insulators
In particular, the elasticity tetrads are important in the field-theory description of the intrinsic (without external magnetic field) quantum Hall effect in 3D topological and axion insulators [26]. The corresponding topological response contains the elasticity tetrad as a dynamical lattice gauge field, combined with the electromagnetic gauge field in a Chern-Simons-type topological term [3,16,43-45]:

S[A, E] = (1/4π) Σ_a N_a ∫ d⁴x ε^{μνλρ} E^a_μ A_ν ∂_λ A_ρ. (9)

The response resulting from this action is topologically non-trivial and the prefactor is expressed in terms of the topological charges in momentum space. This implies chiral fermion modes on the boundary, relevant for the 3D QHE along N_a E^a (see Eq. (12) below), as well as on dislocations [3]. For superconductors, only the gauge-invariant combination A_μ − ∂_μ φ,
where φ is the symmetry-breaking phase mode, can enter [46], leading to chiral Majorana modes instead. The three independent integer-quantized coefficients N_a are expressed in terms of integrals of the Green's functions in energy-momentum space [43,44]:

N_a = (1/24π²) ε^{μνλ} tr ∫ dω d²p G ∂_μ G⁻¹ G ∂_ν G⁻¹ G ∂_λ G⁻¹. (10)

Here the momentum integral is over the 2D torus, the 2D boundary S_a of the elementary cell of the 3D reciprocal lattice (see Fig. 1), and the indices μ, ν, λ run over the frequency and the two momenta in S_a. For a simple, say, orthorhombic lattice as in Fig. 1, the topological charge describing the QHE in, say, the (x, y) plane is N_z. It is the integral over the (p_x, p_y) plane of the elementary cell of the reciprocal lattice at fixed p_z:

N_z(p_z) = (1/24π²) ε^{μνλ} tr ∫ dω dp_x dp_y G ∂_μ G⁻¹ G ∂_ν G⁻¹ G ∂_λ G⁻¹. (11)

In gapped crystalline insulators with AQHE this integral does not depend on p_z, signaling the quantized response (9). While in 2D crystals the topological invariant describes the quantization of the Hall conductance, the topological invariants N_a in 3D crystals describe the quantized response of the Hall conductivity to deformations [3]:

σ_{ij} = (1/2π) ε_{ijk} Σ_a N_a E^a_k. (12)

The presence of the reciprocal lattice vector E^a_k, of dimension 1/[l], leads to the correct dimensions of the 3D conductivity, as expected.
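In practice, for a non-interacting model the Green's-function invariant (11) reduces to the Chern number of the occupied band at fixed p_z, which can be evaluated with the standard lattice discretization of the Berry curvature (the method of Fukui, Hatsugai and Suzuki). A minimal sketch, using a hypothetical two-band QWZ-type Hamiltonian as a stand-in rather than a model from this paper:

```python
import numpy as np

def chern_number(h_of_p, nk=60):
    """Chern number of the lowest band of a 2D Bloch Hamiltonian h_of_p(kx, ky),
    i.e. N_z of Eq. (11) for a non-interacting model at fixed p_z."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    u = np.empty((nk, nk), dtype=object)
    for i, kx in enumerate(ks):          # occupied eigenvector on a k-grid
        for j, ky in enumerate(ks):
            _, vec = np.linalg.eigh(h_of_p(kx, ky))
            u[i, j] = vec[:, 0]
    F = 0.0
    for i in range(nk):                  # sum Berry flux over all plaquettes
        for j in range(nk):
            u1, u2 = u[i, j], u[(i+1) % nk, j]
            u3, u4 = u[(i+1) % nk, (j+1) % nk], u[i, (j+1) % nk]
            prod = (np.vdot(u1, u2) * np.vdot(u2, u3) *
                    np.vdot(u3, u4) * np.vdot(u4, u1))   # U(1) link product
            F += np.angle(prod)
    return F / (2.0 * np.pi)

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
h = lambda kx, ky, m=-1.0: np.sin(kx)*sx + np.sin(ky)*sy + (m + np.cos(kx) + np.cos(ky))*sz
print(chern_number(h))   # nonzero integer for 0 < |m| < 2, zero for |m| > 2
```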
D. Polarization and flat bands in 3D topological insulators
The three topological invariants N_a responsible for the 3D QHE are expressed in terms of integrals over three planar cross sections of the elementary cell of the three-dimensional reciprocal lattice, specified by perpendicular lattice directions. In three dimensions, there is another class of topological invariants, represented by three invariants N^a expressed as line integrals along vectors of the reciprocal Bravais lattice, which couple to the perpendicular planes as in Fig. 1. Such a line forms a closed loop in the crystal that can, for example, accumulate a Zak phase π, see e.g. Refs. 28, 29, 10 and 13.
The invariants N^a are related to a topological response that can be considered dual to the action (9), with one gauge field A_μ substituted by a tetrad gauge field. This is given by the following topological term in the action:

S[A, E] = (1/8π²) Σ_a N^a ∫ d⁴x ε^{μνλρ} ε_{abc} E^b_μ E^c_ν ∂_λ A_ρ. (13)

Since the term (13) is linear in the electric field E = ∂_t A − ∇A_0, the three invariants N^a (a = 1, 2, 3) characterize the topological polarization δS[A, E]/δE along three directions. This leads to induced boundary charges from the bulk, in addition to modes bound on dislocations [16,17]. The boundary charges are described by the action, assuming constant A_μ along a at the boundary for simplicity,

S_surf = (ΔN^a/8π²) ∫ dt d²x ε_{abc} ε^{ij} E^b_i E^c_j A_0, (14)

where ΔN^a is the (integrated) bulk-boundary jump in N^a and the integral is over the boundary perpendicular to the direction a. It describes the surface polarization charge density coupling to A_0. Similar to the case of Eq. (5), the static elasticity tetrads single out only the A_0 term. For superconductors, the combination A_0 − ∂_t φ can enter [46], with Majorana modes from the polarization [14]. Moreover, the boundary theory can be anomalous when considered without the associated bulk [17,25]. From the comparison of the polarization to the Zak phase, see e.g. Refs. [10,13,14] for insulators and superconductors and [28,29] for gapless systems, we conclude that in some cases the invariants N^a can be written simply in terms of an effective Hamiltonian H(p) = 1/G(p, ω = 0), the inverse of the Green's function at zero frequency. The polarization invariant can be linked more generally to the semi-classical expansion of the momentum-space invariants discussed in Ref. 16. Here we assume that the insulator is PT symmetric, i.e. it obeys the combination of time-reversal and space-inversion symmetries, and thus the PT operation commutes with the Hamiltonian. It is important that the operator PT is local in momentum space (see also [47,48]), so that we can write the invariant in terms of the effective Hamiltonian. In particular, for an orthorhombic lattice the invariant is

N³(p_⊥) = (1/4πi) tr [ PT ∮ dp_z H⁻¹ ∂_{p_z} H ]. (15)
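For a two-band chiral Hamiltonian H(p_z) = g_x σ_x + g_y σ_y with the matrix part of PT given by σ_z (the situation of the model in Sec. III), the trace in Eq. (15) reduces to the winding number of the complex number g = g_x + i g_y around the loop in p_z. A minimal numerical sketch, using a hypothetical SSH-like g(p_z) rather than the paper's Eq. (19):

```python
import numpy as np

def winding_number(g_of_pz, n=400):
    """Winding of g(p_z) = g_x + i*g_y around the origin along the closed
    loop p_z in [0, 2*pi); equals N^3(p_perp) for H = g_x*sx + g_y*sy."""
    pz = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
    g = np.array([g_of_pz(p) for p in pz])        # complex samples of g(p_z)
    dphase = np.angle(g[np.r_[1:n, 0]] / g)       # branch-safe phase increments
    return int(np.rint(dphase.sum() / (2.0*np.pi)))

# Hypothetical example: g(p_z) = t1 + t2*exp(i*p_z) winds once when t2 > t1.
g_ssh = lambda pz, t1=0.4, t2=1.0: t1 + t2*np.exp(1j*pz)
print(winding_number(g_ssh))   # -> 1
```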
Similar to the invariant N_z(p_z), which does not depend on p_z in insulators, the invariant N³(p_⊥) does not depend on the transverse momenta p_⊥ in gapped systems (insulators or superconductors).
In non-interacting PT-symmetric insulators [13] and superconductors [14] these invariants determine the Berry-phase change along the loop (the Zak phase), which is 2πN^a. In nodal line semimetals the non-zero Zak phase produces zero-energy surface states, which form a flat band [28,29]. In gapped crystalline insulators, where the invariants do not depend on p_⊥, the flat band occupies the whole Brillouin zone on the corresponding boundaries of the sample. Note that the exact flatness of these surface bands relies on a chiral symmetry, often present especially in nodal line superconductors [32] but also in approximate descriptions of nodal line semimetals [28]. This symmetry is not necessary for the stability of the nodal lines [31,49], but in its absence the surface states become "drumhead" states with some dispersion.
The same is expected for the flat bands of the crystalline insulator [13].
E. Dual invariants and quantized electric polarization response
To gain insight into the dual responses, including the polarization, let us consider for simplicity an orthorhombic crystal with an electric field along z. From Fig. 1, the invariants N^a can be considered as geometric duals to the three invariants N_a in the crystalline lattice. While the invariant N_3 is an integral over the surface formed by two vectors, E^1 ∧ E^2, the invariant N³ is an integral along a path in the direction of the vector E^3. They couple, respectively, to the tetrads E^3 and E^1 ∧ E^2 in the response.
We now focus explicitly on the polarization. Then the appropriate part of the action (13) contains the invariant N³:

S = (N³/4π²) S_12 ∫ d⁴x E_z, (16)

where S_12 = E¹_x E²_y is the area of the 2D BZ in the plane perpendicular to the normal of the considered boundary.
Electric polarization is determined as the response of the action to the electric field E in the limit of an infinitesimal electric field, E → 0. From Eq. (16) it appears that for the topological insulator with N^a ≠ 0 the polarization is non-zero at zero electric field, which is however forbidden by parity symmetry, or by PT invariance. In fact, it is forbidden for the infinite sample, while in the presence of boundaries it is possible, since boundaries violate parity symmetry, similar to the time-reversal symmetry and the surface modes associated with the theta term. In the presence of two boundaries there are two degenerate ground states with opposite polarization. In one state the positive electric charges are concentrated on the upper boundary (with electric charge +|e|/2 per state in the flat band), and the negative charges are on the lower boundary. In the other degenerate state the polarization is opposite. The first state is obtained as a response to the electric field E_z → +0, while the second state is obtained in the limit E_z → −0. This means that the integer topological polarization can be considered as the difference in polarization when the electric field changes sign.
Recent calculations of the topological polarization in nodal loop semimetals have been done in Ref. 8. We consider this for crystalline topological insulators, where the response is quantized in terms of the elasticity tetrads. Similar to the response of the QHE to deformations in Eq. (12), which is quantized in crystalline topological insulators in terms of the invariants N_a, the response of the topological polarization to strain is quantized in terms of the invariants N^a. From Eq. (16) it follows that the quantized response corresponding to the polarization P_i = δS/δE_i|_{E→0} is to the deformation of the cross-sectional area in the reciprocal lattice:

δP_i = Σ_a (N^a/8π²) ε_{abc} ε_{ijk} δ(E^b_j E^c_k). (17)

For the simple orthorhombic crystal and for polarization along z this becomes

δP_z = (N³/4π²) δ(E¹_x E²_y) = (N³/4π²) δS_12. (18)

The quantized variation of the polarization with respect to deformation is an example of a well-defined "differential" polarization [6,7]. Note that the polarization itself is not quantized, depending on (the surface spanned by) the reciprocal lattice vectors, but its derivative with respect to deformation in Eq. (17) is quantized.
III. POLARIZATION AND FLAT BAND IN A NUMERICAL MODEL
In 3D topological insulators, the same invariant N^a hence implies both the flat band on the surface of the material and the topological polarization in the bulk response. In general terms, this is an example of bulk-boundary correspondence, or anomaly inflow from the bulk to the boundary, as discussed above.
More concretely, this follows since for each p_⊥ the system represents a 1+1D topological insulator, and thus for each p_⊥ there should be a zero-energy state on the boundary. Thus, for topological insulators with non-zero N^a the flat band exists on the surface for all p_⊥. This is distinct from nodal line semimetals, where the region of the surface flat band is bounded by the projection of the nodal line to the boundary. The topological insulator phase can be obtained when the Dirac loop is moved to the boundary of the BZ. This can be verified using an extension of the model of Ref. 28, a layered two-band Hamiltonian whose infinite-layer limit has the chiral form H(p) = g_x(p) σ_x + g_y(p) σ_y with an interlayer coupling parameter t (Eq. (19)). For low enough t, the nodal line is found at the momenta p_x, p_y, p_z that simultaneously nullify the coefficients of σ_{x,y}. This model has three different phases depending on the value of the coefficient t, as illustrated in Figs. 2 and 3. For t < 1, the first Brillouin zone contains four spiral lines, one inside it, the others going through the Brillouin zone boundaries. In this case there are surface flat bands at the projections of the spirals onto the surfaces. At t = 1 these lines touch and cut each other, forming closed nodal-line loops for 1 < t < √2. The projections of these loops onto the surface still mark the boundaries of the surface flat bands. Finally, at t = √2 the loops shrink into four nodal points, which vanish for t > √2, in which case the system forms a topological insulator. In this case the flat band extends throughout the 2D Brillouin zone of the transverse momenta. This behavior is qualitatively similar to that found for the slightly more complicated model of the rhombohedrally stacked honeycomb lattice [49]. The topological invariant (15) is given in Eq. (5) of Ref. 28, where the role of the PT operator is played by σ_z. In terms of the unit vector ĝ(p, ω) ≡ g/|g| of the Pauli components of the Green's function, the invariant is given in Eq. (8) of Ref. 28; at zero frequency it is the winding number

N³(p_⊥) = (1/2π) ∮ dp_z (ĝ_x ∂_{p_z} ĝ_y − ĝ_y ∂_{p_z} ĝ_x). (20)

For the Hamiltonian of Eq. (19), in the nodal-line phase corresponding to t < √2, N³(p_⊥) is non-zero inside the projection of the nodal lines onto the 2D space of p_⊥ and zero outside it. On the other hand, in the topological insulator phase with t > √2, N³(p_⊥) = 1 for all transverse momenta.
For a finite number of layers the Hamiltonian becomes a matrix H_ij in the layer indices (Eq. (21)). This can be used to compute the spectrum shown in Fig. 3. Moreover, using the (spinor) eigenstates φ_n(j, p_x, p_y) of the finite-system Hamiltonian corresponding to the eigenenergies ε_n, we also get the charge density at layer j:

ρ_j = e ∫ (d²p_⊥/4π²) Σ_n f(ε_n) |φ_n(j, p_x, p_y)|² − ρ_0. (22)

Here the integral goes over the 2D Brillouin zone of size S_12 of the transverse momenta, f(ε) is the Fermi distribution, and ρ_0 = e S_12/(4π²) ensures a charge-neutral situation at zero chemical potential. We calculate everything at zero temperature.
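The following minimal sketch illustrates Eqs. (21)-(22) at a single transverse momentum. The bulk chiral model is replaced by a hypothetical SSH-like stand-in g(p_z) = a + b·e^{ip_z} (the paper's g_{x,y} of Eq. (19) are not reproduced here), whose finite-layer matrix hosts near-zero surface modes whenever the winding is non-zero, i.e. for |b| > |a|:

```python
import numpy as np

def ssh_slab(a, b, n_layers):
    """Finite-layer chiral Hamiltonian, a minimal stand-in for Eq. (21):
    the bulk model g(p_z) = a + b*exp(i*p_z) becomes an SSH-like chain
    with intra-cell amplitude a and inter-cell amplitude b."""
    hop = np.empty(2 * n_layers - 1)
    hop[0::2] = a      # intra-cell bonds (two sublattices within a layer)
    hop[1::2] = b      # inter-cell bonds (layer j to layer j+1)
    return np.diag(hop, 1) + np.diag(hop, -1)

def layer_density(H, mu=0.0):
    """Occupied-state density per layer: a zero-temperature, single-p_perp
    analogue of Eq. (22) (the Fermi function becomes a step function)."""
    e, psi = np.linalg.eigh(H)
    rho = (np.abs(psi[:, e < mu])**2).sum(axis=1)   # charge per site
    return rho.reshape(-1, 2).sum(axis=1)           # sum sublattices per layer

H = ssh_slab(a=0.4, b=1.0, n_layers=40)   # winding N^3 = 1 since |b| > |a|
e = np.linalg.eigvalsh(H)
print("near-zero surface modes:", e[np.abs(e) < 1e-6])
```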
In a given electric field, the polarization can be computed as the first moment of the layer charge density,

P = Σ_j (j − N/2) ρ_j. (23)

We calculate this polarization in the case of an applied electric field similarly as in Ref. 8 (see Appendix for details) [50]. We mostly concentrate on the case of negligible screening, i.e., we disregard the back-action of the charge density on the electric field. This corresponds to the limit α → 0 in Ref. 8. The results are shown in Fig. 4. Due to the presence of the flat bands, a small electric field leads to a charge density that is antisymmetric with respect to the center of the system (the average charge hence vanishes), i.e., to a non-zero charge polarization. This polarization jumps rather abruptly as a function of the sign of the electric field. The size of the jump is

ΔP = e Ω_FB/(4π²), (24)

i.e., an integer number of electron charges per transverse state, where Ω_FB is the area of the flat band in momentum space. In the topological insulator phase t > √2 the size of the flat band becomes equal to the size of the 2D Brillouin zone, Ω_FB = S_12, and hence we get the result of Eq. (18). Note that the model described here contains a chiral symmetry: H anticommutes with the PT symmetry operator σ_z. Such chiral symmetries are typically not encountered in crystal lattices, but they may be approximate symmetries of their model Hamiltonians (for the case of rhombohedral graphite, see Ref. 51). Chiral-symmetry-breaking terms do not destroy the surface states, but in their presence the surface states become drumhead states with a non-zero bandwidth δε. In this case the polarization no longer exhibits an abrupt jump as a function of the field; rather, the jump has a finite width. Nevertheless, the size of the jump remains the same as in Eq. (24).
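Continuing the previous sketch, the polarization jump can be demonstrated by occupying the negative-energy states in a small field of either sign. The potential profile µ_j = E_z(j − N/2) follows the appendix, while the SSH-like chain and the uniform background charge of one electron per unit cell are assumptions of this sketch:

```python
import numpy as np

def ssh_slab(a, b, n_layers):
    # Same chiral chain as in the previous sketch.
    hop = np.empty(2 * n_layers - 1)
    hop[0::2] = a
    hop[1::2] = b
    return np.diag(hop, 1) + np.diag(hop, -1)

def polarization(n_layers, E_z, a=0.4, b=1.0):
    """First moment of the layer charge density, cf. Eq. (23), under the
    linear potential mu_j = E_z*(j - N/2) of the appendix."""
    H = ssh_slab(a, b, n_layers)
    j = np.repeat(np.arange(n_layers), 2)          # layer index of each site
    H = H - np.diag(E_z * (j - n_layers / 2))      # H_ij - mu_j delta_ij
    e, psi = np.linalg.eigh(H)
    rho = (np.abs(psi[:, e < 0.0])**2).sum(axis=1) - 0.5   # minus background
    rho_layer = rho.reshape(-1, 2).sum(axis=1)
    return ((np.arange(n_layers) - n_layers / 2) * rho_layer).sum()

for Ez in (-1e-4, +1e-4):
    print(f"E_z = {Ez:+.0e}: P = {polarization(40, Ez):+.2f}")
# The dipole moment flips sign between the two field directions; the jump,
# of order n_layers here, corresponds to a surface charge of +-e/2 swapping
# between the two faces, i.e. one electron per flat-band state as in Eq. (24).
```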
IV. CONCLUSION AND OUTLOOK
We discuss the topological responses in three-dimensional crystalline insulators (and superconductors) in terms of dual pairs of invariants and the elasticity tetrads. We focus on the topological polarization response and the associated flat bands at the boundaries. This polarization response and the corresponding modes are distinct from e.g. the bulk theta term in time-reversal-invariant topological insulators, in that they are protected only by the associated (weak) crystalline symmetries. We discuss the relation of the polarization to the other possible invariants in three dimensions and show the explicit momentum-space invariant linking the bulk and the boundary. We also demonstrate the formation of the topological polarization in the case of an example Hamiltonian, specified in Eqs. (19) and (21).
In more detail, the one-dimensional topological invariant (the Zak phase) in Eqs. (15) and (20) describes two related phenomena: the topological response of the polarization to strain and the surface flat band. This demonstrates that the bulk topological polarization implies the filling of the zero-energy surface states and vice versa, constituting an example of bulk-boundary correspondence and the associated anomaly inflow. Notably, the bulk polarization response is a total derivative. Using a simple model, we explicitly verified that the response of the polarization to properly defined deformations is quantized, see Eq. (17), and that the corresponding surface flat band is present throughout the whole BZ. This is distinct from nodal line semimetals, where there is also a flat band, but where this flat band occupies only part of the surface BZ; as a result there is no bulk quantization. The surface polarization and the surface theory become anomalous in terms of a merely two-dimensional description, and they have to be discussed in the context of the bulk-boundary correspondence, including the gapless fermions. However, the polarization difference and its derivative with respect to the deformation become quantized precisely when the nodal loop moves to the boundaries of the BZ and annihilates, forming a gapped topological insulator.
This situation is very similar to that in the AQHE, implying the presence of protected chiral edge modes. In 3D topological crystalline insulators it is the derivative of the Hall conductivity which is quantized [16] and well-defined. In the Weyl semimetals such quantization is absent, implying the chiral anomaly from the gapless fermions, but is restored when the Weyl nodes move to the boundaries of the BZ and annihilate forming a topological insulator, see e.g. [16,52,53].
Systems with flat bands are strongly susceptible to interaction-induced broken-symmetry phases such as superconductivity [30]. There, the (mean-field) transition temperature T_c is proportional to the volume of the flat band, if the flat band is formed in the bulk [54], or to the area of the flat band if it is formed on the surface of the sample [29]. Topological insulators have a larger flat-band area compared with the flat bands on the surface of nodal line semimetals, and thus they may have a higher T_c. This is in contrast, for example, to the Moiré superlattice in magic-angle twisted bilayer graphene, where the flat band extends across the first Brillouin zone of the superlattice [55-57]. At the magic angle the superlattice unit cell contains a large number N ~ 10⁴ of atoms, implying a rather small flat band with area ~1/(N a²), where a is the graphene lattice constant. Nevertheless, the recent measurements [58,59] indicate superconductivity with a T_c around a few K. This means that topological insulators with much larger flat bands may be included in the competition whose final goal is room-temperature superconductivity. Other current participants in the race are hydrogen-rich materials, such as H3S, LaH10, Li2MgH16, YH6, etc. [60-64]. In these systems, the large transition temperature results from the large vibrations of the light hydrogen atoms, which increase the electron-phonon coupling. The contributions of the flat band and the vibrations would ideally be combined. The manipulation and control of acoustic vibrations in insulators (which represent massive and massless "gravitons" in terms of the lattice metric and the elasticity tetrads with elastic energy, respectively [23]) is not an easy task. But if the surface flat band of the insulator is in contact with a hydrogen-rich material, then the electron-phonon interaction between phonons in the hydride and electrons in the flat band may conspire to increase T_c even further.
Lastly, we note that a phase with periodic string-like order parameter in spin chains was recently found to lead to topological flat bands of Majoranas [65]. This is probably related to the polarization in the crystalline superconductors. On the other hand, another recent flat-band work describing lattices of fermions with random interactions [66,67] is rather related to the Khodel-Shaginyan Fermi condensate [54].
This work has been supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 694248) and the Academy of Finland (project No. 317118).

Figure 3 caption (fragment): ... of the Hamiltonian (21). The plotted quantity corresponds to the two center eigenvalues, which are the lowest-energy eigenstates at µ = 0 for the particle-hole symmetric Hamiltonian.

Appendix: To obtain Figure 4, we find the eigenstates of the Hamiltonian H_ij − µ_j δ_ij with a layer-dependent potential µ_j. To mimic an electric field in the direction perpendicular to the layers, we follow Ref. 8 and choose µ_j = E_z(j − N/2). | 2020-08-06T01:01:10.469Z | 2020-08-05T00:00:00.000 | {
Using the resulting eigenstates and -energies, we then calculate the charge density Eq. (22) and polarization Eq. (23). Note that this approach neglects the changes into µ j that would come from solving the Poisson equation. It hence corresponds to the limit κ → ∞ or α → 0 in Ref. 8. The case of a finite κ would lead to a possibility of broadening of the polarization step, but would not affect the size of the step. Moreover, we have studied the effects of chiral symmetry breaking terms (that do not anticommute with σ z ). They lead to a non-vanishing bandwidth of the surface states similar to what happens in rhombohedral graphite [51]. As long as such terms are weak, they only broaden the polarization jump but do not change its overall magnitude. | 2020-08-06T01:01:10.469Z | 2020-08-05T00:00:00.000 | {
"year": 2020,
"sha1": "0395f0b72f768561bd2f6dbfbaf29a6cf3db2231",
"oa_license": null,
"oa_url": "https://jyx.jyu.fi/bitstream/123456789/77042/2/PhysRevB.103.245115.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1b9334ea59c636dd80b07030809ee7b2f48ce4f9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
265060996 | pes2o/s2orc | v3-fos-license | Formation of Research Skills of Future Teachers of Mathematics in Solving Olympiad Problems
This article addresses the critical need to enhance the professional training of teachers, specifically in the realm of teaching and research activities. The objective is to improve their cognitive and organizational skills to enhance the effectiveness of future educational endeavors. The article's primary focus is the development of a comprehensive model for instilling research skills in prospective mathematics educators. The study employs the "Diagnostics of Personal Creativity" test by E.E. Tunic as a principal assessment tool. This test aids in evaluating personal curiosity, analytical aptitude, the ability to think innovatively, and problem-solving capabilities, all of which are pivotal indicators of an individual's inclination towards research activities. The proposed model for cultivating research skills among future mathematics teachers hinges on a set of methodological parameters. These parameters are designed to elevate intellectual acumen by encouraging diverse cognitive comparisons within mental activities, forming the foundation for research-oriented thinking. The model underscores the gradual development of cognitive abilities across varying levels of depth, applied to solving complex Olympiad problems with intricate structures. In summary, this article presents a model that promises to significantly enhance the research skills of future mathematics teachers, thereby contributing to the advancement of the teaching profession and the holistic development of young learners.
Introduction
Currently, there are trends toward improving the educational process, in which great importance is given to the formation of the skills of teachers, who in their future professional activities will be responsible for each element of the educational process. They will have to present knowledge of the subject at a high level and to select and organize the teaching process within the framework of an individual approach. This implies integrality, a creative component, and recognition of the informational part of students' knowledge; students may also receive additional education in subjects as part of extracurricular education, where each teacher, explaining the topic and forming disciplinary thinking on the subject, uses the different existing types of methodological approach (Rahmawati et al., 2019).
The information area of the subject can be correctly presented from the standpoint of different didactic positions, with varying degrees of deepening of knowledge of the subject. In this respect, from an academic point of view, the important component is the understanding of the main parameters, solutions, concepts and relationships existing in the field of the discipline, including the subject of mathematics considered here (Andreeva et al., 2019). At present, however, there is a tendency to reduce the assessment score when the solution of an example or problem is not designed exactly as the teacher personally explained it, although it is known that many factors and methods exist which, within the framework of a correct answer, offer a different approach to the design and step-by-step course of the solution, with consistent logical reasoning based on the existing theorems, propositions, parameters and rules of mathematics. Such an approach has deep analytical value and allows the student to use the entire available subject arsenal to obtain the right answer; by stimulating mental activity, it allows the student to gradually solve the emerging chains of tasks based on existing knowledge in the field of the discipline (Charalambous & Praetorius, 2018). In practice, however, such a position is often absent, since the mathematics teacher requires the student to use only those solution models that the teacher presented, and in the form in which they were shown. This fact often has a suppressive influence on the development of the student's personality, not allowing students to fully use their knowledge and mental activity based on the learning criteria formed during earlier training, for example in another school, which are also valid in the field of the discipline (Krokhina, 2016).
The current situation is due to the insufficient development of the research function of teachers. This function includes parameters that allow them to quickly assess a student's train of thought and to predict further conclusions and possible lines of solution at the level of personal flexibility of mind, reflected in an adequate perception of the information received, with the multifaceted possibilities of a mathematical discipline that has many tools for solving problems. In fact, a mathematics teacher should have a high level of thinking, which implies consideration of non-traditional, non-standard approaches to solving the proposed educational materials against the background of personal creative realization, with a high intellectual and analytical synthesis of the information received. This factor is the basis for the formation of research activity, resting on developed personal abilities and parameters that make it possible to form successful research skills. These skills in turn provide an integral methodological approach to teaching and to the assessment of knowledge within the framework of mathematical disciplines, which will maintain high motivation for learning on the part of students, who will understand that they can study the discipline itself with any teacher. Teachers, even within one institution, change from year to year and, owing to their personal development, use different methodological criteria in explaining topics and evaluating students' work (Maass et al., 2019).
Materials and Methods
The main indicators responsible for research activity were assessed with E.E. Tunic's "Diagnostics of Personal Creativity" test. It defines characteristic parameters reflecting the manifestation of personal interest in the new and the ability to synthesize knowledge against the background of deep and detailed analysis, together with the ability to find non-standard types of solutions under the provided conditions with the manifestation of personal creativity. Creativity here combines a variety of concepts important for research and is characterized by flexibility of mind, which allows the teacher to deviate from generally accepted schemes and to respond adequately and in a timely manner, using a competent semantic approach and all available knowledge, including within the vast discipline of mathematics, which is built on logical chains and admits many different types of solution based on all the knowledge and methodological approaches in the field of the discipline. The test also traces curiosity, which reflects the personality's propensity for cognitive activity and the desire to understand various ways of solving the required tasks (within the discipline, in problems, examples, equations and much else) until a competent answer is achieved, even along a different course of solution, which indirectly indicates the presence of creativity in the personal characteristics of those studied (Kaurav et al., 2020).
The selected diagnostic testing method includes 50 questions that allow one to evaluate different properties and characteristics of the respondent's personality. This creates conditions for selecting parameters that support successful performance in the cognitive research process, at the level of personal perception of a multi-level research system, which implies a set of different parameters and knowledge on the part of the students studied.
The research employed a purposive sampling method to select participants for the study. The participants were undergraduate students pursuing a degree in mathematics education at a reputable institution. These selection criteria ensure that the sample represents individuals with a specific interest in mathematics education, in line with the study's focus on future mathematics teachers.
Data Collection Procedures
Administering the "Diagnostics of Personal Creativity" Test: The central data collection tool used in this study was the "Diagnostics of Personal Creativity" test developed by E. E. Tunic. The test was administered to the selected participants to assess their personal curiosity, analytical abilities, creativity, and problem-solving skills. The administration of the test was conducted in a controlled environment to ensure uniform conditions for all participants.
Olympiad Problem Solving: Following the initial assessment using the creativity test, participants were engaged in a series of mathematical problem-solving sessions. These sessions involved tackling Olympiad-style mathematical problems with varying levels of complexity. The participants' solutions were observed, recorded, and analyzed for their problem-solving strategies, creative thinking, and depth of mathematical knowledge.
Data Analysis Techniques: The collected data were subjected to a systematic and comprehensive analysis to derive meaningful insights and conclusions. Quantitative Analysis: The quantitative data from the "Diagnostics of Personal Creativity" test were analyzed using statistical methods. Descriptive statistics such as means, standard deviations, and correlations were calculated to understand the participants' baseline characteristics and the relationships between different variables.
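To make this step concrete, the following minimal sketch shows how per-item means, standard deviations and an inter-item correlation matrix could be computed from Likert-coded responses with Python and pandas; the file name and the participants-by-items layout are illustrative assumptions rather than details taken from the study.

```python
# A minimal sketch, assuming responses are stored one row per participant and
# one column per questionnaire item, coded 1-5; "responses.csv" is hypothetical.
import pandas as pd

responses = pd.read_csv("responses.csv")          # shape: participants x items
means = responses.mean()                          # per-item means
std_devs = responses.std(ddof=1)                  # per-item sample standard deviations
correlations = responses.corr(method="pearson")   # inter-item correlation matrix

summary = pd.DataFrame({"mean": means, "sd": std_devs})
print(summary.round(2))
print(correlations.round(2))
```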
Qualitative Analysis: The qualitative data obtained from the observation of participants' problem-solving sessions were analyzed using qualitative coding techniques. Transcripts and recordings of the sessions were coded for themes related to creative problem-solving, depth of knowledge, and non-standard approaches.
Integration of Data: The quantitative and qualitative data were integrated to provide a holistic understanding of the participants' research skills development. This integration allowed for a comprehensive assessment of the impact of the model on future mathematics teachers.
Ethical Considerations: Ethical guidelines and principles were strictly adhered to throughout the research process. Informed consent was obtained from all participants, ensuring their willingness to participate in the study. Confidentiality of participants' data was maintained, and all ethical approvals and permissions were obtained from relevant institutional review boards. This comprehensive research methodology ensured the collection of robust data and rigorous analysis, thereby enhancing the validity and reliability of the study's findings. It facilitated a deeper understanding of the effectiveness of the research skill development model among future mathematics teachers.
Results and Discussion
A data processing approach was used in which the test parameters obtained with the chosen method were analysed in detail for students who intend to connect their future professional activity with the teaching of mathematics. The analysis took into account the reflective indicators of the respondents' personalities identified during the pedagogical experiment, grouped into target, corrective, performance-evaluative and other structural components. A detailed analysis of the subjects' personal properties and characteristics, including an ordered set of psychological, intellectual, behavioural and emotional qualities, relates to the target personal characteristics from which an individual's predisposition to research activity can be traced: personal curiosity, the desire for new knowledge, and a flexible system of thinking oriented toward effective completion of the task (Goldan, 2019).
The corrective activity of a teacher within a subject has its own characteristics and, at a competent level, should take existing personal qualities into account. Thanks to these qualities the future specialist can follow the student's train of thought and evaluate the correctness of a decision, bearing in mind the versatility of mathematics and judging the work by the depth of knowledge behind a correct result. Such a result may have been reached through the student's own cognitive-exploratory thinking, shaped for instance by additional study with other teachers outside class, and the student may legitimately draw on all of this knowledge, using the various methods of the discipline to arrive at a correct conclusion. The professional competencies of a teacher should therefore be formed with attention to the development of personal research qualities and skills that make such evaluation possible, because this lies precisely within the competence of the personality traits under consideration (Faritov, 2016).
Such an approach will allow the future teacher to carry out corrective work competently without suppressing the student's creative abilities, thinking and self-esteem. This matters especially in mathematics, where even a single problem can be solved in many ways through logical chains of reasoning that rely on different provisions, axioms and rules of the discipline (Sawant & Sankpal, 2021). Mathematics is multifaceted and multi-level: the same correct result can be reached with a different sequence or presentation, or with emphasis on different rules, theorems and formulas, all of which are legitimate. It is precisely on the basis of personality traits associated with a propensity for research activity that future specialists will be able to evaluate such work correctly, considering the course of the solution and its presentation from the standpoint of the correct result obtained, with a detailed analysis of the learner's thought process (Schweisfurth & Elliott, 2019).
Indeed, many scientists who discovered new formulas were looking for ways to solve their tasks in stages that differed from the generally accepted solution, and it was precisely the search for other mathematical moves that often led researchers to new components, formulas or patterns. The formation of research skills in mathematics teachers will therefore deepen their own mathematical thinking and allow them, in the future, to recognise talent for the subject in their students, noting particularly gifted students who, under conditions created by the teachers themselves, can develop their abilities, which is also an important task in education (Venkat & Askew, 2018).
The performance-evaluative indicators of the study made it possible, through a detailed analysis of the results obtained, to identify the main factors and parameters that contribute to the development of the cognitive-research component of the personality and the formation of research skills. These skills will allow future teachers to work with high efficiency and success, giving their students deep knowledge of mathematics on the basis of creative implementation and a systematic analysis of integrated knowledge (Brinkmann, 2018).
A detailed analysis of the results of diagnostic testing made it possible to identify criteria for developing a model that allows students in pedagogical programmes to form research thinking at a high level. The described components of the research operate in interaction with research skills, and the result of the study is understood as a formed model for developing research skills among future teachers of mathematics.
The created model for the successful formation of research skills, together with the effective development of research thinking, is intended to significantly improve the level of teachers' professional competencies. It was introduced into the educational process in several stages: determining the initial level of the personal qualities responsible for research activity; predicting the result and analysing the structure of the work; and searching for different ways of solving problems using all the mathematical laws and formulas needed to reach a competent result step by step. This staged work supports an individual approach to students when considering their solutions to various mathematical problems, including non-standard solutions that differ from the generally accepted ones, against the background of mental activity and creative manifestation. With a form of lesson organisation built on personal motivation for research activity, this creates a successful educational process with fruitful communication, and the parameters needed for developing the model were refined and deepened during pedagogical supervision.
The data obtained were then subjected to statistical processing. At the next stage, the elements described above were used to develop and implement the model for the formation of research skills in the training of future teachers, with a view to its subsequent use in practical teacher education; the results obtained after implementation make it possible to speak of its success. The study covered 75 students. Analysis of the generalised results of diagnostic testing showed that for most respondents the level of personal creative approach in active and productive mental work was low or medium, and only a fifth of them had a high level of creative thinking, which reflects the respondents' weak research characteristics; this is shown in Figure 1. A detailed breakdown of the indicators of curiosity, the ability to solve complex problems, the inclination to follow a new path, the willingness to take the risk needed to find new solutions and move away from the traditional presentation of work, as well as imagination, made it possible to determine the missing components that would support the active side of research work and the solution of search problems in different ways, as shown in Figure 2. These components are needed to ensure the acquisition of the relevant skills and to improve the educational process at a qualitative level, which is reflected in Figure 3.
Figure 3. Distribution of students according to characteristic indicators of propensity for research activities
The information received indicates the need to introduce into practical education methodological parameters that would form the personality traits and skills needed for research activity. This will allow future teachers to organise and deliver the educational process at a high level, on the basis of factors that meet students' developmental needs and support their healthy self-esteem (You, 2018).
The selected parameters show that, at present, the level of inclination toward research activity and the corresponding thinking is such that teachers will proceed mainly from their personal perception and presentation of mathematics, which does not meet the high demands of cognitive and evaluative activity. Based on the results obtained, such future teachers will lower the mark for a solution simply because the example or problem was not completed in the way the teacher personally explained it. A student may have solved the example or problem correctly, but in a form learned from someone else, for example a private tutor who may hold an academic degree or title, have more pedagogical experience than the school teacher, and present the topic with a more effective methodological scheme. For a non-standard presentation of even a correct solution, the mark in official training will be significantly lowered, and the solution is sometimes treated as incorrect, with the explanation that the course of the solution was wrong. This in essence reveals unprofessionalism and a low level of research thinking, because a teacher with developed research skills would be able to solve the same problem in several ways and evaluate it competently from the different methodological positions that exist in the world (Charalambous & Litke, 2018).
In this context it is worth noting that in different countries the same topic is explained with completely different methodology and didactics, and all of these presentations can nevertheless lead to a correct study, solution or explanation of the subject. When a student moves to another region or country and continues education in a school with a different methodological basis, the student faces the problem of having his or her knowledge judged by a teacher who, by virtue of administrative advantage, misinforms the student, disrupts adaptation, and insists that the student solves an example or problem incorrectly, although in reality there is simply a mismatch between the didactic foundations and the form of presentation the teacher is used to (Askew et al., 2019). This attitude undermines the student's self-confidence, lowers self-esteem, significantly reduces the desire to learn, and breeds distrust of the educational process, in which each teacher claims that only he is right and that the teacher at another school, or even the one who taught the subject in a previous year, was mistaken, because the form of solution the student was taught earlier is now treated as incorrect (Rappleye & Komatsu, 2017).
This situation disorients the student, who begins to suspect that the current teacher gives incorrect information that the next teacher will in turn judge from yet another position. It also undermines collegial relations among teachers, who compromise one another when the form of presentation and the accepted ways of solving problems differ. All of this suggests that such teachers lack the flexibility of thinking and the research skills needed to study carefully the course of a solution and to recognise different methodological elements when judging whether a problem, example or equation has been solved correctly. A teacher without well-formed research skills therefore affects students negatively, lowering grades on which further study in various educational institutions depends, suppressing the student's personal development and emotional state, and creating a negative attitude toward other teachers who have different requirements for explaining a topic and presenting a solution (Maruthavanan, 2020). On the basis of a methodical analysis of the criteria and parameters described above, a model was developed for the formation of research skills among future teachers of mathematics. The model is designed to raise their intellectual level and develop research thinking through the analysis of multi-level Olympiad problems of varying complexity, with several different solutions given for one and the same problem. Producing several substantially different solutions that use different mathematical formulas and rules requires deep knowledge of mathematics, a creative approach, imaginative thinking and a search for the problem points that open up different routes to a competent answer. Working through Olympiad problems in this way develops flexibility and breadth of mental activity, creates the prerequisites for an individual approach, and prepares future teachers to analyse the many correct solution schemes their own gifted students may produce, allowing those students to realise their creative potential at a high level (Scherer et al., 2019).
At the control stage of the study, after the introduction of the created model into practical education, repeated testing was carried out to identify the dynamics of the indicators responsible for the formation of research thinking and skills in future mathematics teachers. The data obtained confirm the success of the developed model: the generalised results show a high degree of development of research skills and formed research thinking in the majority of respondents, as shown in Figure 4. Figure 5 visualises the specification of the criteria for the development of research skills, with indicators of flexibility and breadth of mental activity and understanding, so that a competent result can be reached along many different paths using various mathematical formulas, rules and concepts. This raises the cognitive and research function of teachers while developing their intellectual abilities, the speed of their analytical reactions, observation and imaginative thinking. Pedagogical clarification also revealed that all students noted an increase in the depth of their knowledge of mathematics and a significant strengthening of the logical side of their solutions when they presented answers to Olympiad problems in three or more variants, which reinforced their ability to master mathematical knowledge and skills and deepened their knowledge in this area.
Thus, the analysis of the results at the control stage of the pedagogical experiment showed that the developed model for the formation of research skills significantly increases the effectiveness of professional pedagogical education by qualitatively improving the professional competencies of future teachers, who will be better able to organise the educational process and evaluate the work of their future students, supported by their own achievements in cognitive research activity. Teaching mathematics on the basis of increased research activity, with various methodological approaches and didactic foundations applied to the mathematical problems set in class, will allow students to master the full range of mathematical knowledge and apply it competently in practice, drawing on a diverse set of formulas and patterns and a full synthesis of multi-level knowledge. Solving an Olympiad problem in several ways, including non-standard ones, enhances intellectual activity and thinking, raises the level of personal development, and trains the search for different approaches to the right answer within a competent overall presentation, which in turn increases self-expression, self-esteem, creative self-realisation and the development of abilities in general (Kaurav et al., 2019). Building research thinking on the diverse solution of Olympiad problems of increased complexity also raises both pedagogical and mathematical professionalism through self-improvement of one's knowledge of mathematics, strengthens the motivation to study the subject, and will later allow teachers to conduct lessons at a high level, where their ability to captivate students and interest them in mathematics is amplified by their own interest in explaining the logical chains of the discipline. This approach also strengthens creative communication between teacher and student, helping to create a favourable learning atmosphere (Grigorenko, 2018).
Self-improvement of professional skill through many variants of solving integral, multi-level Olympiad problems also has a training effect: the future teacher applies deep mathematical knowledge in full and, having produced several solutions to a complex Olympiad problem, demonstrates good command of that knowledge. Within the developed model, this allows students to realise their considerable potential as future specialists, which increases their motivation for further in-depth study of the discipline and for personal self-education, and thereby activates independent work in mathematics. Actualising the student's personal potential in this way strengthens willpower, observation, concentration and the analytical sphere, reinforces the goals of achieving better results in educational, scientific, practical and professional activity, and raises status and self-esteem together with faith in one's own strengths, ensuring a high level of self-determination in vocational guidance as a specialist who has already achieved a notable positive result during study (Ilyasov & Utegenova, 2019). The effectiveness of the developed model is evaluated by the student's level of cognitive and analytical activity, the speed of assimilation of new information, the flexibility and coordination of mental activity, and other indicators of mental work and the development of intelligence, all of which are formed successfully through solving Olympiad tasks in several variants.
Thus, the developed model for the formation of research skills and research thinking, together with intellectual development and imaginative thinking, has shown itself to be effective and can be used in practice as part of the professional training of future teachers.
Conclusion
As part of modern trends aimed at improving the educational process through the introduction of pedagogical models and methods that increase not only motivation but also students' abilities, a model has been developed for the successful formation of research skills among mathematics teachers based on solving Olympiad tasks in several variants. Searching for non-standard solutions allows future teachers to develop their own talents and ingenuity on the basis of deep knowledge of mathematics, increases interest in studying the subject, and trains them to check the correctness of their own reasoning and chosen solution paths. Detailed analysis of a multifaceted, complex Olympiad task, with integration of its data, evaluation of one's own train of thought and prediction of the possible result, contributes to the development of talents and mathematical abilities, fosters the habit of performing logical mental operations in non-standard ways on the basis of creative thinking, and collectively increases intellectual abilities, mental coordination and observation, while teaching students to overcome psychological inertia and to look at one task from different angles in a deep search for various competent solutions. From this position, the created model for the development of research skills, built around the methodological parameters of solving complex, multi-level Olympiad problems in several versions, becomes a successful and effective pedagogical tool for developing personal abilities, talents and productive thinking at a deep level, together with greater creative and independent activity.
Figure 1. The distribution of future specialists depending on the degree of formation of the level of their creative thinking in the framework of the propensity for research activities
Figure 4. The distribution of students depending on the degree of formation of the level of their creative thinking at the control stage of the pedagogical experiment
Figure 5. Detailed parameters responsible for the propensity for research among respondents at the control stage of the study
Chinese College English Teachers' Ability to Develop Students' Informationized Learning in the Era of Big Data: Status and Suggestions
College English teachers should vigorously promote the integration of the latest information technology with curriculum teaching. The application of modern information technology to college English teaching has not only made teaching methods more modern, diversified and convenient, but has also changed teaching concepts and teaching contents. The ultimate goal of integration is to promote the development of students' informationized learning ability. The purpose of this paper is therefore to investigate college English teachers' development of students' informationized learning ability in the era of big data and to explore a motivation system that encourages teachers to develop this ability. A questionnaire survey was conducted in three universities with different degrees of informationization. It is found that the universities do not invest enough to accommodate the future development needs of smart education; that the teachers show weak ability to use software and fail to apply it to the design and development of teaching resources; and that the nation has not issued institutional policies or offered enough time and opportunities for teachers' training. It is suggested that the development of teachers' informationized teaching ability should strike a balance between promoting teachers' teaching ability and enhancing students' learning ability; that China should make dynamic adjustments to the assessment and monitoring system of teachers' informationized teaching ability; and that China should keep pace with the times in its investment in education and gradually improve its policy supports and incentives.
INTRODUCTION
Undoubtedly, the wide application of modern information technology (especially the Internet and multimedia technology) injects fresh power into college English teaching. The new teaching platform provides college students with convenient ways to develop their comprehensive English abilities. In the traditional teaching concept, teachers assume the responsibility of "propagating the doctrine, imparting professional knowledge and resolving doubts", so they are the absolute authority of knowledge (Mei, 2011, pp. 165-170). In the new teaching model, however, teachers are responsible for designing and developing the learning process and resources, guiding and promoting students' learning, organizing and coordinating collaborative learning, and monitoring and assessing the learning process, so they are no longer the "center" of the class. With the continuous development of informationized learning, education services are provided not only by schools but also by social institutions and various networks. As a result, both on-campus and off-campus learning services are available, realizing an integrated on-campus and off-campus as well as online and offline education service experience. Moreover, convenient information services have prompted the reconstruction of the foreign language teaching and learning model. Independent learning, U-learning, social learning, game-based learning, inquiry-based learning in simulation environments, remote real-time collaborative learning and social interactive learning will become the main forms of student learning in the future information society (Fang & Chen, 2018, pp. 57-62).
As the information society puts forward new requirements on the teaching ability of teachers, students' learning abilities also change accordingly. Previous studies (Ran, 2014; Ran & Yang, 2014; Shen & Li, 2017; Wang, 2009) focus on the improvement of teachers' effective teaching ability and the promotion of teachers' professional development in the information environment. At present, much attention is paid to the development of students' abilities, which indicates that the improvement of teachers' teaching ability is meant to promote the development of students' learning ability. This trend is confirmed by other countries' standards for teachers' technical competence. It is therefore believed that the development of teachers' informationized teaching ability aims at promoting the development of the informationized learning ability of students who adopt different learning styles and strategies. In other words, although this study focuses on the development of teachers' informationized teaching ability, the purpose of this ability is to promote the development of students' informationized learning ability. A questionnaire survey was conducted in three universities with different degrees of informationization to investigate college English teachers' development of students' informationized learning ability in the era of big data, and to explore a motivation system that motivates teachers to develop this ability.

LITERATURE REVIEW

Wang (2012, pp. 45-53) argued that "informationized teaching ability was divided into six sub-abilities: the informationized teaching transfer ability, informationized teaching integration ability, the informationized teaching communication ability, the informationized teaching assessment ability, the informationized collaborative teaching ability and the ability to promote students' informationized learning". Zhao and Guo (2010, pp. 28-31) emphasized that "the informationized teaching ability referred to the ability that the teachers took advantage of the information and communication technologies, promoted the students to transform the learning methods, and enhanced students' comprehensive utilization competence over learning resources and the learning environment in the information literacy process by adopting the teaching design, teaching implementation, teaching assessment and other methods." According to Wang, "the informationized teaching aimed at making full use of information resources to promote the students' development and to efficiently complete the teaching tasks." He classified the teachers' informationized teaching ability into six aspects: the informationized teaching transfer ability, informationized teaching integration ability, the informationized teaching communication ability, the informationized teaching assessment ability, the informationized collaborative teaching ability, and the ability to promote students' informationized learning (Wang, 2009, pp. 106-111).
The TPACK (Technological Pedagogical and Content Knowledge) framework explains the requirements on the abilities and knowledge of teachers, leaving students in an implicit position: it focuses on teachers but ignores students. Ruan (2014, pp. 20-26) held that students must grasp subject knowledge, learning methods or strategic knowledge, as well as information technology knowledge, in their learning. TSACK refers to Technological Knowledge, Content Knowledge and Strategic Knowledge (see Figure 1). The vigorous development of "research-based learning", "inquiry-based learning", "task-based learning", "project-based learning", "action learning", "independent learning" and the "flipped classroom" is aimed at improving students' knowledge and ability. The TSACK framework can thus be used to explain the ability and knowledge of students, who are placed in an explicit position, and it is an expansion and complement of the TPACK framework.
Contribution of this paper to the literature
• This paper contributes to increasing teachers' motivation to study the teaching of English with information technology and to understanding the present situation of teachers' ability to develop students' informationized learning in the Era of Big Data.
• The paper provides instructions and suggestions to enhance College English Teachers' ability to develop students' informationized learning in the Era of Big Data.
• Drawing on the research of the scholars discussed in the literature review, this study develops a more comprehensive framework to analyze and evaluate teachers' ability to develop students' informationized learning in the Era of Big Data.
Fundamentally, both the teachers' command of teaching methods and the students' strategies for learning methods fall into the category of methodology. They represent a higher-order way of asking, analyzing and solving questions, that is, methodological knowledge. To sum up, the ultimate goal of educational activities is to achieve the common development of teachers and students. The knowledge and ability structure required in this process is TMACK (Technological, Methodological and Content Knowledge), which can be expressed as TMACK = TPACK + TSACK (as shown in Figure 2). Shen and Li (2017, pp. 55) argued that the prerequisite for teachers to achieve informationized teaching is the ability to realize a high degree of integration between information technology and college English teaching, including the informationized teaching design ability, English content knowledge, the ability to use information tools for the implementation of teaching, the informationized communication ability, the informationized assessment ability, and the informationized teaching, research and reflection ability. The ability to use information technology tools is the foundation and premise of integration, and teachers' teaching research and reflection may develop and enhance their integration ability. The ultimate goal of integration is to promote the development of students' informationized learning ability. In addition, college English teachers should help their students develop the following abilities: (1) online independent learning ability; (2) self-control ability; (3) information acquisition ability; (4) information evaluation ability; (5) information management ability; (6) information processing ability; and (7) communication and collaboration ability (as shown in Figure 3).
RESEARCH DESIGN
To learn more about the current situation of college English teachers' informationized teaching ability, the author designed the Questionnaires on College English Teachers' Ability to Use Information Tools. The Likert scale was used in all questionnaires and consisted of: 1 = "Strongly disagree", 2 = "Disagree", 3 = "Neither agree nor disagree", 4 = "Agree", 5 = "Strongly agree".
In the first half of March 2015, the author conducted an online survey, in which a total of 135 questionnaires were returned. Of these, 131 were valid, a rate of 97%. Using SPSS software, the recovered data were processed, the reliability of the questionnaire was examined, and the questionnaire items were analyzed.
In order to examine whether the specific content of the questionnaire has discriminatory ability, that is, whether the questionnaire items have discriminating power, the author used the independent-sample t-test and correlation analysis. If the correlation coefficient is equal to or greater than 0.30 and reaches the significance level (p ≤ 0.05), the internal consistency of the questionnaire is high and the item has good differentiation (Qin, 2009, pp. 209). After summing and analyzing the scores of all 34 items of the 131 student questionnaires, it was found that only one item had a correlation coefficient below 0.30 and did not reach the significance level (p ≤ 0.05). To ensure that the questionnaire has a good degree of differentiation, the author deleted this item.
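The item-discrimination check described here can be illustrated with the short sketch below, which correlates each item with the total score of the remaining items and flags items whose correlation falls below 0.30 or is not statistically significant; the data file and column layout are hypothetical and not taken from the study.

```python
# A minimal sketch of the item-discrimination check: each item is correlated with
# the total of the remaining items; items with r < 0.30 or p > 0.05 are flagged.
import pandas as pd
from scipy import stats

items = pd.read_csv("responses.csv")  # participants x items, Likert-coded 1-5

flagged = []
for col in items.columns:
    rest_total = items.drop(columns=col).sum(axis=1)   # corrected item-total score
    r, p = stats.pearsonr(items[col], rest_total)
    if r < 0.30 or p > 0.05:
        flagged.append((col, round(r, 3), round(p, 4)))

print("Items failing the discrimination criterion:", flagged)
```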
Reliability refers to the stability of the questionnaire measurement results. Reliability can be analyzed with external reliability tests and internal reliability tests. Since the study was conducted via online testing, the questionnaire could not be re-administered, so instead of an external reliability test the author used the Cronbach's alpha coefficient as the internal reliability measure. According to Qin (2009, pp. 220-221), the generally accepted Cronbach's alpha reliability coefficient should not be lower than 0.70; using the SPSS 19.0 statistics package, the Cronbach's alpha coefficient of this questionnaire was 0.931, indicating that the internal consistency of the questionnaire was high.
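For reference, Cronbach's alpha can also be computed directly from its definition, alpha = k/(k-1) · (1 − Σ item variances / variance of the total score). The sketch below assumes the same hypothetical participants-by-items layout and is offered only as an illustration; it is not the exact SPSS procedure used by the author.

```python
# A minimal sketch of the Cronbach's alpha computation described above.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                                # number of items
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

items = pd.read_csv("responses.csv")                  # hypothetical file name
print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")  # values >= 0.70 are usually acceptable
```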
After adjustment, the questionnaire consists of 33 items.
In May 2015, the author conducted a questionnaire survey at three universities: A, B and C. A total of 450 students and 160 college English teachers participated in the survey. The author entrusted some teachers from the three universities to hand out the questionnaires, asking them to inform students of the purpose, significance and points of attention of the survey in advance and to collect the questionnaires once they had been filled out. A total of 450 questionnaires were handed out to students, of which 420 were recovered. Through careful inspection, 33 invalid questionnaires (those with the same answer for all questions or only partially completed) were removed, leaving 387 effective questionnaires as the data source for the survey. All questionnaires were then carefully sorted and entered into SPSS 19.0 for later use.
Analysis of Questionnaire Data
Foreign language learners connect with "Internet +" in the big data era. Faced with massive learning resources, learning methods have undergone tremendous changes, from traditional models to e-learning, mobile learning, U-learning, smart learning and deep learning. These new learning methods provide a solid platform and basis for information exchange and innovative development. However, such big data-based learning methods have not yet been adequately reflected in China's foreign language teaching. Traditional class teaching is still dominant: students receive instruction with the same mode, contents and progress, and their choice and use of technologies and resources are limited. It is therefore essential to reconstruct a new, appropriate and effective paradigm of foreign language teaching by relying on "Internet +", so as to keep China's foreign language teaching in step with the big data era (Chen, 2017, pp. 5). As informationized learning methods continue to expand, education service providers include not only schools but also social institutions and various networks. As a result, both on-campus and off-campus learning services are available, realizing an integrated on-campus and off-campus as well as online and offline education service experience.
According to Table 1, Q1 - "Our hardware facilities can meet our informationized learning requirements" (M=4.05) - indicates that students from the three universities are basically satisfied with the hardware conditions. As computers, multimedia and other information technologies enter the campuses, the universities have invested heavily in English teaching facilities, so the hardware continues to improve. The universities regularly broadcast English programs (Q2, M=4.63), and the students use the campus Wi-Fi to assist their English learning (Q3, M=4.53). The teachers often use computers, networks and other informationized tools for college English teaching (Q5, M=4.36), but the students think that the English teachers' computer skills need to be improved (Q4, M=3.36). Consequently, the universities should continue to increase investment so as to accommodate the future development needs of smart education: online decision-making, learning analysis and data mining.
The ability to use information tools refers to the ability to use teaching equipment, teaching software, networks and other information tools in college English teaching (Vance & Carlson, 2013, pp. 35-43). For teachers who undertake teaching tasks, a certain level of technical skill is one of the abilities they must possess; it requires teachers to complete classroom tasks using simple teaching software on multimedia devices. According to Table 2, Q6 - "When teaching college English, my teacher can freely operate the multimedia teaching platform devices such as the computer, projector and video presenter, and other teaching equipment such as the camera" (M=4.02) - indicates that, through training conducted by manufacturers and the universities, the college English teachers can freely operate such multimedia teaching platform devices and other teaching equipment. Q9 - "When teaching college English, my teacher can use the network platforms (such as QQ and MicroBlog) to publish discussion questions to discuss with us" (M=4.06) - implies that the teachers can interact with students in a timely manner through these network platforms. Teaching in the information society is not only the imparting of knowledge and skills but also the development of students' learning ability and the growth of students, so effective interaction between teachers and students is necessary. Teaching methods in the information society embody the characteristics of selection and interaction, and students' learning methods correspondingly incline toward cooperation, dialogue, exchange and inquiry. With the popularization of MicroBlog, more and more teachers are introducing it into teaching practice, making it a tool for students' learning and for network interaction between students and teachers; its application to college English teaching may help students carry out independent learning. However, the teachers show weak ability to use software, do not master common software (such as office software, image processing software, animation software, and video-audio editing software), and fail to use such software for the design, development, processing and presentation of college English teaching resources (Chen, 2017, pp. 3-16).
According to Table 3, Q10 - "I saw in the courseware the data and resources about English learning collected by my English teacher with a camera" (M=3.53) - shows that most teachers can use image, video and animation technologies to express English knowledge, transmit information and promote learners' understanding and meaning construction in line with modern teaching concepts and teaching needs. Q11 - "My English teacher can vividly present college English learning contents by using images, video, animation and other information technologies" (M=3.62, M=3.73) - suggests that most teachers can use information technology tools such as cameras to record the data and resources required for English teaching and process them with image processing and video editing software according to teaching needs, although weaknesses remain. Q12 - "My English teacher can use the image processing software and video editing software to process the collected data and resources according to the needs of teaching, so that we can use them in class" (M=3.78) - indicates that most teachers can use the computer, network drives, web favorites and other information technology tools to classify and save digital college English teaching resources. Q14 - "My English teacher allows us to copy his/her digital college English teaching resources" (M=3.90) - shows that most teachers can use information technology tools (such as online survey platforms) to evaluate relevant digital teaching resources, but they fail to give a full evaluation. With the continuous development of the information era, classroom teaching places new requirements on teachers' roles: on the one hand, teachers should master the common information tools that assist teaching; on the other hand, they are expected to share these tools with students, mobilize their enthusiasm to participate in classroom activities, and let students play the main role. The ability to manage teaching links with information tools refers to the ability to manage all stages of college English teaching with information tools in order to achieve the teaching objectives, for example the ability to stimulate and maintain students' learning motivation and the ability to respond to emergencies. As can be seen from Table 4, Q16 - "My English teacher stimulates our motivation to learn in the informationized teaching" (M=3.90) - and Q17 - "My English teacher can use the network communication technologies (E-mail, QQ, MNS, etc.)
and guides us to learn collaboratively" (M=3.59) - show that the teachers have developed the ability to stimulate and maintain students' learning motivation. Q18 - "My English teacher can monitor our online learning activities and learning process and guide us to resolve problems by using the information technology tools" (M=3.60) - indicates that most teachers can monitor students' online learning activities and learning processes and give timely guidance on problem-solving with information technology tools, although there is room for improvement. Q20 - "My English teacher sends us the learning materials collected online before class" (M=3.33), Q21 - "My English teacher encourages us to report in class and share relevant materials searched on the Internet" (M=3.59), and Q23 - "Before class, my English teacher sends us the courseware created through the software (such as Authorware, PowerPoint and FLASH)" (M=3.48) - demonstrate that most teachers can share and regenerate teaching contents and provide students with digital college English teaching resources via the Internet. Q22 - "My English teacher spends a third of the time in the class to explain and spends the rest of the time to guide us to identify problems, and help us find an effective way to resolve the problems through discussions and exchanges" - suggests that most teachers are gradually moving away from the traditional teaching concept, establishing an informationized teaching concept and applying it in practice. Q24 - "In the college English informationized teaching process, the teacher interacts with classmates, organizes, manages and coordinates our study" (M=3.46) - shows that in college English teaching most teachers use network chat rooms, audio-visual conferencing and other information technologies to monitor online learning, answer questions and give prompt feedback. Q25 - "My English teacher assesses not only our knowledge and skills but also our development of practical ability and emotional development in the informationized learning" (M=3.68) - indicates that the universities have developed an online teaching evaluation and management mechanism and that some teachers receive timely feedback through the Internet. Q26 - "My English teacher tracks our learning process, collects data, mines data, and automatically adjusts teaching strategies" (M=3.68) - suggests that teachers track students' learning processes, collect data, conduct data mining, and adjust their teaching strategies accordingly. The application of big data in education mainly involves three elements: online decision-making, learning analysis and data mining. Its main role is to support predictive analysis, behavioral analysis, academic analysis and similar applications and research. In this context, big data refers to the analysis of the large amounts of data generated in the process of students' learning; the data sources include implicit behaviors (posting, extracurricular activities, online social networking and other activities not directly used in education evaluation) and explicit behaviors (test scores, homework completion and classroom performance). The big data model and the data it displays can provide a reference for universities and teachers, so that the academic status of students can be evaluated in a timely and accurate manner, potential problems identified, and possible future performance predicted (Kay, Kleitman, & Azevedo, 2013, pp. 124-134).
In traditional society, the purpose of developing teachers' teaching ability is mainly to impart knowledge and skills and to improve the teaching ability of teachers themselves. In the information society, however, the development of teaching ability also, and more importantly, serves the development of students' different informationized learning abilities. As can be seen from Table 5, Q27 - "My English teacher often tells us how to learn English through the Internet" (M=3.92) - states that the teachers can actively cultivate students' online independent learning ability, which is the basic ability of learning independently with online resources and is the synthesis of various abilities developed through that process. Q28 - "Under the guidance of the teacher, now I know what learning methods should be used for independent learning" (M=3.97) - shows that teachers can teach independent learning methods to students. Q29 - "Under the guidance of the teacher, now I can study dialectically and learn from the original English literature searched online related to professional knowledge" (M=3.91) - illustrates that the teachers can teach students how to find online information. Q30 - "Under the guidance of the teacher, I can accurately analyze, judge and evaluate the reliability, effectiveness, accuracy, authority and timeliness of information collected online" (M=3.82) - shows that the teachers can teach students to evaluate collected network information dialectically. Q31 - "My English teacher develops our information management skills in teaching and teaches us to categorize, store, invoke and inquire different forms of information stored on different media" (M=3.84) - demonstrates that the teachers can develop students' information management skills in teaching. Q32 - "My English teacher fosters the ability to integrate and process information in the Internet environment in teaching" (M=3.82) - indicates that the English teachers can cultivate students' information integration and processing ability in the network environment. Q33 - "My English teacher encourages us in teaching to communicate by using informationized tools such as email, ICQ, video conference and the online telephone" - shows that the English teachers encourage students to communicate with e-mail, ICQ, video conferencing, online telephone and other informationized tools. As an old Chinese saying goes, "It is better to teach a man to fish than to give him a fish." In this regard, teachers should focus on guiding students to explore suitable learning methods, including helping students develop informationized learning ability.
Analysis of the Influencing Factors of College English Teachers' Ability to Develop Students' Informationized Learning
As a part of the whole social ecological system, college English teachers also find themselves in a variety of external environments (including the natural environment, the social environment and the normative environment). Therefore, whether college English teachers use modern information education technology is related to the external social environment and atmosphere, and has a close correlation with the people around them (shown in Figure 4) (Shen & Li, 2017, p. 91).
In Table 6, for the development of informationized teaching, research and reflection ability, the predictive variables among the ecological factors include institutional factors, the demands of the times, and the training factor. MOOCs, the flipped classroom and micro-classes have brought challenges to college education. The improvement of students' informationized learning ability is a prerequisite for the successful implementation of the new teaching model. According to the College English Curriculum Requirements, "The new model should be built on modern information technology, particularly network technology, so that English language teaching and learning will be, to a certain extent, free from the constraints of time or place and geared towards students' individualized and autonomous learning." Besides, as the Guide to College English Teaching points out, "The objectives of college English teaching are to cultivate students' ability to use English and develop their independent learning ability." However, in order to promote students' informationized learning ability, much attention should be given to practical operations. At the third national conference on education held in 1999, it was pointed out that "The key to the implementation of quality education lies in teachers. To build high-quality education, we must have high-quality teachers." Therefore, the teachers should play an exemplary role, and motivate students in a positive and progressive manner (Tompsett, 2013, pp. 54-68).
RESULTS
1. The universities do not make enough investment to accommodate the future development needs of smart education: online decision-making, learning analysis and data mining.
2. The teachers show weak ability to use software, do not master the use of software tools (such as office software, image processing software, animation software, and video-audio editing software), and fail to apply them in teaching. The teachers also show weak ability in online decision-making, learning analysis, and data mining.
3. The nation has not issued institutional policies or offered enough time and opportunities for teachers' training.
In traditional society, the purpose of developing teachers' teaching ability was mainly to realize the imparting of knowledge and skills and to improve the teaching ability of teachers. However, in the information society, the development of teachers' teaching ability not only concerns the teaching ability itself; more importantly, it serves the development of students' various informationized learning abilities. This is not just a formal change, but a change in the value orientation of teachers' teaching ability. The development of teachers' informationized teaching ability is to strike a balance between promoting the development of teachers' teaching ability and enhancing the development of students' learning ability (Chen & Huang, 2018, pp. 1635-1643).
From a policy point of view, on one hand, China should make dynamic adjustments to the assessment and monitoring system of teachers' informationized teaching ability according to the development of the times, and pay more attention to it.On the other hand, China should keep pace with the times in terms of investment in education teaching, and gradually improve its policy supports and incentives.At present, however, the teachers benefit little from the national policy level.
From the perspective of teachers, in the face of developments of the information age, the teachers should see the current situation clearly, adapt to society, change the teaching concept, master the new information technology teaching and make constant improvements and developments.However, the teachers have no energy to be concerned about the changing times in the face of heavy teaching and research tasks.
Although the role and significance of teacher training have been recognized, the practice of teacher training has not been satisfactory. "Today it is indeed hard to say that trainers are better than trainees, especially in teaching methods and arts. Teacher training often stays at the level of memorization, and the teachers repeat what the book says. However, the development and utilization of micro-teaching and information technology look pale and weak in some training institutions, even though they are widely used in primary and secondary schools. As a result, it is easy for teachers to lose their enthusiasm in elective courses, leading to a waste of time and reduced confidence in training to a certain extent" (Shen & Li, 2017, pp. 109-110).
In short, it is crucial to strengthen the training of college English teachers in modern information technology and to effectively improve their information literacy and information capabilities, including their grounding in the basic theories of information technology (network learning theory, network courses and teaching theory, etc.).
Figure 3. Schematic Diagram for College English Teachers' Informationized Teaching Ability
Figure 4. The Sub-factors on the Development of Teachers' Information-based Teaching
[...] 172400410404) and the National Science Foundation of Department of Education of Henan (No. 16A880001) and Higher Education Teaching Reform Project of Henan University of Technology (No. 2017SJGLX299).
Table 1. Questionnaires on College English Teachers' Ability to Use Information Tools in Three Universities
Table 2. Questionnaires on College English Teachers' Ability to Use Information Tools in Three Universities (sample item: "When teaching college English, my teacher can freely operate the multimedia teaching platform devices such as the computer, projector and video presenter, and other teaching equipment such as the camera.")
Table 3. Questions on Analysis of College English Teachers' Informationized Content Knowledge Ability
Table 4. Questions on Analysis of College English Teachers' Ability to Manage Teaching Links by Using Information Tools (sample item: "My English teacher spends a third of the time in the class to explain, and spends the rest of the time to guide us to identify problems, and help us find an effective way to resolve the problems through discussions and exchanges.")
Table 5. Questions on Analysis of College English Teachers' Ability to Promote Students' Informationized Learning
Table 6 .
Multiple Regression Results of Ecological Factors and Ability to Develop Students' Informationized Learning | 2018-12-12T21:40:40.214Z | 2018-04-26T00:00:00.000 | {
"year": 2018,
"sha1": "4d547f34ae43d53a4147868a479704855fbe5211",
"oa_license": "CCBY",
"oa_url": "https://www.ejmste.com/download/chinese-college-english-teachers-ability-to-develop-students-informationized-learning-in-the-era-of-5464.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4d547f34ae43d53a4147868a479704855fbe5211",
"s2fieldsofstudy": [
"Education",
"Computer Science"
],
"extfieldsofstudy": [
"Psychology"
]
} |
4948915 | pes2o/s2orc | v3-fos-license | Molecular modelling and simulation of electrolyte solutions, biomolecules, and wetting of component surfaces
Massively-parallel molecular dynamics simulation is applied to systems containing electrolytes, vapour-liquid interfaces, and biomolecules in contact with water-oil interfaces. Novel molecular models of alkali halide salts are presented and employed for the simulation of electrolytes in aqueous solution. The enzymatically catalysed hydroxylation of oleic acid is investigated by molecular dynamics simulation taking the internal degrees of freedom of the macromolecules into account. Thereby, Ewald summation methods are used to compute the long range electrostatic interactions. In systems with a phase boundary, the dispersive interaction, which is modelled by the Lennard-Jones potential here, has a more significant long range contribution than in homogeneous systems. This effect is accounted for by implementing the Janecek cutoff correction scheme. On this basis, the HPC infrastructure at the Steinbuch Centre for Computing was accessed and efficiently used, yielding new insights on the molecular systems under consideration.
Introduction
Molecular simulation provides detailed information on processes on a level which is otherwise inaccessible, on the basis of physically realistic models of the intermolecular interactions. However, despite all efforts to keep these models as simple as possible, such simulations are extremely time consuming and belong to the most demanding applications of high performance computing.
The scientific computing project "Molecular simulation of static and dynamic properties of electrolyte systems, large molecules and wetting of component surfaces" (MOCOS) aims at advancing the state of the art by developing the molecular simulation codes ms2 [1] as well as ls1 mardyn [2], and by applying advanced simulation algorithms to complex molecular systems, employing the HPC resources at the Steinbuch Centre for Computing (SCC) in Karlsruhe. A series of peer-reviewed publications, which have appeared (or will appear) in internationally accessible journals, contribute to the MOCOS project in various respects. The present article addresses the three main topics of the MOCOS project by presenting recent scientific results which were facilitated by the computing resources at SCC: Section 2 discusses molecular model development for ions in aqueous solution and Section 3 reports on molecular simulation of large molecules, i.e. of enzymes and fatty acids. Interfacial phenomena such as the wetting behaviour of fluids in contact with a solid surface are discussed in Section 4 and a conclusion is given in Section 5.
Force field development for alkali cations and halide anions
Aqueous electrolyte solutions play an important role in many industrial applications and natural processes. The general investigation of electrolyte solutions is, hence, of prime interest. Their simulation on the molecular level is computationally expensive as the electrostatic long range interactions in the solution have to be taken into account by time consuming algorithms, such as Ewald summation [3]. The development of ion force fields in aqueous solutions is a challenging task because of the strong electrostatic interactions between the ions and the surrounding water molecules. Previously published parameterization strategies for the adjustment of the ion force fields of all alkali cations and halide anions yield multiple parameter sets for a single ion [4,5] or are based on additional assumptions, e.g. the consideration of the aqueous alkali halide solution as a binary mixture of water and cations as well as anions, respectively [6].
The recent study of Deublein et al. [7], however, succeeded in obtaining one unique force field set for all alkali and halide ions in aqueous solution. Thereby, the ions are modelled as Lennard-Jones (LJ) spheres with a point charge (±1 e) in their centre of mass. Hence, the ion force fields have two adjustable parameters, namely the LJ size parameter σ and the LJ energy parameter ε. The σ parameter of the ions was adjusted to the reduced liquid solution density. The LJ energy parameter showed only a minor influence on the reduced density and was estimated to be ε = 100 K for all anions and cations [7].
In the present work, the influence of the LJ energy parameter on the self-diffusion coefficient of the alkali cations and the halide anions in aqueous solutions as well as the position of the first maximum of the radial distribution function (RDF) of water around the ions was investigated systematically. Based on these results, a modified value is proposed for the LJ energy parameter.
The new ε i parameter of the ion force fields is determined by a two step parametrization strategy. First, the LJ energy parameter of the ion force fields is adjusted to the self-diffusion coefficient of the ions in aqueous solution. Subsequently, the dependence of the position of the first maximum in the RDF of water around the ions on ε i is used to restrict the parameter range derived by considering the selfdiffusion coefficient.
Methods and simulation details
In the present study, the self-diffusion coefficient of the ions in aqueous solutions is determined in equilibrium molecular dynamics by the Green-Kubo formalism. In this formalism, the self-diffusion coefficient is related to the time integral of the velocity autocorrelation function [8]. The radial distribution function g i−O (r) of water around the ion i is defined by $g_{i-O}(r) = \rho_O(r) / \rho_{O,\mathrm{bulk}}$, where ρ O (r) is the local density of water as a function of the distance r from the ion i and ρ O,bulk is the density of water molecules in the bulk phase. In this case, the position of the water molecules is represented by the position of the oxygen atom. The radial distribution functions are evaluated by molecular dynamics (MD) simulation as well.
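As a minimal sketch of the Green-Kubo route just described (not the ms2 implementation itself), the self-diffusion coefficient can be estimated from stored particle velocities as the time integral of the velocity autocorrelation function; the trajectory array, time step and correlation length below are placeholders.

```python
# Minimal sketch of the Green-Kubo estimate of the self-diffusion coefficient,
# D = (1/3) * integral of the velocity autocorrelation function (VACF).
# 'velocities' holds the velocities of one ion species, shape
# (n_frames, n_ions, 3), sampled every 'dt' time units.
import numpy as np

def self_diffusion(velocities, dt, n_corr):
    n_frames, n_ions, _ = velocities.shape
    vacf = np.zeros(n_corr)
    for lag in range(n_corr):
        # <v(t0) . v(t0 + lag)> averaged over time origins and ions
        dots = np.einsum("tid,tid->ti",
                         velocities[: n_frames - lag],
                         velocities[lag:])
        vacf[lag] = dots.mean()
    # integrate the VACF with the trapezoidal rule
    return np.trapz(vacf, dx=dt) / 3.0

# example with synthetic, uncorrelated velocities; real input would come from
# the stored MD trajectory of the NVT run
rng = np.random.default_rng(1)
v = rng.normal(size=(2000, 10, 3))
print(self_diffusion(v, dt=0.002, n_corr=500))
```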
The calculation of the self-diffusion coefficient by the Green-Kubo formalism and the evaluation of the RDF are time and memory consuming. The determination of D i and g i−O (r) in electrolyte solutions is considerably more expensive due to additional time consuming algorithms, e.g. Ewald summation [3], required for permitting a truncation of the long range electrostatic interactions; however, it should be noted that a completely explicit evaluation of all pairwise interactions would be even more expensive.
In a first step, the density of the aqueous alkali halide solution was determined in an isobaric-isothermal (NpT) MD simulation at the desired temperature and pressure. The resulting density was used in a canonical (NVT) MD simulation at the same temperature, pressure and composition of the different alkali halide solutions. In this run, the self-diffusion coefficient of the ions or the radial distribution function, respectively, was determined. For the calculation of D i , the MD unit cell with periodic boundary conditions contained N = 4 500 molecules, both for the NpT and the NVT simulation run.
For the evaluation of the RDF, there were N = 1 000 molecules in the simulation volume, i.e. 980 water molecules, 10 alkali cations and 10 halide anions. The radial distribution function was sampled in the NVT simulation within a cutoff radius of 15 Å with 500 bins.
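A histogram-based evaluation of g i−O (r) along the lines described above might look as follows; the positions, box length and bin settings are placeholders, and a cubic periodic box with the minimum-image convention is assumed.

```python
# Sketch of a radial distribution function g(r) of water oxygens around an ion,
# accumulated into 'n_bins' bins up to 'r_max', for a single frame of a cubic
# periodic box. Inputs are placeholders for trajectory data.
import numpy as np

def rdf_ion_oxygen(ion_pos, oxy_pos, box, r_max, n_bins=500):
    edges = np.linspace(0.0, r_max, n_bins + 1)
    # minimum-image distances from the ion to all oxygen atoms
    d = oxy_pos - ion_pos
    d -= box * np.round(d / box)
    r = np.linalg.norm(d, axis=1)
    hist = np.histogram(r, bins=edges)[0].astype(float)
    # normalise by the ideal-gas count expected in each spherical shell
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    rho_bulk = len(oxy_pos) / box ** 3
    g = hist / (shell_vol * rho_bulk)
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, g

# the position of the first maximum, r_max,1, is then the r_mid value at the
# largest g within the first hydration shell.
```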
Self-diffusion coefficients and radial distribution functions
The self-diffusion coefficient D i and the position of the first maximum r max,1 of the RDF g i−O (r) of water around the ions were investigated for all alkali cations and halide anions in aqueous solution for ε = 200 K. These data were calculated at high dilution so that correlated motions and ion pairing between the cations and anions were avoided. Hence, D i and r max,1 are independent of the counter-ion in solution.
The results for the self-diffusion coefficient D i are shown in Fig. 1. The overall agreement of the simulation results with the experimental data is excellent. The deviations are below 10 % for all ions, except for the sodium cation, where the deviation is about 20 %.
These simulation results also follow the qualitative trends from experiment, i.e. D i increases with cation and anion size, respectively. This ion size dependence is directly linked to the electrostatic interaction between the ions and water. In aqueous solution, the cations and anions are surrounded by a shell of electrostatically bonded water molecules (hydration shell). The ions diffuse together with their hydration shell within the bulk water. For small ions, the hydration shell is firmly attached to the ion. Hence, the effective radius, that typically dominates ion motion, is larger for smaller ions than for larger ions, where the hydration shell is less pronounced.
The results of r max,1 are shown in Fig. 2. In the case of the alkali cations, the simulation results are within the range of the experimental data, except for Na + , where the deviation from the experimental data is 5.3%. For the halide anions, only the simulation result for r max,1 of the RDF of water around the iodide anion is within the range of the experimental data. The deviations of the simulation results for r max,1 from the experimental data are 12.1 % around F − , 6.5 % around Cl − , and 2.9 % around Br − . Comparing D i and r max,1 of the cesium cation and the fluoride anion, which have almost the same size, it can be seen that Cs + diffuses faster in aqueous solution and the water molecules of the hydration shell around F − are closer to the ion. This can be attributed to the different orientations of the water molecules around the oppositely charged ions. The water molecules are able to build a more strongly attached hydration shell around the fluoride ion, which is therefore closer to the ion.
Computational demands
The molecular simulations in Section 2 were carried out with the MPI based molecular simulation program ms2, which was developed in our group, cf. Deublein et al. [1]. The total computing time for determining the self-diffusion coefficient of ions in aqueous solutions was 138 hours on 36 CPUs (48 hours for the NpT run and 90 hours for the NVT run). For these simulations a maximum virtual memory of 1.76 GB was used.
For the evaluation of the radial distribution function of water around the ions a total computing time of 31 hours on 32 CPUs (10 hours for the NpT run and 21 hours for the NVT run) was required.
Catalysed hydroxylation of unsaturated fatty acids
Producing polymers from regenerative feedstock is a highly interesting alternative to polymers made of naphtha. The first step for obtaining biopolymers is the synthesis and study of all possible basic materials that can be used as building blocks. Such materials can for instance be obtained by an enzymatically catalysed reaction of unsaturated fatty acids to dihydroxy-fatty acids.
In nature, saturated and unsaturated fatty acids occur only as mixtures rather than as pure compounds. Therefore, separation -- before or after the reaction -- is necessary to obtain pure products, e.g. by chromatography, using hydrotalcite as adsorbent and a mixture of water and isopropanol as solvent.
Cytochrome P450 monooxygenase, an enzyme well-known for catalysing the hydroxylation of organic molecules, is a suitable catalyst for this process as well. The critical aspect of the catalytic reaction is the contact between the heme group, which contains the active centre of the enzyme, and the double bonds in the carbon chain of the fatty acid. The enzyme is denatured in the presence of organic molecules, and is only active in an aqueous phase. Fatty acids have a small dipole moment and are almost insoluble in water. So how does the contact between the active centre of the enzyme and the fatty acid take place?
A series of molecular simulations was conducted to learn more about the distribution of molecules around the enzyme and the behaviour of the system at different conditions. For this purpose, the mixing behaviour of the systems fatty acid + water, fatty acid + water + isopropanol, and fatty acid + water + cytochrome P450 was investigated. In particular, it is relevant to know whether the fatty acid builds micelles in the water-isopropanol solvent, or how the enzyme catalyses the reaction despite the different phase behaviour of the solvent and the fatty acid. The fatty acid of interest in the present work is oleic acid.
Simulation details
Molecular simulation of biological systems poses a challenge to scientific computing. The most important limitation in molecular simulation is the system size. As the number of molecules N in the simulation box increases, the computing time required for the simulation increases as O(N²) if it is implemented in a naive way. The reason for this steep dependence lies in the computation of the pair potentials: the distance and interaction energy between all interacting atom pairs need to be computed at every simulation step.
One option for decreasing the simulation time consists in following a coarse-graining approach [12]. By coarse-graining, a group of atoms is modelled as a single interaction centre, reducing the number of pair interactions that have to be calculated. As we are interested in obtaining detailed atomistic information, we prefer to use a fully atomistic model. The model we selected is OPLS [13], which has been successfully used to simulate a large variety of biological systems. The model for the heme group, which is not present in the original OPLS force field, was taken from a recent parameterization of this group compatible with the OPLS force field [14]. In these models, repulsion-dispersion interactions are treated with LJ potentials, while electrostatic interactions are taken into account considering point partial charges at the atomic positions. Internal molecular degrees of freedom are modelled via bond, bend, and torsion potentials.
The cytochrome P450 enzyme contains more than 7 000 atoms, and together with the fatty acid and the solvent molecules, we need to simulate up to 180 000 atoms. Molecular simulation of such an enormous system would need months of simulation time unless we use advanced parallel simulation programs. For the present simulations of molecular biosystems, the GROMACS 4.6 MD program was used [15]. At the beginning of the simulation, the molecules were arranged in a randomized fashion within a periodic simulation box, and the energy was minimized. This procedure for generating a starting configuration attempts to mimic experimental conditions where initially the solutions are vigorously mixed. Then every molecule is assigned a random velocity (corresponding to the temperature of the system), and a short simulation of 2 ps is carried out for an initial equilibration. Subsequently, a long simulation (over a minimum simulation time of 20 ns) is run, where the properties of interest are calculated.
The molecular positions are updated using a leapfrog algorithm to integrate Newton's equations of motion. We truncate the LJ potential at a cut-off radius of r c = 1.5 nm, and use the particle-mesh Ewald summation method [16] to calculate electrostatic interactions. Temperature is controlled by velocity rescaling with a stochastic term, and pressure with a Berendsen barostat. More details about these simulation techniques can be found elsewhere [17].
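For orientation only, a GROMACS run-control (.mdp) fragment consistent with the settings listed above (leap-frog integrator, 1.5 nm cut-off, particle-mesh Ewald, velocity rescaling with a stochastic term, Berendsen barostat) could be assembled as sketched below; the time step, run length and coupling constants are assumptions and not values taken from the original simulations.

```python
# Sketch: assemble and write a GROMACS run-control (.mdp) file matching the
# settings described in the text. dt, nsteps, tau values and output options
# are assumptions, not the parameters of the original study.
mdp_options = {
    "integrator": "md",          # leap-frog integrator
    "dt": 0.002,                 # 2 fs time step (assumed)
    "nsteps": 10_000_000,        # 20 ns production run
    "cutoff-scheme": "Verlet",
    "rvdw": 1.5,                 # Lennard-Jones cut-off, nm
    "rcoulomb": 1.5,
    "coulombtype": "PME",        # particle-mesh Ewald electrostatics
    "tcoupl": "v-rescale",       # velocity rescaling with stochastic term
    "tc-grps": "System",
    "tau-t": 0.1,
    "ref-t": 298,
    "pcoupl": "berendsen",       # Berendsen barostat
    "tau-p": 1.0,
    "ref-p": 1.0,                # bar (100 kPa)
    "compressibility": 4.5e-5,
}
with open("production.mdp", "w") as f:
    for key, value in mdp_options.items():
        f.write(f"{key:16s} = {value}\n")
```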
Our simulations are performed under conditions for which reliable experimental data are available, i.e. at a temperature of 298 K and a pressure of 100 kPa. For the water-oleic acid system, we use a fatty acid mass fraction of 60 % and two different simulation boxes with approximately the same volume: a cubic box with V = (7.14 nm)³, and an orthorhombic box of dimension 5×15×5 nm³. For the isopropanol-water-oleic acid system, we added 10 mg/ml of oleic acid to a water/isopropanol mixture with a mass fraction of 60 % (for isopropanol) in a cubic simulation box of (12 nm)³. Finally, the oil/water concentration in the cytochrome P450 + water + oil system corresponded to a volume fraction of 30 % (for oleic acid) in a cubic box of (10 nm)³, wherein a single cytochrome P450 enzyme was placed.
Simulation results
Molecular simulations of oil in the presence of water show a swift phase separation, as expected from experimental observation. The phase separation takes place after only a few picoseconds. Independently of the shape of the simulation box, the oleic acid forms a bilayer, while other possible structures are not observed. Oil molecules form an ordered phase, where their acid groups point in the direction of the water phase, and their hydrocarbon tails are aligned. This is a consequence of the well known fact that the acid group is hydrophilic while the hydrocarbon tail is hydrophobic. When we add isopropanol to the water-oil mixture, the situation changes, as shown in Fig. 3. Water and isopropanol are miscible, and their behaviour is similar to that in a pure water-isopropanol solution. Oil molecules are mostly surrounded by isopropanol, which can be easily inferred from the radial distribution functions. On the other hand, the double bonds in the oil clusters tend to be in contact with each other.
Oil in the presence of water and cytochrome P450 behaves similarly as in water-oil solutions, cf. Fig. 4. After a few nanoseconds a clear phase separation takes place, where oil forms an ordered, separate phase. Cytochrome P450 stays solvated in water and in contact with the oil phase only at specific points. At the beginning of the simulation there are several oil molecules in the vicinity of cytochrome P450 at favourable contact sites. These molecules maintain their contact with the enzyme during the whole simulation in a position where an interaction with the active centre (which is situated within the heme group) is possible.
Computational demands
The computational demands of our simulations were highly dependent on the type of simulation. For the energy minimization simulations, it was sufficient to use a single processor for several minutes. For equilibration, we used 16 CPUs running in parallel for a maximum of 3.5 hours. The most demanding simulations during production required 256 CPUs running for 180 hours and a virtual RAM of 2.8 GB.
Planar vapour-liquid interfaces
For simulating vapour-liquid coexistence on the molecular level, a long range correction is needed to obtain accurate results. Thereby, the dispersive contribution to the potential energy U i of a molecule i is calculated as a sum of the explicitly computed part and the long range correction (LRC), where the latter consists of a summation over N s slabs, employing a periodic boundary condition. The LRC is only applied in the direction normal to the planar interface, corresponding to the y direction here, given that the system is homogeneous in the other directions. The correction terms $\Delta u^{\mathrm{LRC}}_{i,k}(y_i, y_k)$ for the slabs are calculated as an integral over the slab volume. The resulting term for the potential energy only depends on the distance r between the slabs, the density ρ and the thickness $\Delta y$ of the slabs [18]. The slab-wise interaction is computed in pairs, so that $N_s^2/2$ individual contributions have to be computed. Usually, the number of threads is much smaller than the number of slabs. For a scaling test, a system with N = 256 000 two-centre Lennard-Jones (2CLJ) particles was simulated with different numbers of threads and 512 slabs. Fig. 5 shows the results for the strong scaling behaviour.
For small numbers of threads, the program scales almost ideally, i.e. linearly. For larger numbers of threads, the communication between the threads and the decomposition of the particles becomes more time consuming. Eventually, the long range correction requires even more communication between the threads than the rest of the program, since the density profile has to be sent to every thread before the slab interaction begins. After the slab interaction has been calculated, the correction terms for the potential energy, the force and the virial have to be distributed to every thread.
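The bookkeeping behind the slab-wise correction can be illustrated schematically: every pair of slabs contributes a term weighted by the sampled density profile and the slab thickness. In the sketch below the kernel delta_u is a placeholder standing in for the Janeček expression and is not the form implemented in ls1 mardyn.

```python
# Schematic of the slab-wise long-range correction: the simulation volume is
# divided into N_s slabs along y; each slab pair (k, l) contributes a term
# depending only on the slab separation, the sampled density and the slab
# thickness. The kernel 'delta_u' is a PLACEHOLDER, not the actual Janecek
# expression for the dispersive interaction beyond the cut-off.
import numpy as np

def lrc_energy_per_slab(density_profile, delta_y, delta_u):
    n_slabs = len(density_profile)
    u_lrc = np.zeros(n_slabs)
    for k in range(n_slabs):
        for l in range(n_slabs):
            r = abs(k - l) * delta_y          # slab separation (periodic folding omitted)
            u_lrc[k] += density_profile[l] * delta_y * delta_u(r)
    return u_lrc                              # N_s^2 slab-pair terms (halved in practice by symmetry)

# placeholder kernel with the leading r^-6 decay of the dispersive interaction,
# regularised at contact; purely illustrative
delta_u = lambda r: -1.0 / (1.0 + r) ** 6
profile = np.concatenate([np.full(256, 0.7), np.full(256, 0.01)])  # liquid | vapour
print(lrc_energy_per_slab(profile, delta_y=0.1, delta_u=delta_u)[:5])
```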
On the basis of the long range correction discussed above, vapour-liquid equilibria were considered by explicit MD simulation of the fluid phase coexistence (i.e. including an interface), so that the vapour-liquid surface tension γ could be obtained; the LJ critical temperature is given by Pérez Pellitero et al. [19] as T c = 1.3126 ε.
Curved vapour-liquid interfaces
The influence of curvature on vapour-liquid interfacial properties was examined for the truncated-shifted Lennard-Jones fluid (TSLJ), with a cutoff radius of r c = 2.5 σ , by simulating both curved (i.e. bubble and droplet) as well as planar interfaces by MD simulation of the canonical ensemble at a reduced temperature of T = 0.75 ε.
For the system with planar symmetry, half of the simulation volume was filled with vapour and the other half with liquid, using a simulation box with a total elongation of 17 σ . The density profile from this simulation was compared with that of a bubble as well as a droplet with an equimolar radius of R ρ = 12.5 σ , cf. Fig. 6. A system containing a bubble is subsaturated, whereas the fluid phases coexisting at the curved interface of a droplet are supersaturated with respect to the thermodynamic equilibrium condition for the bulk phases coexisting at saturation. In agreement with this qualitative statement from phenomenological thermodynamics, the vapour density (i.e. the minimal value from the density profile) was found to be smaller for the system containing a bubble (ρ ′′ = 0.0097 σ −3 ) and larger for the system containing a droplet (0.0140 σ −3 ), whereas the vapour density over the planar interface was 0.0127 σ −3 . Analogous results were obtained for the chemical potential µ total = µ id + µ conf , which was determined as the sum of the ideal contribution µ id = T ln ρ and the configurational contribution µ conf . The Widom test particle method [20] was implemented to compute a profile of µ conf , which in equilibrium is complementary to the logarithmic density profile, yielding an approximately constant profile for the total chemical potential, cf. Fig. 6. For the droplet, an average value of µ total = (−3.31 ± 0.04) ε was determined, in comparison to (−3.51 ± 0.02) ε for the bubble and (−3.37 ± 0.02) ε for the planar slab. This shows that in the present case, curvature effects are more significant for a gas bubble than for a liquid droplet of the same size, suggesting that the surface tension of the bubble is larger than that of the droplet.
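The Widom test-particle estimate of the configurational chemical potential used above can be sketched as follows for the truncated-shifted LJ fluid in reduced units; restricting the trial insertions to individual bins along the profile coordinate yields the µ_conf profile. The particle configuration and cut-off below are placeholders.

```python
# Sketch of the Widom test-particle method in reduced Lennard-Jones units:
# mu_conf = -T * ln < exp(-dU/T) >, averaged over random trial insertions.
import numpy as np

def tslj_energy(test_pos, positions, box, r_c=2.5):
    d = positions - test_pos
    d -= box * np.round(d / box)              # minimum image convention
    r2 = np.sum(d * d, axis=1)
    r2 = r2[r2 < r_c * r_c]
    inv6 = (1.0 / r2) ** 3
    u = 4.0 * (inv6 * inv6 - inv6)            # full LJ ...
    u_shift = 4.0 * (1.0 / r_c**12 - 1.0 / r_c**6)
    return np.sum(u - u_shift)                # ... truncated and shifted

def widom_mu_conf(positions, box, T, n_trials=10000, rng=None):
    rng = rng or np.random.default_rng()
    boltz = np.empty(n_trials)
    for i in range(n_trials):
        trial = rng.uniform(0.0, box, size=3)
        boltz[i] = np.exp(-tslj_energy(trial, positions, box) / T)
    return -T * np.log(boltz.mean())
```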
Temperature dependence of contact angles
The research on the wetting behaviour of surfaces is an active field in materials science. The goal is to reliably predict the wetting properties of component surfaces for design of components with new functional features. In this regard molecular simulations are particularly useful due to their high resolution. Moreover, molecular simulation permits the systematic investigation of the influence of particular parameters on the wetting behaviour.
For the present study, the TSLJ potential was employed and the contact angle dependence on temperature for a non-wetting (θ > 90°) and a partially wetting (θ < 90°) scenario, cf. Fig. 7, was investigated by MD simulation. The solid wall was represented by a face centered cubic lattice with a lattice constant of a = 1.55 σ and the (001) surface exposed to the fluid. For the unlike interaction between the solid and the fluid phase the size and energy parameters σ sf = σ ff and ε sf = ζ ε ff , respectively, were applied. The simulations were carried out with the massively parallel MD program ls1 mardyn [2]. A total number of N = 15 000 fluid particles was simulated in the NVT ensemble. In most cases, 64 processes were employed for parallel computation. The efficient simulation of heterogeneous systems, containing vapour-liquid interfaces and even a solid component, was facilitated here by a dynamic load balancing scheme based on k-dimensional trees [2]. At the beginning of the simulation, the fluid particles were arranged on regular lattice sites in the form of a cuboid. The equilibration time of the three phase system was 2 ns, whereas the total simulation time was chosen to be 6 ns. Periodic boundary conditions were applied in all directions, leaving a channel with a distance of at least 35 σ between the wall and its periodic image, avoiding "confinement effects" at near-critical temperatures [21]. The contact angles were determined by the evaluation of density profiles averaged over a time span of 4 ns.
Sessile drops were studied at two different interaction parameters, ζ = 0.35 and 0.65, corresponding to partially wetting and non-wetting conditions. The simulations were carried out in the temperature range between 0.7 and 1 ε, covering nearly the entire regime of stable vapour-liquid coexistence for the TSLJ fluid, as indicated by the triple point temperature T 3 ≈ 0.65 ε, cf. van Meel et al. [22], and the critical temperature T c ≈ 1.078 ε, cf. Vrabec et al. [23]. It is well known that the respective wetting behaviour of the system, i.e. wetting or dewetting, is reinforced at higher temperatures, cf. Fig. 8. Close to the critical point of the fluid, the phenomenon of critical point wetting occurs [24], i.e. either the liquid or the vapour phase perfectly wets the solid surface, so that |cos θ (T → T c )| → 1.
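A common way to turn the sampled density profiles into a contact angle, consistent with (though not necessarily identical to) the evaluation described above, is to extract the liquid-vapour boundary points of the sessile drop, fit a circle to them, and evaluate the angle at the wall; the sketch below assumes the boundary points have already been extracted and that the wall lies at y = 0.

```python
# Sketch: contact angle from a circular fit to the liquid-vapour boundary of a
# sessile drop. 'x' and 'y' are boundary points from the density profile; the
# wall is assumed to lie at y = 0.
import numpy as np

def fit_circle(x, y):
    # algebraic least-squares circle fit: x^2 + y^2 + A*x + B*y + C = 0
    M = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (A, B, C), *_ = np.linalg.lstsq(M, rhs, rcond=None)
    xc, yc = -A / 2.0, -B / 2.0
    R = np.sqrt(xc**2 + yc**2 - C)
    return xc, yc, R

def contact_angle_deg(x, y):
    _, yc, R = fit_circle(x, y)
    # spherical-cap relation with the wall at y = 0: cos(theta) = -yc / R,
    # so a circle centre above the wall (yc > 0) gives theta > 90 degrees
    return np.degrees(np.arccos(np.clip(-yc / R, -1.0, 1.0)))

# quick self-test: points on a circle with centre (0, -0.5) and R = 1
# correspond to cos(theta) = 0.5, i.e. theta = 60 degrees
phi = np.linspace(0.2, np.pi - 0.2, 50)
print(contact_angle_deg(np.cos(phi), -0.5 + np.sin(phi)))   # ~60
```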
Conclusion
Within the MOCOS project, substantial progress was made regarding software development for massively-parallel molecular simulation. In particular, suitable algorithms for the long range contributions to the intermolecular interaction were implemented in ms2 (Ewald summation) and ls1 mardyn (Janeček correction for long range dispersive effects). In ls1 mardyn, a k-dimensional tree-based domain decomposition with dynamic load balancing was implemented to respond to an uneven distribution of the interaction sites over the simulation volume. Furthermore, GROMACS [15] was used to simulate large molecules, taking their internal degrees of freedom into account. In this way, systems with up to 300 000 interaction sites were simulated for over 20 nanoseconds. New models were developed for alkali cations and halide anions in order to treat the dispersive interaction accurately in molecular simulations of electrolyte solutions. These models were optimized by adjusting the LJ size parameter to the reduced density of aqueous solutions and the energy parameter to diffusion coefficients as well as pair correlation functions. On this basis, quantitatively reliable results were obtained regarding the structure of the hydration shells formed by the water molecules surrounding these ions. Important qualitative structural information on biomolecules at oil-water interfaces was deduced by analysing MD simulations of the cytochrome P450 enzyme, and the critical wetting behaviour as well as the properties of planar and curved vapour-liquid interfaces were characterized by massively-parallel MD simulation with the ls1 mardyn program.
The scenarios discussed above show how molecular modelling can today be applied to practically relevant systems, even if they exhibit highly complex structures and intermolecular interactions, including significant long range contributions. In the immediate future, molecular simulation as an application of high performance computing will therefore be able to play a crucial role not only by contributing to scientific progress, but also in its day-to-day use as a research and development tool in mechanical and process engineering as well as biotechnology.
"year": 2013,
"sha1": "95279de680d9d75675852023900b470e5d6e3034",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1305.4048",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0e7d5b7e59b6ede9d1aca9eed6b3fa0946b6cb63",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Computer Science",
"Materials Science",
"Physics"
]
} |
723745 | pes2o/s2orc | v3-fos-license | Numerical Studies of the two-leg Hubbard ladder
The Hubbard model on a two-leg ladder structure has been studied by a combination of series expansions at T=0 and the density-matrix renormalization group. We report results for the ground state energy $E_0$ and spin-gap $\Delta_s$ at half-filling, as well as dispersion curves for one and two-hole excitations. For small $U$ both $E_0$ and $\Delta_s$ show a dramatic drop near $t/t_{\perp}\sim 0.5$, which becomes more gradual for larger $U$. This represents a crossover from a "band insulator" phase to a strongly correlated spin liquid. The lowest-lying two-hole state rapidly becomes strongly bound as $t/t_{\perp}$ increases, indicating the possibility that phase separation may occur. The various features are collected in a "phase diagram" for the model.
I. INTRODUCTION
The last decade has seen a great deal of interest in spin and/or correlated electron systems on a ladder structure formed from two coupled chains. This work has been motivated both by the discovery of real materials with S = 1/2 ions forming a ladder structure [1], and because ladder systems exhibit a number of interesting and surprising properties (see e.g. the review of Dagotto and Rice [2]).
Particularly interesting behaviour may be expected if the system includes charge degrees of freedom. This can be achieved by doping, to create a system of strongly correlated mobile holes, as in the cuprate superconductors. Systems studied include LaCuO 2.5 doped with Sr [1] and Ca 14 Cu 24 O 41−δ doped with Sr [3], the latter showing superconductivity under pressure. The t-J and Hubbard models provide alternate representations of the physics of such systems, and both have been studied extensively. References to most of the existing work on the t-J ladder system, as well as the most recent evidence for the form of the phase diagram can be found in Müller and Rice [4].
We are here interested in the repulsive (U > 0) Hubbard model on a 2-leg ladder.
The first study of this system, to our knowledge, was by Fabrizio, Parola and Tosatti [5] who used a weak-coupling renormalization group approach to investigate the role of the interchain hopping t ⊥ in driving the system out of a Luttinger liquid phase. Earlier work, in this context, had considered a 2-dimensional system of weakly coupled chains [6]. Further work [7,8], using bosonization techniques, valid for weak interchain coupling, has identified a number of possible phases which the 2-leg Hubbard ladder may exhibit. In the notation of Balents and Fisher [8], these are denoted as CnSm where n, m represent the number of gapless charge and spin modes respectively. (Here n, m = 0, 1, 2 giving 9 possible phases).
Numerical studies of the Hubbard ladder have been carried out for both static and dynamic properties. The Density Matrix Renormalization Group (DMRG) technique [9] has been used to calculate both spin and pairing correlations [10,11], and to calculate the spin and charge gaps [11,12] as functions of the parameters of the model. The DMRG technique, at least in its present form, is unable to compute the dynamic response, particularly momentum dependent properties. The one-hole spectral function has been obtained using exact diagonalizations [13] (limited to 2 × 8 sites) and quantum Monte Carlo techniques [14]. This latter paper also reports results for two-hole spin and charge excitations. The spin-gap at half-filling, obtained by DMRG on a 2 × 32 lattice [11], showed a very sudden decrease near t/t ⊥ ≃ 0.5, particularly for small U. However finite size effects are present and may mask the true behaviour. This work was motivated in part by a need to confirm this, and in fact to look for possible evidence of a phase transition near this point, as well as by a desire to explore the form of dispersion curves for spin and charge excitations, which cannot be obtained by DMRG methods.
The Hamiltonian of the Hubbard ladder is written as
$$H = -t \sum_{i,a,\sigma} \big(c^{\dagger}_{i,a,\sigma} c_{i+1,a,\sigma} + \mathrm{h.c.}\big) - t_{\perp} \sum_{i,\sigma} \big(c^{\dagger}_{i,1,\sigma} c_{i,2,\sigma} + \mathrm{h.c.}\big) + U \sum_{i,a} n_{i,a,\uparrow} n_{i,a,\downarrow} ,$$
where i labels the rungs of the ladder, a (=1,2) is a leg index, σ (=↑, ↓) is the spin index, and the operators have the usual meaning. There are several instructive limiting cases. When U = 0 the electrons are non-interacting and the system has two simple cosine bands of width 4t and separation 2t ⊥ . The system is metallic for all electron densities n, except for the case n = 1 (half-filling) and t ⊥ > 2t when there is a band gap and the lower (bonding) band is completely filled. This case is aptly referred to as a "band insulator" [11]. On the other hand, when the interchain hopping t ⊥ = 0 (and U > 0) the ladder decouples into two Hubbard chains, for which there are exact, but highly nontrivial solutions. The system is then in a Luttinger liquid phase, and also an insulator at half-filling.
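As a quick, purely illustrative check of the non-interacting limit described above (not part of the original calculation), the two cosine bands and the band-insulator condition t ⊥ > 2t can be evaluated numerically, here in units of t ⊥ = 1:

```python
# U = 0 band structure of the two-leg ladder: bonding/antibonding cosine bands
# eps_-(k) = -t_perp - 2 t cos k and eps_+(k) = +t_perp - 2 t cos k.
# At half-filling the system is a band insulator when the indirect gap
# 2*t_perp - 4*t is positive, i.e. for t_perp > 2 t.
import numpy as np

def bands(k, t, t_perp=1.0):
    return -t_perp - 2 * t * np.cos(k), +t_perp - 2 * t * np.cos(k)

k = np.linspace(-np.pi, np.pi, 201)
for t in (0.3, 0.5, 0.7):
    lower, upper = bands(k, t)
    gap = upper.min() - lower.max()            # = 2*t_perp - 4*t
    print(f"t/t_perp = {t:.1f}: band gap = {max(gap, 0.0):.2f} t_perp")
```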
When both t ⊥ and U are non-zero there are no exact results known. However the form of the phase diagram can be reasonably inferred from the analytic and numerical calculations referred to above. A nice discussion is given by Noack et al. [11]. In the limit of large U doubly occupied sites are suppressed and the model reduces to a t-J ladder, with parameters J = 2t 2 /U, J ⊥ = 2t 2 ⊥ /U. The Hubbard ladder has electron hole symmetry under the transformation The phase diagram and properties of the model are thus symmetric about the n = 1 case.
We see manifestations of this in the series for various excitation energies, discussed below.
In this paper we study the Hubbard ladder using both the method of series expansions [15][16][17] and DMRG [9]. The series expansion method is complementary to other numerical methods and is able to provide ground state properties, excitation spectra and T = 0 critical points to high accuracy. Another advantage is that one deals with a system in the thermodynamic limit and finite size corrections are not needed. We have used this approach recently in studies of the t-J model on the square lattice [18] and on a 2-leg ladder [19], and we refer to those papers for details of the method and references to previous work. The emphasis of our work is on the half-filled case n = 1, and on one and two-hole excitations from half-filling. The series method is not well suited to handle variable electron density.
We are not aware of any previous series work on the Hubbard ladder. Shi and Singh [20] have used a somewhat different series approach to study the Hubbard model on the square lattice. Their method, which introduces an artificial antiferromagnetic Ising term into the Hamiltonian, is not appropriate here, as we do not expect any magnetic long range order in the ladder system. This paper is organized as follows. In Sec. II, we discuss briefly the methods used. In Sec. III, we study the system at half-filling. In Sec. IV, we consider the system with one and two holes. The last section is devoted to discussion and conclusions.
II. METHODS
The series expansion method is based on a linked cluster formulation of standard Rayleigh-Schrödinger perturbation theory. We use a "rung basis" and write the Hamiltonian in the form $H = H_0 + H_1$, where $H_0$, containing the rung hopping ($t_\perp$) and on-site repulsion ($U$) terms, is taken to be the unperturbed Hamiltonian, and $H_1$, the intrachain hopping proportional to $t$, is the perturbation. Thus H 0 describes decoupled rungs, and can be solved exactly, while the intrachain hopping term couples the rungs and is treated perturbatively.
The Hamiltonian for a single rung has 16 possible states. These are shown in Table I.
For U < 3t ⊥ the lowest energy rung state is a spin-singlet containing two electrons (state 0 in Table I). At U = 3t ⊥ there is a level crossing, and for U > 3t ⊥ the lowest energy rung state becomes a doublet S = 1/2 state with a single electron in a symmetric (bonding) orbital. The eigenstates of H 0 are then direct products constructed from the possible rung states.
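The level crossing at U = 3t ⊥ quoted above can be verified independently by diagonalising a single rung (a two-site Hubbard dimer); the sketch below, which is not part of the series-expansion code itself, builds the 16-dimensional rung Hamiltonian from Jordan-Wigner fermion operators and compares the lowest two-electron energy with the one-electron bonding energy −t ⊥ .

```python
# Exact diagonalisation of a single rung (two-site Hubbard dimer) to locate the
# level crossing between the two-electron singlet and the one-electron bonding
# state, expected at U = 3 t_perp (units: t_perp = 1).
import numpy as np

# Jordan-Wigner construction of annihilation operators for the four modes
# (site1-up, site1-down, site2-up, site2-down).
a = np.array([[0.0, 1.0], [0.0, 0.0]])
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def mode_op(j, n_modes=4):
    ops = [Z] * j + [a] + [I2] * (n_modes - j - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

c = [mode_op(j) for j in range(4)]           # 1up, 1dn, 2up, 2dn
n = [ci.T @ ci for ci in c]
N_tot = sum(n)

def rung_hamiltonian(U, t_perp=1.0):
    hop = -t_perp * (c[0].T @ c[2] + c[1].T @ c[3])
    return hop + hop.T + U * (n[0] @ n[1] + n[2] @ n[3])

for U in (2.0, 2.5, 3.5, 4.0):
    vals, vecs = np.linalg.eigh(rung_hamiltonian(U))
    ntot = np.diag(vecs.T @ N_tot @ vecs)    # particle number of each eigenstate
    e2 = vals[np.isclose(ntot, 2)].min()     # lowest two-electron state
    e1 = vals[np.isclose(ntot, 1)].min()     # lowest one-electron state (= -t_perp)
    print(f"U = {U}: E(2e) = {e2:+.4f}, E(1e) = {e1:+.4f}")
# analytic check: E(2e) = (U - sqrt(U**2 + 16))/2, which crosses -1 at U = 3.
```

For U below 3 the two-electron singlet lies below −t ⊥ , while above U = 3 the one-electron doublet takes over, reproducing the crossing discussed in the text.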
The ground state of H 0 for the half-filling system (n = 1) is the state that has each rung in a spin singlet. This is true even for U > 3t ⊥ , as transferring an electron from a doubly occupied rung to another doubly occupied rung costs energy.
To compute the perturbation series we fix the values of t ⊥ and U and expand in powers of x ≡ t/t ⊥ . Without loss of generality we set t ⊥ = 1 to define the energy scale. The series are then evaluated at the desired value of t using standard Padé approximants and integrated differential approximants [21].
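For illustration, a plain [L/M] Padé approximant can be constructed from the series coefficients by matching the first L+M+1 terms; the helper below solves the corresponding linear system and is only a sketch — the integrated differential approximants of Ref. [21] are not reproduced here.

```python
# Sketch of an [L/M] Pade approximant built from series coefficients
# a[0..L+M]: find Q (with Q[0] = 1) and P such that
# P(x)/Q(x) = sum_n a_n x^n + O(x^(L+M+1)).
import numpy as np
from math import factorial

def pade(a, L, M):
    a = np.asarray(a, dtype=float)
    assert len(a) >= L + M + 1
    # denominator coefficients q_1..q_M from
    # sum_{j=1..M} a[L+i-j] q_j = -a[L+i],  i = 1..M   (a_k = 0 for k < 0)
    A = np.array([[a[L + i - j] if L + i - j >= 0 else 0.0
                   for j in range(1, M + 1)] for i in range(1, M + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -a[L + 1: L + M + 1])))
    # numerator from the convolution p_n = sum_{j=0..min(n,M)} a[n-j] q_j
    p = np.array([sum(a[n - j] * q[j] for j in range(0, min(n, M) + 1))
                  for n in range(L + 1)])
    return p, q

def eval_pade(p, q, x):
    return np.polyval(p[::-1], x) / np.polyval(q[::-1], x)

# sanity check against a known function: the series of exp(x)
a = [1.0 / factorial(n) for n in range(7)]
p, q = pade(a, 3, 3)
print(eval_pade(p, q, 1.0), np.exp(1.0))   # both ~2.71828
```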
We have also carried out large-scale DMRG calculations [9]. Two different DMRG algorithms have been employed. Both are 'infinite lattice' algorithms [9] with open boundary conditions. The first method uses a superblock consisting of the usual system and environment blocks with two added rungs in the middle. The system/environment blocks are augmented by one rung at a time and the superblocks always have an even number of rungs.
The second method is similar except that only one rung is kept in the middle, meaning that the superblocks have an odd number of rungs. The second method allows more states to be retained in the blocks, whilst the even lattices dealt with in the first method are usually considered to be more desirable for finite-size scaling (FSS) studies. For the first method typical calculations involve ladders with up to 60 rungs, keeping up to 550 states per block.
For the second method the superblocks studied typically reached 81 rungs and up to 1500 states were retained per block. It should be noted that the 'infinite lattice' algorithm, despite its name, can be used to obtain accurate results for finite lattices, as we shall show in what follows.
A. Ground state energy
We first consider the ground state energy E 0 at half-filling. The series method yields an expansion of E 0 in powers of x ≡ t/t ⊥ . The series have been computed to order x^14 for various U/t ⊥ . The coefficients for U/t ⊥ = 8 are given in Table II, and the other coefficients can be provided on request. The cluster data for this one-dimensional problem are trivial. The limiting factor is the size of the matrices used in obtaining the cluster energies. DMRG gives the ground state energy directly for a given lattice, and finite-size scaling must be used to extract the bulk result.
In Figure 1 we show our results for the ground state energy in the half-filled case versus t for various U/t ⊥ obtained both by series expansions and DMRG calculations. For given t the energy increases with U as expected. For fixed U the energy decreases slowly with increasing t. The series are well converged for t/t ⊥ ≲ 0.6, but the convergence deteriorates rapidly at that value. The DMRG is, however, well converged even for larger t and agrees well with the series results for smaller t. A remarkable feature of the ground state energy is the sharp downturn which occurs near t/t ⊥ ∼ 0.5 − 0.6 for U/t ⊥ ≲ 1, but which becomes smoother for larger U. For the case of free electrons (U = 0) the ground state energy is known in closed form and, as mentioned above, the crossover represents a transition from band insulator to a conductor with gapless charge and spin excitations. The qualitative similarity between the U = 0 case and small U (e.g. U/t ⊥ = 0.5 in the figure) suggests that spin excitations may remain gapless, or nearly so, for small finite U and large t.
We can compare our results for the ground state energy with those of Kim et al. [22], obtained via a variational method. These are shown in Figure 2. The results are in good agreement for small t/t ⊥ , but it appears that the variational method significantly overestimates the ground state energy for large t/t ⊥ .
B. Spin excitations
We now turn to the excitations, which we have computed directly using Gelfand's method [17]. The lowest energy spin excitation branch results from exciting a rung from a spin singlet (state 0 in Table I) to a triplet (any of states 5,6,7 in Table I). Nonzero t allows this to propagate coherently along the chain, giving rise to a triplet magnon excitation. Figure 2 shows dispersion curves ∆ s (k) for this excitation for fixed t/t ⊥ = 0.5 and various U/t ⊥ .
For larger t/t ⊥ the series become too erratic. For all cases the energy minimum occurs at k = π. As U increases the bandwidth of the spin excitation decreases.
The excitation energy at k = π defines the spin gap ∆ s ≡ ∆ s (π). The raw data are shown in Figure 3, where we plot ∆ s /t ⊥ versus t/t ⊥ for various U/t ⊥ . For small t our series give rather precise estimates. In the limit t = 0 the spin gap is $\frac{1}{2}\big(\sqrt{U^2 + 16t_{\perp}^2} - U\big)$, which is 2t ⊥ for U = 0 and decreases with increasing U. Near t/t ⊥ ∼ 0.4-0.5 there is a crossover and beyond this point the spin gap for large U exceeds that for small U. For U/t ⊥ ≲ 2 this crossover becomes very sharp. The series extrapolations would indicate that the spin gap actually vanishes at t/t ⊥ ≃ 0.5 − 0.6, but we believe this is an artefact of poor convergence in this region. It is expected [8,11] that at half-filling the system will have a finite spin gap throughout the phase diagram for any U > 0, due to 'umklapp' processes. It is clear however that the spin gap becomes quite small for t/t ⊥ ≳ 0.5.
We have also used our DMRG algorithms to calculate ∆ s . In order to obtain insight into the potential limitations of DMRG methods for this problem, we consider the finite-size scaling of ∆ s in the exactly solvable U = 0 case. As mentioned, for U = 0, the system is gapped for t ⊥ > 2t and gapless for t ⊥ < 2t. In Fig. 4 we plot ∆ s as a function of 1/N rung for three values of the ratio t/t ⊥ . For t/t ⊥ = 0.49 (just below the critical ratio), the system has a small spin gap, and ∆ s scales smoothly to its bulk value, the finite-size corrections vanishing as 1/N 2 rung (as is to be expected for a gapped system with open boundary conditions). For t/t ⊥ = 0.51 (just above the critical ratio), the gap suddenly begins to display a "sawtooth" dependence on N rung , with the points of the saw scaling towards zero as N rung → ∞. This behaviour becomes more pronounced as we move further beyond the critical ratio. It appears that multiple crossovers are occurring, with new states crossing over to the bottom of the spectrum as N rung increases. These sawtooth oscillations are presumably due to the 2-band structure: every so often it is energetically favorable for an extra electron to be added to the top band rather than the bottom band, and a crossover occurs. This oscillatory behaviour has not been exhibited before, as far as we are aware.
In order to calculate the bulk spin gap successfully with DMRG, we must first accurately calculate ∆ s for a number of lattice sizes N rung , and then extrapolate to the bulk limit, assuming some scaling ansatz for the finite N rung corrections. An example of this is shown in Table III, where we compare DMRG and exact results for ∆ s for finite lattices and the bulk limit. In this case, where the FSS is smooth, it is possible to obtain a reasonable estimate for the bulk spin gap, the real error being around 0.1%. Two potential problems emerge from our studies of gapless systems in the U = 0 case for t/t ⊥ > 0.5. Firstly, because of the oscillating FSS it might be difficult to obtain accurate DMRG results for very large lattices in a regime where the gap is small or vanishing. Convergence of finite lattice results with the number of states, m, retained per block, must be monitored over a large range of m values. For a given lattice size, improved DMRG estimates can be obtained by using a finite lattice algorithm [9]. However, even if highly accurate results are available for a number of lattice sizes, a second and more pressing problem is that, as a result of the oscillations or erratic FSS, extremely large lattices may be needed in order to reach a regime where a suitable scaling ansatz can be reliably used to extrapolate to the bulk limit, as can be seen for the gapless cases in Fig. 4.
Fortunately, the presence of electron repulsion smooths out the FSS, allowing reliable DMRG estimates of ∆ s some way beyond t/t ⊥ = 0.5. In Fig. 5, we show the FSS of DMRG estimates of ∆ s in the U/t ⊥ = 1 case for various t/t ⊥ . We find that up to around t/t ⊥ = 0.55, the corrections to the bulk results for ∆ s scale as 1/N 2 rung , as in the gapped case for U = 0. As t/t ⊥ is increased, however, we observe initially oscillatory or erratic FSS, followed by a crossover to linear dependence of ∆ s on 1/N rung , as might be expected for a system with a small gap. This is illustrated in Fig. 5 for the t/t ⊥ = 0.6 case, where a crossover to linear behaviour is observed as the lattice size reaches around 30 rungs. As mentioned, it is important to first assess the DMRG convergence for the finite lattices before attempting extrapolations. In Table IV, we show the DMRG convergence of ∆ s with m for the odd-rung algorithm, with m ranging from 200 to 1500. It can be seen in this case that the finite lattice estimates are sufficiently well converged to afford reliable extrapolations. In Fig. 5 results from the odd-rung algorithm with m = 750 are plotted along with results from the even-rung algorithm with m = 800. Good agreement can be seen between the two sets of results in the linear regime. In order to make bulk estimates for t/t ⊥ ≥ 0.6, we assume a linear scaling ansatz. Presumably, if the resulting estimate is non-zero, as is the case, e.g., for t/t ⊥ = 0.6, there should be a crossover from linear to quadratic scaling for larger lattices still, in which case the estimates are (tight) lower bounds on the spin gap. For the case of U/t ⊥ = 1, we can carry this out up until t/t ⊥ reaches around 0.75. Beyond this value, the FSS is too erratic to permit accurate finite lattice estimates of the spin gap for sufficiently large lattices. In the inset to Fig. 5 this erratic FSS is shown for the t/t ⊥ = 0.8 case.
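The bulk extrapolations described above amount to simple polynomial fits in the inverse lattice size; a minimal sketch (the finite-ladder gap values used here are invented for illustration):

```python
# Sketch: bulk extrapolation of the spin gap from finite-ladder DMRG values,
# assuming Delta_s(N) = Delta_inf + a / N**2 (gapped regime) or
# Delta_s(N) = Delta_inf + b / N (small-gap crossover regime).
# The gap values below are made up for illustration only.
import numpy as np

N_rungs = np.array([16, 24, 32, 48, 64])
gaps = np.array([0.210, 0.196, 0.190, 0.185, 0.183])   # illustrative numbers

# quadratic-in-1/N ansatz: fit Delta_s against 1/N^2
coeff2 = np.polyfit(1.0 / N_rungs**2, gaps, 1)
print("1/N^2 extrapolation, bulk gap ~", coeff2[-1])

# linear-in-1/N ansatz (a lower bound in the crossover regime)
coeff1 = np.polyfit(1.0 / N_rungs, gaps, 1)
print("1/N   extrapolation, bulk gap ~", coeff1[-1])
```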
Plots of the DMRG bulk estimates of ∆ s as functions of t/t ⊥ are included in Fig. 3. The spin gap obtained in this way is indistinguishable from the series values for small t. However, the DMRG also provides well converged results for larger t provided that U is not too small.
In the U/t ⊥ = 2 case, for example, ∆ s undergoes a very rapid decrease up till t/t ⊥ ≃ 0.6, and then flattens out at a small but finite value. The same behaviour was already seen in the previous DMRG calculation of Noack et al. [11]. Noack et al. [11] did their calculation at a fixed lattice size 2 × 32 sites, however, whereas we performed a careful finite-size scaling analysis to confirm that the spin gap remains finite, rather than scaling to zero in the bulk limit.
In the U/t ⊥ = 1 case, a similar effect occurs, but the FSS behaviour becomes completely erratic and the extrapolations fail at around t/t ⊥ ≈ 0.75. In Fig. 6 we depict the region in the (U/t ⊥ )-(t/t ⊥ ) plane where the DMRG either fails to give reasonable results due to erratic FSS, or indicates a very small or vanishing gap, by marking it with an "F".
Also included in Fig. 6 is the line where the series appear to give a vanishing spin gap. This line indicates the crossover in the physics of the system from band-insulator to strongly correlated Mott insulator.
As mentioned in Section I there is considerable interest in the doped Hubbard ladder
where the electron density n < 1. The series method is not well suited to study the effect of finite doping. However we are able to compute the ground state and excitation energies when the system contains one or two holes.
A. 1-Hole Case
In the t = 0 limit a single hole will change one of the singlet rung states into an S z = +1/2 or −1/2 bonding state, with an energy increase of −t ⊥ − λ 1 . Finite t will allow the hole to propagate along the ladder, giving a quasiparticle band. Figure 7 shows the quasiparticle excitation energy ∆ 1h /t ⊥ as a function of wavenumber for the case t/t ⊥ = 0.5 and various U/t ⊥ . For U = 0 we have the exact result ∆ 1h (k) = t ⊥ + 2t cos(k), and this is seen through the vanishing of all higher terms in the series. For the choice t/t ⊥ = 0.5 this gives ∆ 1h = 0 at k = π. We also show, for comparison, the approximate dispersion curve for U/t ⊥ = 4 obtained by Endres et al. [14], through their "local rung approximation". This appears to overestimate the energy by about 0.4t ⊥ , although the overall shape is very similar.
For increasing U the energies are depressed and there is a small decrease in the quasiparticle bandwidth although the overall cosine shape remains. The minimum of the quasiparticle spectrum occurs at k = π throughout. The energy zero is taken to be the ground state energy of the half-filled ladder. Each quasiparticle band crosses the zero level, indicating that the overall energy of the system is reduced when a hole is created, and this is also marked in Fig. 6. Figure 8 shows the "quasiparticle gap" as a function of t/t ⊥ for various U/t ⊥ obtained from the series and from DMRG. Both methods agree up to t/t ⊥ ∼ 0.5, beyond which the series fail to converge. This is the same crossover region seen in the spin gap studies. The DMRG results suggest an upturn or change in slope for larger t/t ⊥ and not very large U, as shown in the Figure for U = 1. However this remains somewhat speculative as the scaling behaviour of the DMRG estimates is unusual. We illustrate this in Figure 9, where we plot the gap (for U/t ⊥ = 1) versus 1/N 2 . The behaviour shows a qualitative change between t/t ⊥ = 0.5 and 0.6 becoming oscillatory and with a change of sign of limiting slope. Nevertheless the larger N points scale quite well. Noack et al. [11] saw s similar break in the behaviour of the charge gap at the crossover for U/t ⊥ = 4.
B. 2-Hole Case
We would also like to explore the system doped with two holes, to see whether binding occurs between the holes. From Table I, one can see that at zeroth order (t = 0), the energy gap for two holes sitting on the same rung is (1/2)(√(U² + 16t_⊥²) − U), which is larger than the gap √(U² + 16t_⊥²) − U − 2t_⊥ for two holes on different rungs. Thus it is not energetically favourable to have two holes on the same rung. Our present series methods, unfortunately, are unable to treat the latter case of two holes on separate rungs, so here we restrict ourselves to exploring whether the former state becomes bound at larger t. One starts from a state with both holes on a single rung and all other rungs in spin singlet states; the hopping term then allows these holes to move along the ladder, and so generates a dispersion relation for this state, for which we have obtained a second-order series result at U/t_⊥ = 8. This result shows that the minimum energy is at k = π, rather than at 0, due to the fact that this is not the lowest-energy two-hole state. Comparing with the one-hole dispersion relation, we find no evidence for binding of this state at higher t.
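For completeness, here is a minimal sketch of the t = 0 bookkeeping behind these two gaps, assuming the standard two-site Hubbard rung spectrum in which the half-filled singlet rung has energy λ_1 = (U − √(U² + 16t_⊥²))/2, a one-hole bonding rung has energy −t_⊥, and an empty rung has energy 0 (our reading of Table I):

```latex
% t = 0 energy cost of two holes, measured from the half-filled ground state
% same rung: one singlet rung (energy \lambda_1) becomes one empty rung (energy 0)
\Delta^{\mathrm{same}}_{2h} = 0 - \lambda_1
  = \tfrac{1}{2}\Bigl(\sqrt{U^{2} + 16\,t_\perp^{2}} - U\Bigr),
% different rungs: two singlet rungs become two one-hole bonding rungs
\qquad
\Delta^{\mathrm{diff}}_{2h} = 2\bigl(-t_\perp - \lambda_1\bigr)
  = \sqrt{U^{2} + 16\,t_\perp^{2}} - U - 2\,t_\perp .
```

Since √(U² + 16t_⊥²) < U + 4t_⊥ for all U > 0, the difference ∆^same_2h − ∆^diff_2h = 2t_⊥ − (1/2)(√(U² + 16t_⊥²) − U) is strictly positive, which is the statement above that two holes on the same rung are never favoured at t = 0.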
Our series results do not preclude the existence of a two-hole bound state of more complex structure, and there are indications from other work that this can occur. Noack et al. [10,11] find, for a 2 × 32 lattice, a small binding energy (E_b ≃ 0.14) for two holes, and a pair wavefunction which has roughly equal amplitudes on a rung and between nearest neighbours along the same leg of the ladder. Finite-size effects may, however, be large enough to mask the true behaviour. Kim et al. [22] also discuss pair formation using both DMRG and a variational method. They conclude that, at least for large U, holes favour adjacent rungs to minimise the Coulomb energy. Our series method is unable to explore such complex pair states.
Instead, we have used the DMRG method to compute the minimum energy of the system with one and two holes, on ladders of size 2 × L up to L = 60, and hence the binding energy, defined by

    E_b = 2E_0(N_↑ − 1, N_↓) − E_0(N_↑, N_↓) − E_0(N_↑ − 1, N_↓ − 1),

where E_0(N_↑, N_↓) is the ground state energy with N_↑ (N_↓) up (down) electrons and (N_↑, N_↓) corresponds to half filling. For two well-separated holes 2E_0(1 hole) = E_0(0 holes) + E_0(2 holes), so E_b = 0 in the unbound case. We show E_b as a function of t/t_⊥ for U/t_⊥ = 2 and 8 in Figure 10. As can be seen, there is no binding for small t/t_⊥: the binding energy is zero, corresponding to an unbound, well-separated pair of holes. For t/t_⊥ larger than a critical value (t/t_⊥)_c, however, hole binding becomes energetically favourable; in fact, as Figure 10 shows, E_b increases from zero extremely quickly once past the critical (t/t_⊥)_c. Now the binding of two holes is expected to be a necessary, though not sufficient, precondition for the phenomenon of phase separation (the binding of many holes). Figure 10 leads us to suppose that phase separation may well occur for large t/t_⊥; or else, at fixed t/t_⊥ = 1, phase separation may well occur at small U/t_⊥. This would accord with our current knowledge of the t − J ladder, where the exchange coupling J ∼ 4t²/U, and phase separation occurs [23] at small U/t, i.e., small t/J. The boundary for two-hole binding is shown in our "phase diagram", Figure 6. As far as we are aware, no explicit search for phase separation has yet been made in the Hubbard ladder.
V. DISCUSSION AND CONCLUSIONS
We have made the first application of the series expansion method to the Hubbard model on a two-leg ladder, and have also used the DMRG method to explore its properties at half-filling at T = 0. Our series approach starts from a basis of rung states, appropriate for small values of the chain hopping parameter t, and we obtain perturbation series for various quantities in powers of t, typically up to 10 terms. The series are well behaved out to typically t/t_⊥ ∼ 0.6, but the convergence becomes problematic beyond that point. Our results are complementary to both analytical weak-coupling calculations and other numerical (DMRG and QMC) results.
We have calculated the ground state energy and the triplet spin excitation energies at half-filling, and the excitation energies of one-hole and two-hole states relative to the half-filled case. At half-filling the system is believed to be in a spin-gapped insulating phase [8,11] for all values of the parameters U, t_⊥. Our results confirm those of Noack et al. [11], showing a sharp crossover at smaller U/t_⊥ from strongly correlated spin-liquid or Mott insulator behaviour (as for a simple Hubbard chain) to "band insulator" behaviour as t_⊥ is increased. The spin gap becomes very small in the spin-liquid region. We have performed a careful finite-size scaling analysis using DMRG to show that the spin gap nevertheless remains finite, rather than scaling to zero in the bulk limit, at least for those moderate values of U/t_⊥ where a definite statement is possible.
A single hole doped into the Hubbard ladder propagates as a well-defined quasiparticle, and we have computed the dispersion relation for this quasiparticle. The lowest-energy two-hole state was found not to be amenable to our series approach, but the DMRG calculations show that two holes become strongly bound at larger t/t_⊥. This raises the question of whether phase separation occurs in this region, as it does in the t − J ladder [23]. This question must await future investigations.
TABLE I (fragment). Columns: state label, eigenstate, eigenvalue, name; the recoverable row shows the |00⟩ state, with eigenvalue 0, named the hole-pair singlet.

TABLE II. Nonzero coefficients of (t/t_⊥)^n for the ground state energy per site E_0/N, the spin gap ∆_s, the 1-hole energy ∆_1h(π), and the 2-hole energy ∆_2h(π) at U/t_⊥ = 8.

Figure caption fragments: The solid lines are extrapolations of the series using different integrated differential approximants, while the points connected by dashed lines are the results of DMRG calculations. Also shown are the exact results for U = 0, and the results of a variational approach [22] for U/t_⊥ = 4, 8 and t/t_⊥ = 0.5, 1 (full circle points). The region in which two holes bind (do not bind) is marked by B (NB). The region in which DMRG fails to determine whether the system has a spin gap, due to the irregular finite-size scaling, is marked by F. Also marked are the regions in which the 1-hole gap is positive/negative.
FIG. 7. Excitation spectra for one-hole bonding states for t/t_⊥ = 0.5 and various U/t_⊥, obtained from series expansions. Also shown are the results of the local rung approximation [14] at U/t_⊥ = 4 (dashed line).
"year": 2000,
"sha1": "fed4d4f95ddeead882c933979f105c2777e2ed2c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0007388",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fed4d4f95ddeead882c933979f105c2777e2ed2c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Chemistry",
"Physics"
]
} |
Age-moderating effect in prepotent response inhibition in boys with Asperger syndrome: a 2.5 years longitudinal study
Following our previous cross-sectional analysis, which indicated age-related improvements of response inhibition in a random motor generation task (the Mittenecker pointing test, MPT) in adolescents with Asperger syndrome (AS), the present study reports data from a 2.5-year follow-up examination of the original sample. We found more marked improvements within the follow-up interval in younger AS children, while older AS boys as well as typically developing (TD) boys remained at a relatively constant level throughout. The current longitudinal study further substantiates the notion that AS children (on average) catch up with TD children as they grow older with regard to the basic inhibition of developing routine response patterns.
Introduction
Neurocognitive dysfunction and especially deficits in specific executive domains are highly prevalent in autism spectrum disorder (ASD) and play an important role in the core problems of this pervasive developmental disorder, including difficulties in social interaction and communication as well as restricted and repetitive behaviors. In a recent meta-analysis involving a large number of ASD individuals, a moderate overall effect for impaired executive functions (EFs) was found that was relatively stable across different age groups [1]. Furthermore, no specific EF pattern unique to individuals diagnosed with ASD was identified across different EF subdomains (concept formation, mental flexibility, fluency, planning, response inhibition, working memory).
Maturational changes of the prefrontal cortex from childhood to adolescence result in marked improvements in EFs as children age [2,3]. Despite the quite extensive literature on executive dysfunction in ASD, relatively little is known about developmental trajectories in ASD, and long-term follow-up studies in particular are scarce. Up to now, the rare studies with a longitudinal design have shown substantial variability in the growth trajectories of autistic children's executive functions (for a review see [4]).
Previously, we reported cross-sectional data on a sample of 23 boys with Asperger syndrome (AS) and 23 matched healthy controls using a random motor generation task that examined two components of cognitive flexibility, inhibition of prepotent responses and memory monitoring/updating [5]. We found poorer inhibition and more repetitive response patterns only in young children diagnosed with AS, but no group differences between AS children and typically developing (TD) children (independent of age) in memory monitoring and updating. Following this cross-sectional analysis, which indicated age-related improvements of response inhibition in AS adolescents, the present study reports data from a 2.5-year follow-up examination of the original sample.
Participants and procedure
Thirty children from the initial study (n = 46) were available for further testing [15 boys with AS (M = 11.2 ± 2.8 years) and 15 TD boys (M = 12.2 ± 2.6 years; t(28) = 1.055, p = 0.30)]. Diagnostic criteria for AS conformed to ICD-10 (F84.5; DIMDI [6]), as diagnosed by a child psychiatrist in the initial study. Seven boys in the AS group (none in the TD group) had an additional diagnosis of an attention disorder, and four boys were treated with Ritalin or Atomoxetine. The two diagnostic groups did not differ in non-verbal intelligence (AS: M = 117.3 ± 6.4, TD: M = 115.4 ± 9.2; t(28) = 0.646, p = 0.523), as assessed by the age-appropriate form of the German adaptation of the Culture Fair Intelligence Test (CFT) in the initial study (CFT1 [7]; CFT2 [8]). The study was in accordance with the 1964 Declaration of Helsinki and was approved by the authorized Ethics Committee. Informed written consent was obtained from the parents of all participants prior to participation.
Mittenecker pointing test (MPT)
In line with the procedures of the initial study, all participants were tested individually. They were introduced to the MPT by a child psychologist and were given some practice trials to ensure task comprehension. The MPT is a computer-based test that instructs participants to press, with their index finger, the keys of a keyboard with nine unlabelled keys irregularly distributed over the board, in the most random or chaotic order possible, requiring 180 responses in total (for more details concerning the task see [5]). The responses were paced by an acoustic signal (1.2/s) to control the rate of production.
As outcome variables we used two quantitative measures of deviation from randomness, namely Symbol Redundancy (SR) and Context Redundancy (CR). SR taps the memory component (memory monitoring/updating) of random sequence generation [9,10]. An SR score of zero denotes maximal equality of the relative frequencies of the chosen keys and thus minimal predictability (best possible performance), whereas a (theoretical) score of 1.0 denotes maximal redundancy and thus a complete lack of randomness.
CR examines the inhibition of prepotent response sequences and is based on the sequential probability of each chosen key. In a truly random series all possible dyads (pairs of adjacent responses) are approximately equiprobable, whereas their frequencies deviate from equality if responses are continuously influenced by previously chosen alternatives. The major part of the interindividual variance in CR is due to the tendency to repeat certain response sequences en bloc [11]. Hence, CR reflects the inhibition of developing routines [9]. A CR score of zero denotes the complete absence of any regular pattern, while a score of 1.0 denotes the presence of a fixed, repetitive response pattern (i.e., maximal perseveration). For detailed information on the test and how to compute SR and CR, see [10].
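To make the two scores concrete, the following minimal Python sketch computes SR and CR from a key-press sequence. It assumes the usual information-theoretic definitions (SR from the first-order entropy of the key frequencies, CR from the conditional entropy of adjacent dyads); the exact MPT formulas, including any finite-sample corrections, are those given in [10].

```python
import math
from collections import Counter

N_KEYS = 9  # the MPT keyboard has nine unlabelled keys


def entropy(counts):
    """Shannon entropy (in bits) of a frequency table."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c > 0)


def sr_cr(sequence):
    """Symbol Redundancy and Context Redundancy of a key-press sequence.

    Assumed definitions (see [10] for the exact MPT formulas):
      SR = 1 - H1/Hmax          first-order entropy of key frequencies
      CR = 1 - (H2 - H1)/Hmax   conditional entropy of adjacent dyads
    with Hmax = log2(N_KEYS).
    """
    h_max = math.log2(N_KEYS)
    h1 = entropy(Counter(sequence))                         # single keys
    h2 = entropy(Counter(zip(sequence, sequence[1:])))      # adjacent dyads
    return 1.0 - h1 / h_max, 1.0 - (h2 - h1) / h_max


# A fixed repetitive pattern (maximal perseveration) drives CR to 1,
# while a long pseudo-random sequence drives both scores towards 0.
print(sr_cr([1, 2, 3] * 60))  # 180 responses, as in the MPT
```

Running the example with the fixed pattern 1, 2, 3, 1, 2, 3, ... yields CR = 1, the maximal-perseveration case described above, while SR ≈ 0.5 because only three of the nine keys are ever used.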
Results
The hypothesis that younger AS children catch up with their TD counterparts as they grow older was tested using an analysis of variance with measurement (first vs. second) as a within-subjects variable, diagnosis (AS vs. TD) as a dichotomous between-subjects variable, age at first measurement as a continuous between-subjects variable, and inhibition of developing routines (CR) as the dependent variable. Most relevant for the research question are the two-way interaction of measurement by diagnosis and the three-way interaction of measurement by diagnosis by age, which were both significant (F(1,26) = 15.4, p = 0.001, η_p² = 0.37 and F(2,26) = 6.2, p = 0.006, η_p² = 0.32, respectively). Cell means for the two-way interaction are depicted in Fig. 1. In AS boys, inhibition of prepotent responses, which was at a relatively poor level at the first measurement, improved significantly within the 2.5-year observation period (t(14) = 4.5, p = 0.001), with the effect that the marked average difference between AS and TD boys at the first measurement (t(28) = 4.3, p < 0.001) was no longer present when they were older (second measurement, t(28) = 1.3, p = 0.210).
The significant three-way interaction indicated that the improvements in CR varied with the participants' age at first measurement. To illustrate this interaction, Fig. 2 shows the raw improvements in CR scores for all participants (calculated as the difference between CR at the first and CR at the second measurement), as well as the respective regression lines (estimated improvements in CR for boys aged 5 to 15 years). The regression line in AS boys was significant (β = −0.58, p = 0.024); the regression line in TD boys was not.
Discussion
The current longitudinal study complements and strengthens prior cross-sectional findings (for a review see [4]) by demonstrating that deficits in inhibition skills in children with ASD, i.e., the weak ability to inhibit or override the tendency to produce a dominant or prepotent but inappropriate response, become less marked with age, at least in cognitively able children with autism. More precisely, we found more marked improvements within the 2.5-year follow-up interval in younger AS children, while older AS boys as well as TD boys remained at a relatively constant level throughout. Similar findings of an age-moderating effect in prepotent inhibition tasks were shown in a meta-analysis, with younger individuals diagnosed with autism exhibiting poorer response inhibition in tasks such as the Go/No-Go test or the Stop Signal test compared to adolescents and adults with ASD [12]. The dynamic nature of brain development in autism is also supported by neuroimaging studies showing complex changes from childhood into adulthood in whole and regional brain volumes [13][14][15], patterns of functional connectivity [16], and white matter maturation [17].
In the present study, no group differences were found for the memory monitoring/updating component of random sequence generation. Previous research has proposed typical and atypical developmental trajectories of distinct EFs in ASD, with intact performance in older youths with ASD on prepotent response inhibition, planning and set-shifting tasks, but no age-moderating effect for spatial working memory or interference control tasks [12,18,19]. Additionally, several studies have found an association between executive functions and adaptive behavior [20][21][22][23][24][25]. Compared with clinical psychometric tests of EFs, the MPT does not draw on academic skills that may vary considerably across age groups and may therefore be better suited to disentangle more specific processes of EFs in juvenile AS (see [5] for more details); it may thus be especially useful for studying the association between executive functions and adaptive behavior in ASD without intellectual deficits.
In summary, the current longitudinal study further substantiates the notion that AS children (on average) catch up with TD children as they grow older with regard to the basic inhibition of developing routine response patterns.
"year": 2018,
"sha1": "f4248c92c0ce5e49a07516c4af637851030c3c43",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00406-018-0915-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7d205b94dce46ea924d6a8aeb9fd016183affeda",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
The Dioscorea Genus (Yam)—An Appraisal of Nutritional and Therapeutic Potentials
The quest for a food-secure and safe world has led to continuous efforts toward improving global food and health systems. While the developed countries seem to have these systems stabilized, some parts of the world still face enormous challenges. Yam (Dioscorea species) is an orphan crop, widely distributed globally, that has contributed enormously to food security, especially in sub-Saharan Africa, because of its role in providing nutritional benefits and income. Additionally, yam has non-nutritional components called bioactive compounds, which offer numerous health benefits ranging from prevention to treatment of degenerative diseases. Pharmaceutical application of diosgenin and dioscorin, among other compounds isolated from yam, has shown increasing promise recently. Despite the benefits embedded in yam, reports on its nutritional and therapeutic potentials have been fragmented, and the diversity within the genus has led to much confusion. An overview of the nutritional and health importance of yam will help harness the crop's potential for combating hunger and malnutrition while improving global health. This review makes a conscious attempt to provide an overview of the nutritional and bioactive compositions and therapeutic potentials of yam diversity. Insights on how to increase its utilization for greater impact are elucidated.
Introduction
The nomenclature "Yam" applies to members of the Dioscorea genus of the Dioscoreaceae family within the order Dioscoreales [1]. The yam crop was initially referred to as Inhame by New Guinea users who predominantly used them as a starchy food source [2]. In the course of the 16th century, French sailors erroneously changed the name from Inhame to Igname. Within this period, English seamen called the crop "yam." Yam was a source of food for enslaved people during their East to West historic migration [2]. The roots, tubers and rhizomes of yams have been used since pre-historic times by aboriginal peoples as a food, as well as for traditional medicine [3]. Dioscorea comprises over 600 species, with varying global distribution across Africa, Asia, Latin America, the Caribbean and Oceania ( Figure 1A) [4]. Among the wide species reported, only about 10 species are estimated to have been domesticated across Africa, Asia and Latin America for food and income generation [5]. Yam plants have unique climbing and twining vines that sprout from their characteristic rhizomes or tubers. These rhizomes and tubers most often serve as photosynthetic sinks for starch and other secondary metabolites [6]. Yam's potential as a source of food is attributed to its high levels of carbohydrates including fiber, starch and sugar, contributing about 200 dietary calories per person per day to 300 million people in the tropics [12]. It also provides other nutritional benefits such as proteins, lipids, vitamins and minerals [16]. According to International Institute of Tropical Agriculture (IITA), the global annual consumption of yam is placed at 18 million tons, with 15 million tons only in West Africa amounting to about 61 kg per capita in the region [17]. In West Africa, yam tuber may be eaten boiled, fried, baked or roasted in combination with tomato stew, sauces and in some cases, typically in poor Yam's potential as a source of food is attributed to its high levels of carbohydrates including fiber, starch and sugar, contributing about 200 dietary calories per person per day to 300 million people in the tropics [12]. It also provides other nutritional benefits such as proteins, lipids, vitamins and minerals [16]. According to International Institute of Tropical Agriculture (IITA), the global annual consumption of yam is placed at 18 million tons, with 15 million tons only in West Africa amounting to about 61 kg per capita in the region [17]. In West Africa, yam tuber may be eaten boiled, fried, baked or roasted in combination with tomato stew, sauces and in some cases, typically in poor rural communities, with traditional palm oil. The tuber may also be pounded into moldable dough which is consumed with traditional African soups. The consumption of raw yam tubers of species D. soso, D. nako and D. fandra in Madagascar has also been reported [12]. In Asia, especially Japan and China, D. japonica and D. polystachya, usually eaten raw, can also be grated and used as an ingredient in tororo udon/soba noodles [18,19].
In the quest to identify other benefits of Dioscorea species, studies have revealed yam's therapeutic potential arising from its bioactive compound content. A bioactive compound is defined as a substance that can exert a biological effect, thus causing a reaction or triggering a response in living tissue [20]. A study on seven different varieties of yams (Dioscorea spp.) reported reasonable quantities of these compounds, including flavonoids, phenols, saponins, tannins and alkaloids [21]. Another study described the pharmacological activities of yam peptides and proteins, such as antioxidant, immunomodulatory, estrogenic, angiotensin I-converting enzyme inhibiting, carbonic anhydrase and trypsin inhibiting, chitinase, anti-insect, anti-dust mite, lectin and anti-proliferative activities [22]. These authors reported the therapeutic potentials of peptides and proteins isolated from several species of yams, including D. alata, D. cayenensis, D. japonica, D. pseudojaponica and D. polystachya (formerly known as D. opposita or D. batatas), as well as possible clinical applications for the treatment of inflammatory diseases, cardiovascular diseases, aging disorders, menopause, cancers and osteoporosis. Furthermore, the use of different species of Dioscorea for birth control and skin infections has been reported [23,24]. Nashriyah et al. [25] reported the use of D. hispida in cosmetics as a pigmentation remedy. This is not surprising, as the utilization of natural products with therapeutic potentials, including mineral, plant and animal substances, as main sources of drugs has existed since time immemorial [26]. This long-standing historic use of plants as therapeutic resources serves as proof of their efficacy. The diversity in yams has the potential to enrich the human body with starch and energy [27], as well as supplemental metabolites, while serving as a source of medicines at the levels of both traditional therapeutics and industrial pharmacy. The therapeutic potential of yam is of interest especially in developing countries, where a majority of the population lacks access to standard health care, which even when available is far beyond the reach of many locals due to the financial burden; yam may thus contribute health benefits beyond its nutritive value.
While there are pharmacological prospects for yam, its antinutritional components cannot be overlooked. The utilization of some species of yam, such as D. bulbifera and D. hispida, has been hindered by the bitter taste caused by the presence of furanoid norditerpenes (diosbulbin) and dioscorine, respectively [28,29]. However, in the context of extreme food scarcity, processing methods such as soaking, boiling and roasting are used to reduce or eliminate the bitterness. In addition, diosbulbin and dioscorine have been reported to trigger fatal paralysis of the nervous system [30]. This is evident in the utilization of these yam extracts in the preparation of arrow poisons or sedative drugs often used for hunting in different countries including Malaysia, Indonesia, South Africa and Bangladesh [4,25]. In addition, other toxic compounds and allergens such as oxalate, saponin, phytic acid, tannin and histamine have been reported [31], with some species such as D. hispida containing cyanide [32]. Shim and Oh [33] described histamine as one of the major compounds that induce allergic reactions such as itching of the skin. Although histamine may be the principal allergen in yam, studies have also reported the potential of dioscorin from D. batatas (presently known as D. polystachya) to cause allergic reactions [34].
To utilize the full potential of Dioscorea spp., given their contributory role in food security as a staple crop for a large number of the world's population and their beneficial health effects, harmonization of indigenous and scientific knowledge of this crop is imperative. Such knowledge will help to promote its utilization, thus contributing to the attainment of the United Nations (UN) Sustainable Development Goals (SDGs 1, 2 and 3), especially in developing countries. In this framework, the present review aims to reveal the global importance of yam by providing a comprehensive report on the nutritional and bioactive composition of Dioscorea species. In addition, the review will highlight the therapeutic benefits and impacts on human health associated with the consumption of yam, as well as future perspectives on yam production, utilization and research.
Yam Nutritional Value
Over the years, several scientific studies have evaluated and reported the nutritional qualities of different Dioscorea species, as shown in Tables 1 and 2. The nutritional abundance of yam varies depending on the species and variety, as well as the environmental conditions and agricultural practices employed during planting [35,36]. The analytical method used for estimation also plays a significant role in the nutritional levels recorded in yam. The major component of yam is water, which contributes up to 93% of the fresh/wet weight of the tuber, especially in D. bulbifera, D. delicata and D. pentaphylla [37][38][39]. While the moisture content of other Dioscorea species ranges between 51% and 90% (Table 1), it is noteworthy that D. hispida varieties were reported to have the lowest moisture content, ranging from 15.8%-37.8% of fresh weight [32]. High moisture values have also been reported in other root and tuber crops, ranging from 60%-79% in cassava, potato and sweet potatoes [40,41]. The moisture content of roots and tubers plays a very important role in determining the susceptibility of the crops to microbial spoilage and in maintaining the shelf life of produce. Thus, species and varieties with low moisture content have a longer shelf life and are more suitable for prolonged storage [42,43]. According to FAO, an estimated 22% and 39% of yam is lost postharvest in the major and minor seasons, respectively, due to high moisture content [44], contributing significantly to income loss for both farmers and traders. In addition to spoilage caused by high moisture content, it is important to highlight the importance of moisture as it relates to the nutritional content of yam. A study on the effect of storage on the nutritional content of yam revealed an increase in protein content, total sugar and reducing sugar from 13.0%-14.6%, 6.5%-9.8% and 1.7%-2.3%, respectively, as moisture decreased from 67.8% to 56.5% [45].
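Because the figures in Table 1 mix fresh-weight and dry-weight bases, comparisons across studies require an explicit moisture correction. The short Python sketch below illustrates the standard conversion; the function name and the sample values are ours, chosen purely for illustration.

```python
def fresh_to_dry_basis(pct_fresh, moisture_pct):
    """Convert a nutrient level from % of fresh weight to % of dry weight.

    pct_fresh    -- nutrient content as a percentage of fresh (wet) weight
    moisture_pct -- moisture content as a percentage of fresh weight
    """
    dry_fraction = 1.0 - moisture_pct / 100.0  # dry matter per unit fresh weight
    return pct_fresh / dry_fraction


# Illustrative values only: 2.0% protein in a tuber at 70% moisture
# corresponds to 2.0 / 0.30 = 6.7% of dry matter.
print(round(fresh_to_dry_basis(2.0, 70.0), 1))
```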
Yam as a Source of Dietary Energy
Yam is classified as an energy food source, especially for consumers in sub-Saharan Africa (SSA), because of its high starch content, which amounts to up to 80% on a dry weight basis (Table 1) [84]. Among the Dioscorea spp., D. alata has been reported to contain a relatively high starch content compared to others, up to 84.3% [54]. A study by Afoakwa et al. [85] evaluated the starch composition of seven of the yam species cultivated in SSA (D. cayenensis, D. rotundata, D. alata, D. bulbifera, D. esculenta, D. praehensalis, D. dumentorum) and reported a range between 63.2% and 65.7%. The variation in the starch content of D. alata recorded by these authors may depend on several environmental factors and agronomic practices, as well as the degree of maturity. The degree of maturity of the yam tuber has a great influence on the physicochemical quality of the food [86]. In addition, van Eck [87] highlighted the importance of maturity in influencing the starch and tuber yield of potatoes when compared to other sources of genetic variation. Similar values of starch have been recorded in other root and tuber crops, including potatoes, cassava and cocoyam, as well as in cereal grains. While studies have shown high starch content in yams, low (below 1%) starch contents were reported in D. delicata and D. olfersiana (Table 1) [37]. Wang et al. [88] reported a starch content ranging from 20% to 30% for Chinese yam. Yam starch granules consist of a mixture of branched (amylopectin) and unbranched (amylose) chain polymers of D-glucose, usually occurring at a percentage ratio of 78:22 [89]; nevertheless, the values may vary depending on species as well as genotype. Using the iodo-colorimetric method, Otegbayo et al. [90] reported a wide variability of amylose content, between 15.1% and 27.0%, across 43 genotypes of 5 species (D. alata, D. rotundata, D. cayenensis, D. dumetorum and D. bulbifera). Amylose content as high as 39.3 g amylose/100 g starch was reported in Thai yam (D. hispida) using a simplified amylose assay [91], comparable to the value reported in Taiwanese D. alata (39 g amylose/100 g starch) [92]. A much lower amylose content, ranging from 1.4% to 8.7% of starch, was recorded for six D. trifida of the Venezuelan Amazon [81]. It is important to point out that the discrepancies observed by the latter authors, who reported amylose contents of 1.4% to 8.7%, 1.4% to 3.6% and 2.2% to 5.9%, resulted from the analytical methods used: colorimetric (iodine binding with amylose), differential scanning calorimetry and amperometric methods, respectively. The ratio of amylose to amylopectin in yam starches is very important, as it affects starch properties and functional characteristics such as crystallinity and digestibility. While a high amylopectin content of starch granules results in low retrogradation susceptibility and high peak viscosity, starch granules with high amylose content demonstrate high retrogradation and absorb limited water during cooking [93].
In addition, yam contains dietary fiber, which plays a vital role in the digestive system of humans as well as animals. Adequate intake of fiber increases water-holding capacity, aids regular bowel movement and fecal bulk, and shortens intestinal transit. It also promotes beneficial physiological effects such as reduction of blood sugar and cholesterol levels and trapping of toxic substances, and encourages the growth of natural microbial flora in the gut [94][95][96][97]. The crude fiber reported in different species of yam ranged between 0.17% and 18.2%, with the minimum and maximum concentrations recorded in D. cayenensis and D. bulbifera, respectively. Several dietary fiber constituents, such as hemicelluloses, cellulose, lignin and pectins, have been reported in yam. Abara et al. [98] examined the dietary fiber components of four raw and cooked Dioscorea species (D. alata, D. bulbifera, D. cayenensis and D. rotundata) using detergent system analysis and reported low levels of fiber components, ranging from 0.08%-0.27% (lignin), 0.80%-1.13% (cellulose) and 0.15%-0.28% (hemicelluloses). Interestingly, no significant difference in dietary fiber was observed between the raw and cooked yams, irrespective of species. Among the species investigated, D. bulbifera had the highest cellulose and hemicelluloses, while lignin was higher in D. alata. A recent study investigated the cell wall carbohydrates of 43 genotypes from five yam species (D. rotundata, D. alata, D. bulbifera, D. cayenensis and D. dumetorum) using detergent system analysis and recorded the highest cell wall carbohydrate contents in D. bulbifera, at 2.1%, 3.2% and 1.1% for hemicelluloses, cellulose and lignin, respectively [99]. The discrepancies between the values in the two studies may be attributed to genotypic variation, as highlighted by Otegbayo et al. [99]. In line with this, Shajeela et al. [51] observed a higher crude lipid content in D. oppositifolia than in the other nine Dioscorea species investigated. Of the two varieties of D. oppositifolia tubers studied by these authors, D. oppositifolia var. dukhumensis (7.42 g/100 g) contained more crude lipid than the variety oppositifolia (4.40 g/100 g). The same trend was observed for crude protein, with D. oppositifolia var. dukhumensis having a higher crude protein content of 13.42 g/100 g compared to the 8.44 g/100 g recorded for D. oppositifolia var. oppositifolia [51]. This is in agreement with an earlier study by Arinathan et al. [100].
Protein is an essential nutrient required for growth and organ development in humans and animals. It helps in the repair of body tissue and the synthesis of enzymes and hormones, and also contributes to energy supply. Although roots and tubers are known for their low protein content compared to pulses/legumes (beans 15%-38%, pea 14%-36%, cowpea 20%-34%, soya bean 29%-50% and groundnut 17%-31%) and cereals such as maize (9.4%), sorghum (11.6%), rice (7.1%) and wheat (12.6%) [40,101], yam is reported to have higher dietary protein than other root and tuber crops, including cassava [102,103]. Yam species like D. alata have been reported to have protein levels (18.7%) comparable to or higher than those of grains [56]. In contrast to the high protein reported in D. alata, Alinnor and Akelezi [78] recorded a very low protein content (0.09%) in D. rotundata, as against the 8.28% reported for the same species by another study [77]. Yam tubers are a considerably good source of essential amino acids, including phenylalanine and threonine, but are limited in tryptophan and sulphur amino acids [102]. In addition, a more recent study on amino acid profiling of different yam species, including D. alata, D. bulbifera, D. esculenta, D. oppositifolia, D. pentaphylla, D. spicosa, D. tomentosa and D. wallichi, revealed the prevalence of aspartic acid (5.21-9.36 g/100 g) and glutamic acid (3.20-8.12 g/100 g) in all the yam species investigated [104]. In Africa, the consumption of starchy staples, primarily yam and cassava, contributes a great proportion of protein intake in the region, ranging from 5.9% in Southern and Eastern Africa to 15.9% in West Africa [102].
Other minor components such as lipids have also been reported in yam. Although these components are present in very small fractions [103], they have a great impact on the functionality of starch [84]. A wide range of lipid concentrations, between 0.03% and 10.2%, has been reported. The highest lipid level (10.2%) was recorded in D. hamiltonii [38]. It is important to note that lipid content is highly influenced by the extraction solvent used, as this determines the lipid fraction (bound or unbound) extracted [105]. While lipids supply energy to humans and animals and act as building blocks for cell membranes, they may also serve as pharmacological agents in the body [106]. Mondy and Mueller [107] highlighted the possibility of tuber lipid being of limited nutritional importance; however, it enhances the integrity of the cell membrane, confers resistance to bruising and reduces enzymatic browning of the tuber. In addition, the ash content of yams is reported to range from 0.1% to 8.8% (Table 1), which is comparable to the values reported in other roots and tubers, including potatoes, cassava and cocoyam [108][109][110]. Ash refers to the inorganic residue in any food material and directly signifies the total amount of minerals present in the food. However, recent studies have shown that ash content measurements of yam starch can be inflated by inefficient starch purification methods.
Yam as a Source of Minerals
Yams also contain inorganic components, namely minerals, which play very important roles in body metabolism (Table 2). These components can be divided into two groups based on the amounts the body requires. They include the macrominerals (potassium, sodium, calcium, phosphorus, magnesium, chloride and sulfur), which are required in larger amounts, and the microminerals or trace minerals, including copper, iron, manganese, zinc, iodine, cobalt, fluoride and selenium, which are needed in small amounts. A study on the mineral profiling of 43 genotypes from five yam species revealed intra- and inter-species variation in the mineral content of yam [99], as observed for other nutritional components. Potassium, sodium and chloride play a crucial role in the maintenance of total body fluid volume and charge gradients across cell walls [111] and are also responsible for nerve transmission and muscle contraction. The recommended daily allowance (RDA) of potassium in adults and children is 4700 mg/day and 3000 mg/day, respectively. Yams are better sources of potassium than other root and tuber crops (cassava, potatoes and sweet potatoes) as well as cereals (maize, rice, wheat) [112]. Otegbayo et al. [99] reported a range of 775 to 1850 mg/kg of potassium in yam, which agrees with previously reported values [42,55]. High levels of potassium, 1157-2016 mg/100 g dry matter, in different cultivars of D. alata were also recorded by a much earlier study [54], suggesting that yam contributes substantially to consumers' RDA of potassium.
In contrast to the high potassium concentrations recorded in different yam species (Table 2), sodium was detected at lower concentrations (0.35-380 mg/100 g dry weight). Another important macromineral present in yam is calcium. Calcium is the most abundant mineral in the body, with 99% found in bone and teeth and 1% in serum. It plays a vital role in muscle function, nerve transmission, vascular contraction, intracellular signaling, vasodilation and hormonal secretion [113]. The RDA of calcium (1000-1300 mg) varies with age, with younger individuals requiring more calcium for the development of bone and teeth [114]. As with other nutrients, the calcium content in yam varies with species and/or variety. So far, available studies reveal that D. bulbifera has a higher calcium content, of up to 1410 mg/100 g (Table 2) [66]. It is important to note that the metabolism of calcium involves other nutrients such as amino acids and vitamin D, as well as phosphorus. Phosphorus is important in maintaining healthy bones and teeth, the acid-base balance of the body, and DNA and RNA structure [115]. Phosphorus content was below the RDA (700 mg) recommended for healthy adults in all yam species studied except D. remotiflora (720 mg/100 g) [74]. In addition to the macrominerals mentioned above, yam is reported to be a good source of magnesium. Magnesium plays a vital role in the body's metabolic processes, nerve transmission, proper muscle function and cardiac rhythm, as well as the synthesis and stability of DNA [116,117]. Shajeela et al. [51] reported a magnesium range of 540-634 mg/100 g in two varieties of D. oppositifolia, while the authors recorded 532 mg/100 g and 578 mg/100 g of magnesium in D. pentaphylla and D. wallichi, respectively.
In addition, microminerals such as iron, manganese, zinc and copper have been reported in different yam species, contributing toward consumers' RDA of these nutrients. Iron is crucial for the formation of hemoglobin in red blood cells, which binds and transports oxygen in the body. Bashiri et al. [118] reported the importance of iron in respiration and energy metabolism processes. It also plays a very important role in the immune system and has been implicated in the synthesis of collagen and neurotransmitters. Otegbayo et al. [99] reported iron contents between 1.1 and 3.9 mg per 100 g in different species of yam, in the order D. dumetorum > D. bulbifera > D. alata > D. cayenensis > D. rotundata. Although the iron content of different species of yam has been reported to be low compared to cereals (maize, rice and wheat), Mohan and Kalidass [38] observed very high values of iron (103 mg/100 g) in D. pentaphylla. Importantly, the range at which iron is found in a majority of the yam species meets the RDA of iron (11-18 mg/day). Copper, zinc and manganese are components of numerous enzymes and have also been reported in different species of yam tubers, with values in wild yam being as high as 13.3 mg/100 g (D. pentaphylla), 7.1 mg/100 g (D. remotiflora) and 9.4 mg/100 g (D. bulbifera), respectively [51,74]. While yam contributes immensely to the nutritional requirements of consumers, its non-nutritional components and benefits are discussed below.
Bioactive Compounds in Yam
In addition to the nutritional constituents of Dioscorea species, a few studies have explored the pharmaceutical potential of different species of Dioscorea. They contain substantial amounts of secondary metabolites referred to as bioactive compounds. Bioactive compounds are produced within plants outside the primary biosynthetic and metabolic routes associated with plant growth and development. These compounds are not needed for the plant's daily functioning but may provide various functions such as protection, attraction or signaling [119]. Bioactive compounds can be described as phytochemicals found in plants/food that have the capacity to influence cellular or physiological activities in humans as well as in animals. They modulate metabolic processes by exhibiting numerous beneficial health effects, such as antioxidative, antihypertensive, anti-inflammatory and antidiabetic activities, inhibition of receptor activities, inhibition or induction of enzymes, and induction and inhibition of gene expression, thus promoting better health [120]. However, it is worth noting that bioactive compounds can also have antinutritional properties, thus eliciting toxicological effects in humans and animals. A wide range of bioactive compounds, such as phenolics, flavonoids, allantoin, dioscin, dioscorin, diosgenin, polyphenols, tannins, hydrogen cyanide, oxalate, saponin and alkaloids, has been reported in yam by several studies, as listed in Table 3. Their content in yam varies within and between species, as reported by Wu et al. [35]. These authors reported ranges of 0.032%-0.092% dry weight and 0.62%-1.49% dry weight for dioscin and allantoin, respectively, detected in 25 yam landraces from four species (D. alata, D. polystachya, D. persimilis and D. fordii). Inter- and intra-species diversity with respect to bioactive compound content has also been highlighted by Price et al. [121]. Using high performance liquid chromatography (HPLC), dioscin ranging between 0.086 and 0.945 µg/mL was reported in yams of African origin, including D. cayenensis, D. mangenotiana and D. rotundata. In addition, dioscin has been reported in other parts of the yam plant, such as rhizomes and roots, as reviewed by Yang et al. [122]. Hence, this section elucidates the bioactive constituents of yams.
Steroidal Saponin
Saponins are a diverse group of glycosidic compounds containing triterpenoid or steroidal aglycones that occur naturally in plants and in lower marine organisms. While triterpenoid saponins are mostly found in dicotyledonous angiosperms, steroidal saponins are mainly present in monocot species such as the Dioscoreaceae [134]. Steroidal saponins vary in their structural frameworks, sugars and aglycones, leading to a broad range of biological activities exerted by these compounds. Depending on the aglycone moiety, steroid saponins can be classified into the spirostane, stigmastane, furostane, cholestane, ergostane and pregnane families [135]. Members of the Dioscorea genus mostly contain spirostane and furostane steroid glycosides; however, studies have reported the possible presence of other steroidal saponins [136,137]. Predominantly, yams contain the spirostane steroidal sapogenin known as diosgenin (25R-spirost-5-en-3β-ol, molecular formula C27H42O3), which in the parent saponins forms the hydrophobic steroid aglycone linked to a hydrophilic sugar moiety [138,139].
The presence of diosgenin, the aglycone of dioscin, has been reported in several species of yams, making yam one of the leading sources of the steroidal sapogenin diosgenin, a precursor used for the synthesis of the steroidal drugs estrogen and progesterone in the pharmaceutical industry [140]. The synthesis of cortisone and hormonal drugs such as sex hormones, progestational hormones and other steroids from diosgenin extracted from D. zingiberensis, D. villosa and D. composita has been reported [141,142]. The utilization of diosgenin is linked to its pharmacological activities and medicinal properties, including decreasing oxidative stress, inducing apoptosis, suppressing malignant transformation, preventing inflammatory events, promoting cellular differentiation/proliferation and regulating the T-cell immune response, thus resulting in antidiabetic, anticancer, neuro- and cardiovascular-protective, immunomodulatory, estrogenic and skin-protective effects [134,138,143,144]. A study by Tada et al. [145] examined the efficacy of diosgenin extracted from D. composita or D. villosa against skin aging. Their findings revealed the potential of diosgenin to enhance DNA synthesis of skin, using a human 3D skin equivalent model, and to restore keratinocyte proliferation in aged skin. In addition, spirostanes possess better antimicrobial activity than other steroid glycosides; however, the activity depends on the type and sequence of the sugars [135].
The underlying mechanism of action of diosgenin may vary depending on the disease and has been reviewed by Cai et al. [139], Chen et al. [143] and Raju and Rao [138]. Although diosgenin has been reported in several yam species, its content varies considerably within the Dioscorea genus, with D. barbasco (Mexican wild yam) and D. zingiberensis (Chinese yam) being very important sources of diosgenin [146,147]. This may explain why China and Mexico account for more than half (about 67%) of world diosgenin production [148] and why the utilization of this compound takes preeminence in these countries. The presence of diosgenin has also been recorded in D. alata [144]. Contreras-Pacheco et al. [149] used gas chromatography-mass spectrometry (GC-MS) to quantify and characterize diosgenin in sixty accessions of two Dioscorea species (D. sparsiflora and D. remotiflora) collected from Jalisco, Mexico. Their findings showed diosgenin in a range between 0.02 and 0.16 mg/kg on a dry basis. In the same vein, Yi et al. [146] recorded a range of 0.78 mg/g to 19.52 mg/g in three Dioscorea species (D. zingiberensis, D. septemloba and D. colletti); however, diosgenin was not detected in D. polystachya. Huai et al. [150] revealed that the intra-species diversity, reflected in significant differences in the amount of diosgenin in different yam varieties, may be attributed to climatic factors and environmental conditions such as growing and storage conditions.
In addition to diosgenin, other steroidal saponins have been reported in several Dioscorea species; an extensive review of these compounds is provided by Sautour et al. [151].
Dioscorin
Dioscorin is the main storage protein of the Dioscorea tuber, accounting for approximately 90% of extractable water-soluble proteins [152,153]. Yam tubers contain two dioscorin proteins, dioscorin A and dioscorin B, encoded by genes that share about 69% sequence similarity [154,155]. Using Raman spectroscopy, Liao et al. [155] showed that the secondary structure of dioscorin A (molecular weight [MW] ~33 kDa) of D. alata L. is mostly alpha-helix, whereas that of dioscorin B (MW ~31 kDa) is anti-parallel β-sheet. The authors also highlighted that the microenvironments of the major amino acids (phenylalanine, tyrosine, methionine, tryptophan and cysteine) exhibited clear differences between dioscorin A and B [155]. An earlier study by this group of authors on the secondary structure of dioscorin from three yam species (D. alata L., D. alata L. var. purpurea and D. japonica) reported similar molecular masses across the three species. However, they observed dissimilarities in the amino acid composition and secondary structure of dioscorin between D. alata L., D. alata L. var. purpurea and D. japonica [156].
In contrast to other storage proteins, dioscorin also exhibits enzymatic activities, such as carbonic anhydrase, trypsin inhibitor, dehydroascorbate reductase, monodehydroascorbate reductase and lectin activities [152,157-159]. The antioxidant potential of dioscorin purified from yam tuber has also been reported [160,161]. Liu et al. [161] examined the antioxidant activities of dioscorin in the tubers of two Dioscorea species (D. alata L. and D. batatas [presently known as D. polystachya]) using DPPH (2,2-diphenyl-1-picrylhydrazyl) and hydroxyl radical scavenging activity assays, a reducing power test and an anti-lipid peroxidation test. Their findings revealed that dioscorins from the two yam species exhibited different scavenging activities against DPPH and hydroxyl radicals, with D. alata dioscorin showing higher antioxidant and scavenging activities than that of D. polystachya. This variation is ascribed to differences in amino acid composition and protein conformation [161]. Dioscorin has been reported to inhibit the activity of angiotensin-converting enzyme, suggesting its potential for the control of hypertension [162]. In addition, Fu et al. [163] demonstrated the potential of dioscorin isolated from D. alata as a Toll-like receptor 4 (TLR4) activator as well as an inducer of cytokine expression in macrophages through TLR4 signalling pathways, resulting in activation of the innate and adaptive immune systems.
Alkaloids
Alkaloids are a large and structurally diverse group of amino acid-derived heterocyclic nitrogen compounds of low molecular weight, widely distributed across plant kingdoms, microorganisms and animals, and deriving their name from their alkaline chemical nature [164]. Due to the complexity of alkaloids, no single taxonomic principle can completely classify them [165]. Alkaloids can be grouped into classes based on their natural and biochemical origin, as well as by chemical structure (heterocyclic and non-heterocyclic alkaloids). Structurally, they can be divided into classes such as quinolines, isoquinolines, indoles, pyrrolidines, pyridines, pyrrolizidines, tropanes, and terpenoids and steroids [166]. Presently, over 18,000 alkaloids have been reported in different plant species [167], including those of Dioscorea. While alkaloids have been utilized pharmaceutically because of their therapeutic activities, such as antimicrobial, antihypertensive, anticancer, anti-inflammatory and anti-human immunodeficiency virus (HIV) activities, among others, some alkaloids are highly toxic to humans and animals [164,165,168]. They may also contribute to undesirable sensory qualities, such as bitterness, in food crops such as yams [21]. Alkaloids have been reported in several species of yams (D. alata, D. oppositifolia, D. hamiltonii, D. bulbifera, D. pubera, D. pentaphylla, D. wallichii, D. glabra and D. hispida) at values between 7.2 and 16 mg per 100 g dry weight [123] (Table 3). A study from West Africa characterized the antinutritional factors in flour samples from four Dioscorea species and reported alkaloid levels ranging between 0.02 and 0.11 mg/100 g [169]. Similarly, Senanyake et al. [170] recorded alkaloid levels of 0.94, 1.64 and 1.89 mg/100 g in D. alata (Rajala), D. alata (Hingurala) and D. esculenta (Kukulala), respectively. Alkaloids have also been reported in D. belophylla (Prain) Haines at a concentration of 0.68 mg/100 g [125].
One of the major alkaloids in yam is dioscorine, a toxic isoquinuclidine alkaloid with molecular formula C13H19O2N [171,172]. Dioscorine has been reported in several yam species, including D. hispida, D. hirsuta, D. dumetorum and D. sansibarensis (Table 3) [128]. The presence of dioscorine in yam is associated with a bitter taste, and it has been shown to induce nausea, dizziness and vomiting. Dioscorine has exhibited the potential to trigger fatal paralysis of the central nervous system when ingested [4], which explains the use of dioscorine in the production of poisons for hunting purposes. Due to the water solubility of this toxin, it is easily removed by the traditional methods used for yam processing, such as washing, boiling and soaking.
Flavonoids
Flavonoids, ubiquitous in photosynthesizing cells, naturally occur as aglycones, glycosides and methylated derivatives [173]. Structurally, flavonoids (C6-C3-C6) contain a 2-phenyl-benzo(α)pyrane or flavane nucleus, which comprises two benzene rings (A and B) linked through a heterocyclic pyrane ring (C) [174]. Based on the position on the C ring at which the B ring is attached, the degree of unsaturation, and the oxidation of the C ring, flavonoids can be classified into subgroups [175]. In isoflavones, the B ring is linked at position 3 of the C ring, while the B ring of neoflavonoids is linked at position 4 of the C ring. Other subgroups of flavonoids, in which the B ring is linked at position 2, include chalcones, flavones, flavonols, catechins, flavanonols, flavanones and anthocyanins. The pharmacological potential of these compounds cannot be overemphasized. Flavonoids have shown antioxidant, anti-inflammatory, antihypertensive, antidiabetic, antimicrobial, anticonvulsant, sedative, antidepressant, antiproliferative, anticancer, cardioprotective, antiulcerogenic and hepatoprotective activity [176].
The presence of flavonoids has been reported in a wide variety of yams (Table 3). A recent study by Padhan et al. [177] investigated the flavonoid content of nine Dioscorea species, including D. alata, D. oppositifolia, D. hamiltonii, D. bulbifera, D. pubera, D. pentaphylla, D. wallichii, D. glabra and D. hispida. Their findings revealed flavonoid contents ranging from 0.62 to 0.85 mg/g dry weight, with the levels detected in D. alata and D. hispida being significantly lower than in the other Dioscorea species. In addition, the authors reported potential antioxidant activities of the yam tuber extracts ranging from 1.63 to 5.59%. D. bulbifera and D. pubera, with significantly higher amounts of bioactive compounds such as flavonoids, exhibited higher radical scavenging activity than the other Dioscorea species, irrespective of the screening method (DPPH, ABTS, nitric oxide or superoxide radical scavenging assay) used [177]. Flavonoids have also been quantified in D. belophylla (Prain) Haines (8.8 mg/100 g), D. alata (Rajala) (5.2 mg/100 g), D. alata (Hingurala) (9.8 mg/100 g) and D. esculenta (Kukulala) (12.4 mg/100 g) [125,170]. Another Nigerian study also reported the flavonoid content, as well as the associated antioxidant activity, of three yam species (D. cayenensis, D. dumetorum and D. bulbifera) [60].
Phenols and Phenolic Acids
Phenols and phenolic acids are a group of abundant secondary metabolites found in plants. Simple phenols are characterized by one or more hydroxyl groups (-OH) attached directly to the aromatic system, and comprise resorcinol, phenol, phloroglucinol and catechol [178,179]. On the other hand, the term phenolic acids describes phenolic compounds having a benzene ring, a carboxylic group and one or more hydroxyl and/or methoxyl groups in the molecule [180]. Phenolic acids are rarely present in free form, occurring instead in bound forms such as esters, amides or glycosides [181]. They comprise two parent structures, hydroxybenzoic acid and hydroxycinnamic acid. While the hydroxybenzoic acids (vanillic, gallic, protocatechuic and syringic acid) are the simplest phenolic acids found in nature, consisting of seven carbon atoms (C6-C1), the hydroxycinnamic acids (ferulic, caffeic, sinapic and p-coumaric acid) are the most common in fruits and vegetables and have nine carbon atoms (C6-C3) [182].
Dioscorea species have been identified as a possible source of phenols as well as phenolic acids (Table 3). Zhao et al. [183] evaluated the total phenolic acids of two yam species (D. oppositifolia and D. hamiltonii) using an HPLC system. Their findings confirmed the presence of total phenolic acids in both yams; however, the content in D. oppositifolia (297.3 mg/mL) was almost double that of D. hamiltonii (158.2 mg/mL), which contributed to the significantly better antioxidant, anti-inflammatory and immune-regulating effects of D. oppositifolia compared to D. hamiltonii. Among the phenolic acids detected in the two yam species, syringic acid was the most abundant in both [183]. Similarly, a study profiling the phenolic compounds of D. alata reported the presence of ferulic, sinapic, caffeic, p-coumaric and vanillic acid [184]. An earlier study reported the phenolic constituents of 10 yam cultivars from five species, highlighting the prevalence of these compounds in D. alata and D. bulbifera compared to the other species (D. cayenensis, D. dumetorum and D. rotundata) irrespective of cultivar [185]. The phenolic concentration of D. rotundata (12-69 mg catechin/100 g) was the lowest among the five species, and Graham-Acquaah et al. [186] reported a similar range (20-37 mg catechol/100 g) in two cultivars of D. rotundata. The latter authors observed significant variation across tuber sections: in one D. rotundata cultivar (Puna), phenol concentration followed the order head > mid-section > tail, whereas the head and mid-section of the Bayere fitaa cultivar had similar phenol concentrations, both significantly higher than that of the tail section [186]. Similarly, Padhan et al. [177] found significant variation in the phenol content (2.1-9.62 mg/g dry weight) of various yam species (D. alata, D. bulbifera, D. oppositifolia, D. pubera, D. hamiltonii, D. pentaphylla, D. glabra, D. hispida and D. wallichii), with a significantly higher concentration in D. bulbifera than in the other species.
Other Bioactive Compounds
In addition to the bioactive compounds described above, tannins, phytates and oxalates have been reported in different species of yam (Table 3) [42,54,99,187]. Their content in yams varies depending on species, variety, soil type and other environmental factors. Tannin, phytate and oxalate contents ranging from 56 to 1970 mg/kg, 270.7 to 379.4 mg/kg and 487 to 671 mg/kg on a dry matter basis, respectively, were recorded in 43 genotypes from five yam species (D. alata, D. rotundata, D. dumetorum, D. bulbifera and D. cayenensis) of major landraces in Nigeria [99]. These compounds are referred to as anti-nutritional compounds because of the toxic effects associated with their consumption. Tannins are water-soluble polyphenols known for their astringent taste and their ability to bind to and precipitate various organic compounds including proteins, amino acids and alkaloids, thus decreasing digestibility and palatability [188]. Structurally, tannins are classified into two groups, the hydrolysable and the condensed tannins. Studies on experimental animals showed possible effects of tannins on feed intake and efficiency, net metabolizable energy, growth rate and protein digestibility [189]. Their association with reduced sensory quality of food should also not be neglected. Other adverse effects of tannins, such as increased excretion of protein and essential amino acids and damage to the mucosal lining of the gastrointestinal tract, have also been reported [189]. On the other hand, studies have also shown the pharmacological potential of tannins, including antioxidant and free radical scavenging activity; anticarcinogenic, antimutagenic and cardio-protective properties; and antimicrobial activities [190].
Phytate (myo-inositol hexakisphosphate), a salt form of phytic acid, is the major storage form of phosphate and inositol in a wide range of plants [191]. Its classification as an antinutrient stems from its capacity to form complexes with nutrients, especially dietary minerals including zinc, calcium and iron, thus reducing their availability in the body and causing mineral-related deficiencies in humans. In addition, the formation of insoluble complexes between phytate and other food components such as proteins, lipids and carbohydrates has been reported, thereby negatively impacting the utilization of these nutrients [191,192]. Notwithstanding these negative effects, dietary phytate exerts numerous positive health effects in humans, including anticancer and antidiabetic activities and protection against renal lithiasis, dental caries, HIV and heart-related diseases, as extensively reviewed by Kumar et al. [191]. On the other hand, oxalate, a salt of oxalic acid, occurs as an end product of metabolic processes in plant tissues. Oxalates may occur as insoluble calcium oxalate, as soluble oxalate or as a combination of the two forms, as reported for yam tubers [99]. They bind to minerals, especially calcium, magnesium and iron, making these minerals unavailable to human and animal consumers [193]. Other detrimental effects, such as the intense skin irritation that results from contact with Dioscorea mucilage, have been linked to the presence of calcium oxalate crystals.
Furthermore, hydrogen cyanide, which is formed by enzymatic hydrolysis of glycosides in plants and is a neurotoxin found in cassava (Manihot esculenta), has been reported in yam, though at lower concentrations. Shajeela et al. [51] reported hydrogen cyanide ranging from 0.16 to 0.34 mg per 100 g in nine Dioscorea species, with the highest levels recorded in D. tomentosa and D. oppositifolia var. oppositifolia. Using spectrophotometric methods, cyanide was also detected in D. alata and D. hispida Dennst. sampled from Sleman, Yogyakarta [194]. Despite the antinutritional properties of yam tubers, the processing steps and methods applied before consumption have proven effective in destroying these toxic compounds [99,195].
Therapeutic Potentials of Yams
Pharmaceutical and phytomedical products derived from plants have a long history of use by natives as traditional medicine and proven evidence of efficacy. Gurib-Fakim [196] highlighted that tribal people in the tropics use plants for medicine as direct therapeutic agents and as starting points for the elaboration of semi-synthetic compounds. A majority of secondary plant compounds used in modern medicine were identified through ethnobotanical investigations. Ethnobotany is an interdisciplinary field of research with a specific focus on the empirical knowledge of indigenous people with respect to natural plant substances that influence health and wellbeing, and their associated risks [196]. Natives of different ethnic communities that either cultivate Dioscorea spp. or have access to wild stands have utilized them for medicinal purposes (Table 4). Unfortunately, documentation of the importance and utilization of Dioscorea is still limited. Research has shown that yam bioactive compounds and yam supplementation play vital roles in weight changes, the activities of carbohydrate digestive and transport enzymes, changes in intestinal morphology, alterations in blood lipids, reduction of lipid peroxidation and prevention of liver damage [197]. A recent study by Pinzon-Rico and Raz [198] highlighted the high demand and robust market for four wild yam species, D. coriacea, D. lehmannii, D. meridensis and D. polygonoides, in Bogota, Colombia. The four species are used for blood purification, probably because of their effect in reducing blood cholesterol, triglycerides, uric acid and glucose. The general acceptability and long history of local consumption of yams among communities across the continents may be attributed to their safety and portend high regulatory acceptability [199]. Current research has shown that yams contain substantial amounts of secondary metabolites, referred to as bioactive compounds, with the pharmaceutical potential discussed above; the health benefits associated with yam consumption are discussed hereafter.
Antimicrobial Potential of Yam
Over the years human medicine has improved greatly, but infections caused by microbes such as bacteria, viruses, fungi and parasites remain a lingering hurdle, especially with the emergence of widespread drug-resistant forms of these microbes and the adverse side effects of certain antibiotics [263]. Research into plant-sourced antibiotics has intensified, and the antimicrobial potential of certain yam species has been investigated and reported. Using crude extracts and compounds isolated from the bulbils of the African medicinal plant D. bulbifera, Kuete et al. [264] showed that these extracts and compounds can be effective drugs against a wide range of resistant gram-negative bacteria. The inhibitory effect of the extracts was concentration dependent but remained weaker than that of standard antibiotics. Likewise, tuber mucilage extract of D. esculenta has exhibited antibacterial properties against three human bacterial strains: Escherichia coli, Pseudomonas aeruginosa and Staphylococcus aureus [263]. The inhibitory potential of D. alata tuber extracts against Salmonella typhimurium, Vibrio cholerae, Shigella flexneri, Streptococcus mutans and Streptococcus pyogenes has also been reported [31]. In addition, endophytic fungi isolated from rhizome extract of D. zingiberensis, a Chinese medicinal plant, have shown antibacterial potential for use in the production of antibacterial natural products [265]. A similar trend was observed by Sonibare and Abegunde [127]: using the agar well diffusion and pour plate methods, the authors reported extracts of D. dumetorum and D. hirtiflora tubers as possible sources of antimicrobial agents, with their antimicrobial efficacy directly linked to the phenolic content of the plants and their DPPH scavenging activity. Kumar et al. [24] compared the antibacterial activity of D. pentaphylla tuber extracts and antibiotics (penicillin and kanamycin) on five selected bacterial strains (Vibrio cholerae, Shigella flexneri, Salmonella typhi, Streptococcus mutans and Streptococcus pyogenes). Their findings revealed a significant inhibitory activity of D. pentaphylla tuber extracts against the tested bacteria, which was attributed to the diosgenin content of the tubers.
Antioxidant Activities of Yam
Antioxidant activities have been reported in different species of Dioscorea, including D. alata, D. bulbifera, D. esculenta, D. oppositifolia and D. hispida (Table 4) [266-270]. Using a DPPH assay, Murugan and Mohan [268] reported radical scavenging activity of 79.3% for 1000 µg/mL D. esculenta extract, with an IC50 value of 38.33 µg/mL, whereas an IC50 value of 18.25 µg/mL was recorded for the reference standard (ascorbic acid). The same trend was observed with the ABTS assay, with radical cation scavenging activity ranging from 46.1% to 64.1% at concentrations between 125 and 1000 µg/mL and an IC50 value of 40.50 µg/mL, while the IC50 of trolox was 20.67 µg/mL. The authors attributed the antioxidant and free radical scavenging activity to the high content of total phenolic and flavonoid compounds. Similarly, Padhan et al. [177] examined the antioxidant activity of nine different yams (D. alata, D. bulbifera, D. pentaphylla, D. pubera, D. glabra, D. oppositifolia, D. wallichii, D. hispida and D. hamiltonii) cultivated in Koraput, India. Their findings revealed antioxidant capacities ranging from 1.63% to 5.59%, with IC50 values of 101-1032, 77.9-1164, 47-690 and 27-1023 µg/mL for ABTS, DPPH, nitric oxide and superoxide scavenging activity, respectively. Among the yam species evaluated, the antioxidant capacities of D. pubera, D. pentaphylla and D. bulbifera were significantly higher, with lower IC50 values, than those of the other species and the standards. The variation in scavenging activities observed across the yam species is attributed to the disparity in their content of bioactive compounds [177].
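For readers unfamiliar with how such figures are derived: radical scavenging percentages in DPPH-type assays are conventionally computed from absorbance readings, and IC50 values are then interpolated from the resulting dose-response curve. The sketch below illustrates the calculation in Python; the absorbance values and concentrations are hypothetical and are not data from the studies cited above.

```python
import numpy as np

def scavenging_percent(a_control: float, a_sample: float) -> float:
    """Standard DPPH formula: % inhibition relative to the blank control."""
    return (a_control - a_sample) / a_control * 100.0

def ic50_interpolated(concs, inhibitions):
    """Estimate IC50 by linear interpolation on the dose-response curve
    (assumes inhibition rises monotonically with concentration)."""
    return float(np.interp(50.0, np.asarray(inhibitions, float),
                           np.asarray(concs, float)))

# Hypothetical assay: absorbance of control vs. extract dilutions (µg/mL)
a_control = 0.82
a_samples = {125: 0.62, 250: 0.51, 500: 0.38, 1000: 0.17}
concs = sorted(a_samples)
inhib = [scavenging_percent(a_control, a_samples[c]) for c in concs]
print(dict(zip(concs, [round(i, 1) for i in inhib])))
print(f"IC50 ~ {ic50_interpolated(concs, inhib):.1f} µg/mL")
```

With these invented readings, the interpolated IC50 falls between the 250 and 500 µg/mL dilutions, mirroring how the IC50 values quoted above relate to the percent-scavenging figures.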
Anti-Inflammatory Activity of Yam
Several animal studies have reported the anti-inflammatory activity of Dioscorea species. Olayemi and Ajaiyeoba [271] investigated the anti-inflammatory potential of a defatted methanol extract of D. esculenta tuber in Wistar rats. Their findings showed a significant, dose-dependent inhibition of carrageenan-induced inflammation at doses of 100 mg/kg and 150 mg/kg, comparable to that of 150 mg/kg acetylsalicylic acid (the reference standard). Chiu et al. [130] confirmed that D. japonica ethanol extract elicited an in vivo anti-inflammatory effect on mouse paw oedema induced by λ-carrageenan. Pre-treatment of Sprague-Dawley rats with dried yam (Dioscorea spp.) powder before induction of duodenal ulcer by intragastric administration of cysteamine-HCl (500 mg/kg) revealed that the powder exerted a significant protective effect, reducing the incidence of cysteamine-induced perforation and preventing duodenal ulcer, an effect comparable to that of pantoprazole [272]. The observed effect of yam powder was attributed to its potential to lower inflammatory cytokines, scavenge free radicals and up-regulate carbonic anhydrase activity. A hydro-methanol extract of D. alata tubers, which contains different bioactive phytocompounds, has also been shown to significantly and gradually down-regulate pro-inflammatory signals compared to a reference control [203]. Mollica et al. [273] reported the anti-inflammatory activity of an extract of D. trifida on food allergy induced by ovalbumin in mice. In addition, extracts from leaf, rhizome and bulbil have exhibited anti-inflammatory activity.
Anticancer Activity of Yam
Synthetic medications and chemotherapy for cancer management come with a multitude of side effects that are often intolerable for cancer patients; thus, naturally occurring bioactive compounds in plants are increasingly becoming attractive alternatives. In vitro cytotoxicity screening provides insights and preliminary data that help select plant extracts with potential anticancer properties for future work and in vivo replication. A study by Itharat et al. [240] showed that aqueous and ethanol extracts of the rhizomes of D. membranacea and D. birmanica were cytotoxic against three human cancer cell lines while remaining non-cytotoxic to normal cells. The active compounds naphthofuranoxepins (dioscorealides A and B) and dihydrophenanthrene from the rhizome of D. membranacea (locally known as Hua-Khao-yen), used in Thai medicine, are highly potent and have exhibited cytotoxic activity against five types of human cancer cells [274-276]. This was supported by a more recent study highlighting the utilization of dioscorealide B as a possible anticancer agent for liver cancer and cholangiocarcinoma [277]. The hepatotoxic compound diosbulbin B has also been reported as a major antitumor bioactive component of D. bulbifera (air potato), acting in a dose-dependent manner with no significant toxicity in vivo at dosages between 2 and 16 mg/kg [278,279].
Plants with steroidal saponins have exhibited anticancer effects [280-282], and these bioactive compounds are abundant in different Dioscorea species. According to Zhang et al. [283], deltonin exerts an apoptosis-inducing effect, which may correlate with ROS-mediated mitochondrial dysfunction as well as activation of the ERK/AKT signaling pathways, suggesting deltonin as a potential cancer-preventive and therapeutic agent [284]. Cytotoxicity studies using steroidal saponins from Dioscorea collettii var. hypoglauca showed they were active against human acute myeloid leukemia under in vitro conditions [285]. In an anticancer drug screen by the National Cancer Institute (NCI), USA, protoneodioscin, a furostanol saponin isolated from Dioscorea collettii var. hypoglauca, exhibited cytotoxic effects against most cell lines, including leukemia, central nervous system, colon and prostate cancer lines [227]. It is interesting to note that no compound in the NCI database shares a similar cytotoxicity pattern with protoneodioscin, indicating a unique anticancer pathway. The polysaccharide RDPS-I, purified from the water extract of Chinese yam tuber, exerted a significant inhibition of melanoma B16 and Lewis lung cancer in mice in vivo [286]. Another study, by Chan and Ng [279], investigated the biological activities of a lectin purified from D. polystachya cv. Nagaimo. After 24 h of treatment, the authors observed an inhibitory effect of the lectin on the growth of several cancer cell lines, including nasopharyngeal carcinoma CNE2 cells, hepatoma HepG2 cells and breast cancer MCF7 cells, with IC50 values of 19.79 µM, 7.12 µM and 3.71 µM, respectively. Through the induction of phosphatidylserine externalization and mitochondrial depolarization, D. polystachya lectin has been shown to evoke apoptosis in MCF7 cells [279]. Furthermore, diosgenin has been reported to significantly inhibit the growth of sarcoma-180 tumor cells in vivo while enhancing the phagocytic capability of macrophages in vitro, suggesting that diosgenin has the potential to improve both specific and non-specific cellular immune responses [287]. The anticancer mechanism of action of diosgenin may be attributed to modulation of multiple cell signaling events, including molecular candidates associated with growth, differentiation, oncogenesis and apoptosis [288].
Anti-Diabetic Activity of Yam
Notwithstanding the availability of numerous anti-diabetic medicines in the pharmaceutical industry and on the market, diabetes and related complications remain a medical burden. Plants' anti-diabetic potential stems from their ability to restore the function of pancreatic tissues, which leads to three possible outcomes: increasing insulin output, inhibiting intestinal absorption of glucose and restoring the facilitation of metabolites in insulin-dependent processes [234]. There is minimal evidence on specific action pathways in the treatment of diabetes; however, most plants that contain bioactive substances such as flavonoids, alkaloids and glycosides offer a buffer in patient management [289]. D. dumetorum, commonly known as bitter yam, has long been used in the treatment of diabetes in traditional medicine owing to its hypoglycemic effect [233]. The literature indicates that aqueous extract of D. dumetorum tuber, known for its alkaloid (dioscoretine) content, controls hypercholesterolemia, hyperlipidemia and hyperketonemia [234]. In 2015, a study evaluating the anti-diabetic potential and free radical scavenging activity of copper nanoparticles (CuNPs) synthesized with the aid of D. bulbifera tuber extract revealed promising antidiabetic and antioxidant properties [210]. In animal studies, extracts of D. bulbifera and D. alata tuber produced significant reductions in blood glucose levels as well as increased body weight in rats treated with streptozotocin and alloxan, respectively [290,291]. Another study showed that consumption of D. bulbifera by female diabetic rats decreased hyperglycemia and bone fragility [292]. A similar trend was observed in dexamethasone-induced diabetic rats treated with D. polystachya extract [293].
The quest for novel drugs for the clinical treatment of diabetic complications such as peripheral neuropathy has led to the discovery of DA-9801, an ethanol extract of D. japonica, D. rhizoma and D. nipponica, as a potential therapeutic agent [294,295]. Peripheral neuropathy, a common disorder among diabetic patients resulting from malfunction of the peripheral nerves, is characterized by symptoms such as pain, numbness and chronic aberrant sensations, which often disrupt sleep and can lead to depression, thus affecting quality of life [238]. An investigation by Song et al. [296] into the inhibitory effects of DA-9801 on the transport activities of clinically important transporters showed that inhibitory effects observed in vitro did not translate into in vivo herb-drug interactions in rats. Transporters are critical in the absorption, distribution and elimination of drugs, thus modulating efficacy and toxicity [296], and this prediction of interactions is vital in clinical studies and the drug development process. Jin et al. [297] and Moon et al. [238] further buttressed the potential therapeutic application of DA-9801 for the treatment of diabetic peripheral neuropathy: these studies showed that DA-9801 reduced blood glucose levels and increased the response latency to noxious thermal stimuli. It is anticipated that DA-9801 can be used as a botanical drug for the treatment of diabetic neuropathy. Sato et al. [298] demonstrated that the natural product diosgenin remains a candidate for acute improvement of blood glucose levels in type I diabetes mellitus, and Omoruyi [299] supports the use of D. polygonoides extracts in the clinical management of metabolic disorders such as diabetes.
Anti-Obesity and -Hypercholesterolemic Activities of Yam
Jeong et al. [300] reported the anti-obesity effect of D. oppositifolia extract in diet-induced obese mice. In their study, female mice were given a high-fat diet together with 100 mg/kg of n-butanol extract of D. oppositifolia for 8 weeks. The authors observed a significant decrease in total body weight and parametrial adipose tissue weight, as well as decreases in total cholesterol, triglycerides and low-density lipoprotein (LDL) cholesterol in the blood serum of female mice ingesting the D. oppositifolia n-butanol extract. The observed effect of the extract is mediated through suppression of feeding efficiency and of dietary fat absorption [300]. An earlier study, which evaluated the anti-obesity effect of a methanol extract of D. nipponica Makino powder, reported the effectiveness of the extract against body and adipose tissue weight gains induced in rodents by a high-fat diet [301]. The anti-obesity potential of D. steriscus tuber extracts obtained by a solvent cold percolation method has also been reported [302]. When compared with a commercially available anti-obesity medication (Herbex), D. steriscus tuber extract showed significantly higher anti-obesity activity. The authors attributed this result to bioactive compounds of D. steriscus tubers that can act as lipase and α-amylase inhibitors and are thus useful for the development of anti-obesity therapeutics [302].
Extracts of Dioscorea species have been used in the clinical management of other metabolic disorders, such as abnormal cholesterol levels. Several animal studies have shown the antilipemic effects of sapogenin- and diosgenin-rich extracts of Dioscorea species such as D. polygonoides (Jamaican bitter yam) on hypercholesterolemic animals such as mice and rats, resulting in reduced blood cholesterol concentrations [303]. Another study, which investigated the effect of D. alata L. on mucosal enzyme activities in the small intestine and on lipid metabolism of adult Balb/c mice, showed consistent improvement in the cholesterol profile of the liver and plasma of mice fed 50% raw lyophilized yam for 21 days [304]. The authors also observed increased fecal excretion of neutral steroids and bile acids, whereas fat absorption was reduced in mice fed the 50% yam diet. Yeh et al. [305] observed a significant reduction in plasma triglycerides and cholesterol in male Wistar rats consuming a high-cholesterol (10%) diet supplemented with 40% D. alata.
Yam as an Agent for Degenerative Disease Management
In an animal study using Swiss albino mice with streptozotocin-induced dementia, D. bulbifera tubers were reported to preserve memory, serving as a protective, curative and restorative agent [209]. The authors further highlighted the possible delay of onset of neurodegenerative diseases, as well as their mitigation, through ingestion of dietary polyphenols that confer protection against oxidative stress and neurodegeneration. Likewise, the neuroprotective effect of D. pseudojaponica Yamamoto in senescent mice induced by D-galactose indicated the potential usefulness of yam in the treatment of cognitive impairment, an effect partly mediated via enhancement of endogenous antioxidant enzyme activities [306]. The steroidal saponin diosgenin, one of the major bioactive compounds in yam, was found to aid the restoration of axonal atrophy and synaptic degeneration, thus improving memory dysfunction in transgenic mouse models of Alzheimer's disease [307]. Diosgenin administration prior to surgery in rat models significantly reduced the death rate while improving impaired neurological functions, establishing the potential cerebral protection of diosgenin against transient focal cerebral ischemia-reperfusion (I/R) injury [254]. In an in vivo study using mice, the same group of authors reported that diosgenin enhanced neuronal excitation and memory function in normal mice, mediated by 1,25D3-MARRS (membrane-associated, rapid-response steroid-binding receptor)-triggered axonal growth [308]. This supports the view of diosgenin as a new category of cognitive enhancer with the potential to reinforce neuronal networks, and it formed the basis for the use of humans as test subjects. Tohda et al. [309] conducted a Japanese version of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) test on 28 healthy volunteers (aged 20 to 81 years) under diosgenin-rich yam extract administration, with findings confirming a significant improvement in cognitive function. However, limitations of this study include the small sample size, the non-randomized selection of volunteers (the sampled adults were all well-educated Asians) and the fact that the daily dietary intake and physical activity levels of the individuals were not assessed.
Animal studies have reported the anti-osteoporosis potential of yam [310-312]. Extracts of D. alata leaves and roots examined on mouse spleen and bone marrow cells stimulated proliferation of both cell types, significantly increasing cell concentrations [312]. Another study on ovariectomized female BALB/c mice revealed that 2 weeks of feeding with D. alata powder prevented loss of bone mineral density and improved bone calcium status without stimulating uterine hypertrophy [313]. Similarly, Han et al. [311] investigated the in vivo effect of ethanol extract of D. spongiosa on glucocorticoid-induced osteoporosis in rats. Their findings revealed that the D. spongiosa extract inhibited glucocorticoid-induced osteoporosis and improved bone tissue metrology, BMC, BMD and biomechanical indicators. In addition, the authors observed repair of the microscopic changes in the cancellous and trabecular bone. Based on changes in the biochemical indexes, these effects were linked to the ability of the yam extract to inhibit excessive bone transition and bone resorption [311]. Other health effects of Dioscorea species on degenerative diseases such as hypertension and osteoarthritis have been reported [314-316].
Yam as an Agent for the Management of Menopausal Symptoms
Menopause is associated with a decline in the estrogen level produced by the ovaries, resulting in several side effects including mental changes, hot flashes, skin aging, osteoporosis and cardiovascular problems [317]. Hormone replacement therapies (HRT) such as estrogen and progesterone replacement have been deployed to address these challenges, but they carry side effects of their own [318]; HRT may predispose users to the development of degenerative diseases such as ovarian cancer [319]. Rossouw et al. [320] reported an increased incidence of coronary heart disease and breast cancer among women on estrogen and progestin therapy, hence the need for alternative treatment options that are as effective yet less detrimental. Many traditional systems have implemented treatment plans using a number of plant species for the management of physiological changes associated with menstruation, conception, pregnancy, birth, lactation and menopause [321]. There is reported evidence that Dioscorea species, while serving as nutritional supplements, offer medicinal properties and relief of menopausal symptoms [322]. A Taiwanese study examined the efficacy of D. alata in the treatment of menopausal symptoms in 50 women [323]. The authors recorded an evident improvement in the assessed parameters, including feeling tense/nervous or excitable, insomnia and musculoskeletal pain, as well as a positive effect on the blood hormone profile among women who received D. alata. Similarly, Wu et al. [324] found that replacing two-thirds of staple food with yam for 30 days positively influenced the antioxidant status, lipids and sex hormones of 22 apparently healthy postmenopausal women.
Chinese anti-menopausal medicine formulas containing rhizomes of D. oppositifolia L. have shown the potential to regulate serum levels of estrogen, follicle-stimulating hormone and luteinizing hormone, thereby alleviating some side effects in post-menopausal women [325]. This is in line with the study by Lu et al. [244], whose results support the use of D. oppositifolia in Chinese medicine for easing menopausal disorders. Proteins isolated from D. alata, D. zingiberensis and D. oppositifolia showed the potential to upregulate the translational levels of estrogen receptor beta, thus possibly reducing the risk of ovarian cancer [244]. D. collettii var. hypoglauca has been implicated in the production of the herbal formula feng bei bi xie, used primarily for the treatment of cervical carcinoma, which is prevalent among aging women [326]. In Central America, patients with blood stasis and anemic conditions are treated with a decoction obtained from the rhizomes of D. bartletti [207], while in Latin American communities, decoctions are used to ameliorate the pains of childbirth, painful menstruation, ovarian pain and vaginal cramping [327]. The diosgenin content of yam has made Dioscorea species major feedstocks for commercial progesterone production, used in the treatment of menopausal hot flashes [207]. Higdon et al. [328] reported that oral administration to female Sprague Dawley rats increased uterine weight, vaginal opening and vaginal cell proliferation and reduced bone loss. This estrogenic mechanism is consistent with the findings of Michel et al. [207], who reported mild in vitro binding affinity for estrogen alpha and beta receptors in their test models. Although an in vitro bioassay does not necessarily correspond to in vivo efficacy, the data implicate Dioscorea species in the management of issues related to women's reproductive health.
Yam as Pharmaceutical Excipient
Although much attention has been paid to the importance of yam starch in food, limited attention has been given to its other potential uses, such as serving as an excipient for the pharmaceutical industry.
Zuluaga et al. [329] highlighted that yam starch could be used as a pharmaceutical excipient for tablet and capsule formulation, comparable to potato starch, with further potential as a thickening agent. Nasipuri [330] reported yam starch as an efficient binder/disintegrant in tablet formulations containing both soluble and insoluble organic medicinal substances. Studies have shown that D. dumetorum and D. oppositifolia starches are highly compressible and form tablets with acceptable crushing force. Both species possess small granule size, large specific surface area, volume-surface mean, surface-number mean and spherical symmetry. These qualities imply better performance as an excipient, especially with respect to processing and homogeneity of mixes, than starches from D. alata and D. rotundata, which have larger granules and high amylose content [331,332]. However, under high compression pressures, D. rotundata and D. alata starches can be used for tablet formulations where faster disintegration and dissolution are desired [332,333].
Conclusions and Future Perspectives
Dioscorea species provide safety nets as foods and as conventional and unconventional medicine during famine and periods of scarcity. Yam constituents such as flavonoids, diosgenin, dioscorin, tannins, saponins and total phenols place them as a good food source of bioactive compounds for consumers [23]. However, exploitation of the rich diversity within the Dioscorea genus may lead to extinction if proper steps are not taken in terms of advocacy and conservation. This would directly result in the loss of a potential source of active compounds for the pharmaceutical industry as well as a huge genetic loss with respect to crop improvement and breeding. Rational and sustainable use of the array of wild species is highly encouraged. Sensible utilization of this diversity entails understanding species availability, ease of access, possibilities for preservation and replanting, and establishment of priorities with respect to optimal pharmaceutical use [334]. Plant-derived drugs will receive more acceptance in modern medicine and health systems if they can be shown to be efficacious, safe and quality controlled, as in the case of synthetic products [335]. Understanding the pharmacologically active compounds within Dioscorea diversity will assist in the standardization and analysis of formulations [334]. The gaps in knowledge of chemical composition, ecological factors, the geographical spread of diversity and environmental impacts as they relate to chemical biodiversity and plant variability need to be urgently addressed.
Investigating the medicinal potential of over 600 wild and domesticated Dioscorea species requires a multidisciplinary approach involving indigenous peoples who have a thorough grasp of these plants, while adopting a well-thought-out strategy that accounts for society, health, conservation and the sustainable use of species biodiversity. Numerous synthetic contraceptives and steroid-related hormonal medications are made from dioscin. Unfortunately, the global need for dioscin is around 8000 tons, while present production stands at about 3000 tons [336]. Increasing pressure from pharmaceutical industries is placing a heavy demand burden on this vital resource, making it a scarce commodity going forward. Another facet of this problem is the ineffectiveness of methods for extracting bioactive compounds: reported rates of extraction and separation are generally low, and only the extraction of diosgenin has been accorded priority [337]. It is imperative that efficient strategies to concentrate diosgenin from its natural sources are optimized [144], taking other compounds into consideration and eliminating waste. It is also important to develop carrier systems such as nanoparticles for the targeted delivery of yam bioactive extracts and compounds, thus improving efficacy while reducing side effects [144]. The need to standardize analytical protocols toward achieving optimal extraction should not be overlooked. Although extraction methods such as Soxhlet extraction, maceration and hydrodistillation have been extensively applied to bioactive compounds, newly developed methods shown to be cleaner, higher yielding and more efficient, such as the in situ pressurized biphase acid hydrolysis extraction reported by Yang et al. [338], should be explored. Recent technological advances in chromatographic techniques such as liquid chromatography-mass spectrometry (LC-MS) and high-resolution mass spectrometry (HR-MS) should be utilized for identifying and quantifying the various compounds in yam species.
In addition, sophisticated instrumentation such as HR-MS should be applied to uncover potentially beneficial unknown compounds in the yam crop.
Further in vivo studies are highly encouraged with respect to oxidative stress and antioxidant activities using purified compounds isolated from yam species. For most developing and poor countries, it is imperative to diversify into functional foods, including those from Dioscorea species. These can be consumed on a regular basis, serving both nutritional and medicinal purposes. These plants, often found in the wild, can be targeted for increased production and conservation. The local populace should be enlightened about their consumption value, which can directly reduce the cost of health care while improving diet. However, due to the huge demand from pharmaceutical industries and agencies, most wild Dioscorea species are threatened in their natural habitat. The indigenous knowledge and therapeutic potential of most Dioscorea species are eroding fast, and the situation worsens with increasing urbanization, industrialization and over-exploitation. Efforts toward developing comprehensive information on therapeutic use, dosage and the chemical compounds implicated in the treatment of diseases should be accelerated. Most significant is ascertaining the safety level and toxicity profile of the compounds found in undomesticated yams. This will help ease the burden on rural communities that depend solely on these traditional medicines as health remedies. For instance, owing to the high cost of steroid-based pharmaceuticals in the management of women's health disorders, reliance on herbal remedies is the preferred option for treating hormonally regulated conditions in most impoverished communities. Thus, there is a need to provide a sufficient empirical scientific basis to support the traditional use of the diversity inherent in the yam crop for hormone-therapy-related treatment.
Recently, poisoning cases have occasionally been reported in association with the rising popularity of Dioscorea prescriptions in clinical use. Chronic and excessive exposure to D. bulbifera tuber has caused liver injury in some patients [339]. In vivo and in vitro experimental studies have also demonstrated that D. bulbifera tuber can induce hepatotoxicity [340,341], increase relative liver weight and cause death [214]. It is noteworthy that concentrated herbal preparations of Dioscorea species abound, most with little or no information about their exact composition and required dosage, thereby increasing the health risks to potential consumers [214]. It is therefore imperative to establish estimated toxicity values for Dioscorea species toward their efficient utilization in food-based clinical management. In view of this, further investigation is required, and relevant government and donor agencies need to invest in initiatives that support this research direction. A promising option is the application of CRISPR-Cas (clustered regularly interspaced short palindromic repeats-CRISPR-associated) mediated genome editing to remove toxic or antinutritive compounds from yams. This precision breeding technique has the potential to alter one or more pathways or traits in a given cultivar more efficiently than conventional breeding and without disturbing the complement of traits for which the cultivar is preferred. The current outlook for non-transgenic genome-edited crops is that they may avoid the heavy regulatory burden placed on transgenic "genetically modified" plants [342]. The optimization of protocols for genome editing in yam is well underway [343,344], and the availability of more and better yam genome assemblies is proceeding apace [344-347]. An understanding of the genetic regulation of desirable nutritional and pharmacological compounds can also be leveraged to increase their amounts, especially with the increasing variety of possible CRISPR-Cas-based manipulations. Genome editing is part of a bright future for scientists working to improve the nutritional quality of yams while making consumption and clinical use safer.
Scientific investigations into the clinical use of Dioscorea species, with a focus on reducing the risk of ovarian cancer and treating menopause complications and female ageing diseases, need to concentrate on characterizing the bioactive compounds and proteins isolated from diverse species. This includes amino acid sequencing, in vivo pharmacokinetic studies and elucidation of modulating mechanisms, which will help in the establishment of multi-target anti-menopausal drug screening toward developing more effective drug candidates for future use [244]. Furthermore, research to support the use of Dioscorea as a therapeutic agent against asthma, urinary tract infections and bladder-related complications, rheumatism, arthritis, pelvic cramps and so forth needs to be promoted. Most studies have been limited to in vitro and animal models; it is very important to gain further insight into the effects of yam on degenerative diseases while considering feasibility and long-term effects in humans. The limited or absent data on safety, toxicity and efficacy of use as contraceptives in human health, during pregnancy, lactation and childhood is an issue of concern. The paucity of data on the safety of diosgenin and other bioactive compounds suggests that further investigation should focus on development, toxicity, neurotoxicity and allergenicity. While preclinical and mechanistic findings tend to support the use of diosgenin as a novel, multitarget-based chemopreventive and therapeutic agent against different forms of cancer [288], research should also focus on developing and evaluating standards of evidence. On a commercial scale, the introduction of Dioscorea extracts into the growing international market of natural herbs is highly encouraged. The Mexican experience [348] of biodiversity loss of wild Mexican yams should form the basis for conscious, sustainable natural resource management, especially in Africa, as mentioned earlier.
"year": 2020,
"sha1": "4c3e0e6a801723a4a6686eb63f84c558d1cd3b7a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/foods9091304",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "18f5313969f0363179311ba1307eea9347504f25",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Business",
"Medicine"
]
} |
Variation in the use of observation status evaluation in Massachusetts acute care hospitals, 2003–2006
Background Observation evaluation is an alternate pathway to inpatient admission following Emergency Department (ED) assessment. Aims We aimed to describe the variation in observation use and charges between acute care hospitals in Massachusetts from 2003 to 2006. Methods Retrospective pilot analysis of hospital administrative data. Patients discharged from a Massachusetts hospital between 2003 and 2006 after an observation visit or inpatient hospitalization for six emergency medical conditions, grouped by the Clinical Classification System (CCS), were included. Patients discharged with a primary obstetric condition were excluded. The primary outcome measure, the "Observation Proportion" (pOBS), was the use of observation evaluation relative to inpatient evaluation (pOBS = nObservation / (nObservation + nInpatient)). We calculated pOBS and descriptive statistics of use and charges by hospital for each condition. Results From 2003 to 2006 the number of observation visits in Massachusetts increased 3.9% [95% confidence interval (CI) 3.8% to 4.0%], from 128,825 to 133,859, while inpatient hospitalizations increased 1.29% (95% CI 1.26% to 1.31%), from 832,415 to 843,617. Nonspecific chest pain (CCS 102) was the most frequently observed condition, with 85,843 observation evaluations (16.3% of the total). Observation visits for nonspecific chest pain increased 43.5% from 2003 to 2006. Relative observation utilization (pOBS) for nonspecific chest pain ranged from 25% to 95% across hospitals. Wide variation in hospital use of observation and in charges was seen for all six emergency medical conditions. Conclusions There was wide variation in the use of observation across six common emergency conditions in Massachusetts in this pilot analysis. This variation may have a substantial impact on hospital resource utilization. Further investigation into the patient-, provider- and hospital-level characteristics that explain the variation in observation use could help improve hospital efficiency.
Introduction
Observation care is an alternative to inpatient admission designed for evaluation and management of patients during a short stay, usually defined as between 6 and 24 hours [1]. Interest in observation medicine has increased over the past decade, and the variety of clinical conditions considered suitable for observation has expanded from initial pathways for asthma and chest pain to a wider set of conditions such as transient ischemic attack and syncope [1,2]. Despite growing clinical research on observation protocols, there are no published data describing the utilization and variation of observation care for common emergency medical conditions. Inpatient hospitalization provides intensive clinical care but is the most expensive, capacity-limited and potentially dangerous location for health care delivery [3,4]. Inpatient hospitalization rates vary across the US in patterns that cannot be explained solely by patients' medical history or clinical condition, but do correlate with health system factors, such as the supply of inpatient beds [5]. Given that half of all inpatients are admitted through hospital EDs, emergency physicians play a major role in the utilization of inpatient hospitalization. Chest pain alone accounted for nearly 7 million ED visits and 800,000 hospitalizations in 2006, making it the second most frequent ED chief complaint and hospital discharge diagnosis [6]. From 1997 to 2005 the costs of inpatient care for nonspecific chest pain grew 181% to over $10 billion annually [7]. Appropriate use of observation care can reduce the use of inpatient hospitalization as validated observation protocols have demonstrated similar or better clinical outcomes at lower costs for a number of common emergency medical conditions [1,2]. Despite this, there is a dearth of research on the use of observation for emergency medical conditions across systems of care.
We aimed to describe the variation in use of observation evaluation and observation charges between acute care hospitals in Massachusetts from 2003 to 2006. Our primary objective was to describe the frequency of observation visits relative to inpatient hospitalization for common emergency medical conditions. Additionally, we aimed to describe the potential reduction in hospital charges associated with use of observation services in place of inpatient hospitalization for nonspecific chest pain. We hypothesized that there is clinically important variation in the use of observation evaluation for common emergency medical conditions across hospitals.
Study design, setting and selection of participants
This was a retrospective analysis of hospital administrative data. All Massachusetts acute care hospitals submit standardized utilization data for each inpatient hospitalization, ED and observation visit to the Department of Health and Human Services Division of Health Care Finance and Policy (DHCFP). Hospital-level summary utilization data files are available to the public without patient identifiers for both inpatient and observation evaluations. Observation visits are defined as discharges from observation status with a charge for observation. Inpatient hospitalizations are defined as hospital discharges with a recorded charge for inpatient stay. The two definitions are mutually exclusive; patients admitted to inpatient from observation status are defined as inpatients.
Inclusion criteria
Patients with an observation visit or an inpatient hospitalization in a Massachusetts acute care hospital from 2003 to 2006 were included. Patients evaluated and discharged from the ED without a subsequent observation visit or inpatient hospitalization were not included. We included the five most frequently observed emergency medical conditions, grouped by Clinical Classification System (CCS; Table 1) [8]. Additionally, we included congestive heart failure (CCS 108) in the analysis because prior to 2007, it was one of three clinical conditions, asthma (CCS 128) and nonspecific chest pain (CCS 102) being the others, designated for payment by the Centers for Medicare and Medicaid Services, and therefore was hypothesized to have high observation utilization.
Exclusion criteria
Of the eight most frequently observed CCS conditions, we excluded the three unlikely to represent common emergency observation pathways. Patients with primary obstetric diagnoses were excluded (other complications of pregnancy, CCS 181; early or threatened labor, CCS 184), as most such patients are seen in labor and delivery units rather than EDs. Patients with cardiac dysrhythmias (CCS 106) were excluded, as many such patients undergo observation outside of the ED for elective procedures. Hospitals with fewer than 20 observation visits for a CCS condition in a calendar year had that year's data excluded from hospital-level analysis, as samples smaller than 20 do not provide stable estimates [9].
Data collection and processing
We obtained summary utilization files for all Massachusetts acute care hospitals that reported to the DHCFP from 2003 to 2006. Between 2003 and 2006, the total number of hospitals reporting inpatient data decreased from 77 to 74, while the number reporting observation visits declined from 74 to 70, both as a result of hospital mergers or closures. The DHCFP checks each hospital's data for integrity, cleans the data and applies validated algorithms to group diagnoses into clinical conditions, including the CCS and diagnosis-related group (DRG). The CCS is a validated algorithm that groups primary International Classification of Diseases, version 9, Clinical Modification (ICD-9) discharge diagnoses into 285 clinical categories [8]. DRG 143, comprising inpatient discharge ICD-9 codes associated with chest pain without a clear cardiac, toxic or operative etiology, was used to compare charges between observation and inpatient care [10]. The DHCFP links observation visit codes to inpatient hospitalization codes to ensure appropriate comparison of frequency data.
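The grouping step amounts to a lookup from each discharge's primary ICD-9 code to its CCS category. The sketch below illustrates the idea in Python; the four-entry mapping is a toy stand-in for the full published AHRQ crosswalk, which real software would load from the official mapping files.

```python
# Toy stand-in for the AHRQ CCS crosswalk: primary ICD-9 code -> CCS category.
# 786.50/786.59 are nonspecific chest pain (CCS 102), 428.0 is CHF (CCS 108),
# 493.90 is asthma (CCS 128); the real crosswalk covers all ICD-9 codes.
CCS_MAP = {"786.50": 102, "786.59": 102, "428.0": 108, "493.90": 128}

def group_discharges(discharges):
    """Attach a CCS category to each (hospital, icd9) discharge record,
    dropping codes absent from the toy mapping."""
    return [(hosp, icd9, CCS_MAP[icd9])
            for hosp, icd9 in discharges if icd9 in CCS_MAP]

records = [("A", "786.50"), ("A", "428.0"), ("B", "786.59"), ("B", "V27.0")]
print(group_discharges(records))  # the V27.0 record is dropped
```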
Outcome measures
Our primary outcome was the "Observation Proportion" (pOBS), defined as the percent of patients who had an observation visit among all patients with an observation visit or inpatient hospitalization for a CCS condition: pOBS = nObservation / (nObservation + nInpatient). Secondary outcomes included total observation and inpatient charges for each CCS condition at the hospital level.
Data analysis
We summed the number of observation visits for each of the 20 most frequent CCS conditions and ranked conditions by frequency. We then calculated pOBS for each of the six included CCS conditions (defined above) for each Massachusetts hospital. To illustrate variation, we report the median hospital's observation and inpatient visit census, pOBS and the range across hospitals.
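A minimal sketch of this computation, assuming the summary files have already been loaded into a per-hospital, per-condition table of visit counts (the column names and counts here are illustrative, not the actual DHCFP schema):

```python
import pandas as pd

# Illustrative records: (hospital, CCS condition, observation visits, inpatient visits)
df = pd.DataFrame(
    [("A", 102, 480, 520), ("B", 102, 950, 50), ("C", 102, 250, 750),
     ("A", 108, 40, 360), ("B", 108, 120, 280), ("C", 108, 20, 380)],
    columns=["hospital", "ccs", "n_obs", "n_inpt"],
)
df["pOBS"] = df["n_obs"] / (df["n_obs"] + df["n_inpt"])

# Median and range of pOBS across hospitals for each condition
summary = df.groupby("ccs")["pOBS"].agg(["median", "min", "max"])
print(summary.round(2))
```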
We compared the number of observation and inpatient visits by year using the Mantel-Haenszel test for trend. We illustrate relative observation utilization (pOBS) for nonspecific chest pain stratified by several hospital characteristics: teaching hospital status (defined by membership in the American Association of Medical Colleges' Council of Teaching Hospitals or serving as the home institution of an Accreditation Council for Graduate Medical Education-accredited residency program) and hospital size (defined by bed count as small, medium or large per the Healthcare Cost and Utilization Project's definition for the National Inpatient Sample) [11].
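A test for trend in a proportion across ordered years can be illustrated with a hand-rolled Cochran-Armitage statistic, a close relative of the Mantel-Haenszel trend test named above; the yearly counts below are invented for demonstration, and scipy is used only for the normal tail probability.

```python
import numpy as np
from scipy.stats import norm

def trend_test(successes, totals, scores=None):
    """Cochran-Armitage test for a linear trend in proportions
    across ordered groups (e.g., calendar years)."""
    x = np.asarray(successes, float)
    n = np.asarray(totals, float)
    s = np.arange(len(x)) if scores is None else np.asarray(scores, float)
    p = x.sum() / n.sum()                      # pooled proportion
    num = np.sum(s * (x - n * p))              # trend statistic
    var = p * (1 - p) * (np.sum(n * s**2) - np.sum(n * s) ** 2 / n.sum())
    z = num / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))              # two-sided p-value

# Hypothetical: observation visits out of (observation + inpatient), 2003-2006
obs = [12000, 13100, 14600, 17200]
tot = [52000, 52400, 53100, 54000]
z, p = trend_test(obs, tot)
print(f"z = {z:.2f}, two-sided p = {p:.2g}")
```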
For nonspecific chest pain, we calculated the difference between average charges for an inpatient hospitalization and an observation visit by CCS condition. We then modeled potential savings, in charges and as a percent of charges, if all hospitals had a minimum pOBS equal to that of the current median hospital or the current 75th-percentile hospital.
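The savings model reduces to simple arithmetic: for each hospital whose pOBS falls below the chosen floor, shift the deficit of visits from inpatient to observation status and value each shifted visit at the inpatient-observation charge gap. The sketch below uses the median charges quoted later in the text ($7,011 inpatient, $5,438 observation) but invented per-hospital visit counts.

```python
import numpy as np

charge_gap = 7011 - 5438  # median inpatient minus observation charge, 2005 ($)

# Invented per-hospital chest-pain counts: (observation visits, inpatient visits)
hospitals = [(300, 900), (800, 400), (150, 450), (600, 200), (500, 500)]
n_obs = np.array([h[0] for h in hospitals], float)
n_inpt = np.array([h[1] for h in hospitals], float)
pobs = n_obs / (n_obs + n_inpt)

def modeled_savings(floor: float) -> float:
    """Charges avoided if every hospital's pOBS is raised to at least `floor`."""
    target = np.maximum(pobs, floor)
    shifted = (target - pobs) * (n_obs + n_inpt)  # visits moved to observation
    return float(shifted.sum() * charge_gap)

for label, floor in [("median", np.median(pobs)), ("75th pct", np.percentile(pobs, 75))]:
    print(f"floor = {label} ({floor:.2f}): savings ~ ${modeled_savings(floor):,.0f}")
```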
All p-values were two-tailed, and the α-level was 0.05. Data analysis was performed with SAS 9.1 (SAS, Cary, NC). The study was considered exempt from review by the hospital IRB.
Results
Across Massachusetts, hospitals' pOBS rates varied by clinical condition (Table 1). Chest pain had the widest variation, from 25% to 95%, while congestive heart failure had the least variation, 5% to 35%. Figure 1 illustrates the variation in pOBS for nonspecific chest pain across Massachusetts hospitals regardless of teaching status or size.
From 2003 to 2006, observation visits in Massachusetts increased 3.9% (95% CI 3.8% to 4.0%), from 128,825 to 133,859, while inpatient hospitalizations increased 1.29% (95% CI 1.26% to 1.31%), from 832,415 to 843,617.
From 2003 to 2006, there was a 43.5% (95% CI 43.1% to 43.8%) increase in observation visits for nonspecific chest pain, while inpatient hospitalizations declined by 6.3% (95% CI 6.1% to 6.5%). Over the same period, the relative frequency of observation (pOBS) increased for nonspecific chest pain and syncope, decreased for asthma and for fluid and electrolyte disorders, and did not change for CHF and abdominal pain (Table 1; P<0.001).
Observation charges for the median hospital are shown for each clinical condition in Table 1. We modeled the impact of the variation in charges for nonspecific chest pain evaluation across Massachusetts hospitals in Table 2. The median hospital's charge for an observation visit in 2005 was $5,438. Of the 74 hospitals reporting observation data for nonspecific chest pain in 2005, inpatient charge data were also available for 59 (median $7,011), a median charge 28% higher than for observation. The subset of hospitals analyzed in Table 2 therefore comprised the 59 hospitals reporting both observation and inpatient charge data.
Discussion
Hospital and ED utilization have received intense scrutiny as significant drivers of overall health care costs, yet the use of observation services has been ignored [12]. We analyzed statewide, summary-level administrative data to document variation in the use of observation visits and observation charges relative to inpatient hospitalizations across six common emergency medical conditions. We found that the use of observation services grew at 3.9%, more than inpatient hospitalization and less than ED visitation. These changes occurred while the population of Massachusetts was stable at 6.44 million over the study period [13]. Specifically, the utilization of observation services for the evaluation of nonspecific chest pain grew over 40% from 2003 to 2006, while the relative utilization of observation varied widely across hospitals, ranging from 25% to 95% of combined inpatient and observation visits. Although we were unable to account for patient-level characteristics, wide variation persisted after stratification of hospitals by teaching status and size. The impact of observation use on statewide health care costs is difficult to estimate from publicly available administrative data. We demonstrated two scenarios that estimate several million dollars in statewide savings to payers for chest pain, but this represents less than five percent of annual charges for chest pain. These estimates are based on hospital charges; the median inpatient charge was 28% higher than the median observation charge. Our estimate is conservative: hospitals disproportionately allocate overhead to outpatient units, so observation charges likely overstate costs, and accurate cost-to-charge ratios for observation are not available. Previous studies have documented larger differences in actual health care costs between inpatient and observation visits for chest pain, which would yield larger calculated savings [1,2]. A more detailed model that quantifies the effects of increased observation utilization on multiple conditions based on actual cost data might make a more compelling economic case for the efficiency of observation evaluation.
These pilot data provide an initial view of hospital observation use across one state. It is not clear whether the variation observed represents underuse, overuse or misuse. Future research should evaluate observation care in the context of inpatient hospitalization, emergency care and longer term patient outcomes. For example, with nonspecific chest pain, in addition to looking at observation use, one would need to look at ED discharge practices and proximate outcomes (e.g., 30-day missed acute myocardial infarction rate) to determine if the care pattern is safe and efficient. Although not directly studied, it is reasonable that an observation stay would be as safe as or safer than a similar inpatient admission as the duration in hospital is shorter and the quality of care has been shown to be equal or superior [1,2]. If observation care is not associated with worse in-hospital or proximate outpatient outcomes, then it may advance the value of health care delivery.
This pilot study has several limitations. First, we analyzed summary-level administrative data, which precluded risk adjustment for patients' clinical or sociodemographic characteristics. However, the wide variation in observation use is difficult to explain by a patient's clinical condition alone, as illustrated by the presence of wide variation between similar hospital types. Second, administrative data are created for billing rather than clinical or research purposes, and discharge codes are subject to biased coding patterns at the hospital or group level. For example, one ED or coding group may routinely diagnose patients with nonspecific chest pain as gastroesophageal reflux while another diagnoses chest pain. In the future, patient-level analysis and diagnosis-based sensitivity analyses should be performed to explore the effect of coding patterns on these findings. Third, we did not have a reliable method to determine whether a hospital had a dedicated ED observation unit or clinical pathways for observation on inpatient wards; these are likely associated with observation use. Finally, these data are from a single state, limiting their generalizability. Massachusetts is a good state for pilot observation research, as it was one of the first states to collect observation data and is one of fewer than ten states currently collecting such data.
Due to the large and increasing costs associated with inpatient hospitalization, CMS and private payers have adopted hospital utilization measures as markers of efficiency, specifically focusing on rehospitalization rates and emergency department recidivism [12]. As future efforts are directed at reducing variation in health care utilization in the name of efficiency, acute care hospitalizations will be a primary focus, and the ED will increasingly be recognized as a locus of control. The wide variation in observation use relative to inpatient admission for emergency medical conditions in Massachusetts is intriguing and suggests the potential for efficiencies in health care delivery that deserve further investigation.
"year": 2010,
"sha1": "e95326aa1d4293472c954e2bf673deeaa5318a03",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12245-010-0188-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c3f0061ef07a5841f6884140d59a5c3a79e9c09d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233449838 | pes2o/s2orc | v3-fos-license | The genome of New Zealand trevally (Carangidae: Pseudocaranx georgianus) uncovers a XY sex determination locus
Background The genetic control of sex determination in teleost species is poorly understood. This is partly because of the diversity of mechanisms that determine sex in this large group of vertebrates, including constitutive genes linked to sex chromosomes, polygenic constitutive mechanisms, environmental factors, hermaphroditism, and unisexuality. Here we use a de novo genome assembly of New Zealand silver trevally (Pseudocaranx georgianus) together with sex-specific whole genome sequencing data to detect sexually divergent genomic regions, identify candidate genes and develop molecular markers. Results The de novo assembly of an unsexed trevally (Trevally_v1) resulted in a final assembly of 579.4 Mb in length, with an N50 of 25.2 Mb. Of the assembled scaffolds, 24 were of chromosome scale, ranging from 11 to 31 Mb in length. A total of 28,416 genes were annotated after 12.8 % of the assembly was masked with repetitive elements. Whole genome re-sequencing of 13 wild sexed trevally (seven males and six females) identified two sexually divergent regions located on two scaffolds, including a 6 kb region at the proximal end of chromosome 21. Blast analyses revealed similarity between one region and the aromatase genes cyp19 (a1a/b) (E-value < 1.00E-25, identity > 78.8 %). Males contained higher numbers of heterozygous variants in both regions, while females showed regions of very low read-depth, indicative of the male-specificity of this genomic region. Molecular markers were developed and subsequently tested on 96 histologically-sexed fish (42 males and 54 females). Three markers amplified in absolute correspondence with sex (positive in males, negative in females). Conclusions The higher number of heterozygous variants in males combined with the absence of these regions in females supports an XY sex-determination model, indicating that the Trevally_v1 genome assembly was developed from a male specimen. This sex system contrasts with the ZW sex-determination model documented in closely related carangid species. Our results indicate a sex-determining function of a cyp19a1a-like gene, suggesting the molecular pathway of sex determination is somewhat conserved in this family. The genomic resources developed here will facilitate future comparative work, and enable improved insights into the varied sex determination pathways in teleosts. The sex marker developed in this study will be a valuable resource for aquaculture selective breeding programmes, and for determining sex ratios in wild populations. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-021-08102-2.
Introduction
The genetic basis of sex determination (SD) in animals has long fascinated researchers due to the relationship of this trait with reproduction and Darwinian fitness [1,2]. Traditionally, sex determination was assumed to be a relatively conserved trait across vertebrates. However, recent research on teleost fishes has shown that this is not the case, and that teleosts display a remarkable diversity in the ways sex is determined. These different mechanisms, which include male heterogamety (XY males, XX females), female heterogamety (ZZ males, ZW females), multiple sex chromosomes and sex-determining genes, environmental influences (temperature-dependent), epigenetic sex determination and hermaphroditism, have each independently originated numerous times [1,3,4]. The evolutionary lability of SD, and the corresponding rapid rate of turnover among different modes, makes the teleost clade an excellent model to test theories regarding the evolution of SD adaptations [5,6].
Teleosts consist of over 30,000 species, making them the largest group of vertebrates [7]. This species diversity corresponds to a high phenotypic diversity and an associated capacity for adaptation in physiological, morphological and behavioural traits. Reproductive systems vary widely, with strategies ranging from gonochorism to protandrous, protogynous and simultaneous hermaphroditism [8]. These reproductive strategies emerged independently in different lineages, demonstrating a polyphyletic origin. Looking across fish families and genera, the genetic basis of SD can be profoundly different, and sex can also be determined entirely by external factors, e.g. social structure or attainment of a critical age [9]. Importantly, it should be noted that for most fish species it is unknown how sex is genetically determined and what the genetic architecture of sex determination is (e.g. monogenic vs. polygenic).
The New Zealand silver trevally Pseudocaranx georgianus (hereafter referred to as trevally, Fig. 1), also known by its indigenous Māori name 'araara', is a teleost fish species of the family Carangidae. This family consists of approximately 30 genera which together contain around 151 species worldwide [10], yet SD has only been studied in a few species of this family. These studies have revealed that all of the carangid species studied to date are gonochoristic and that SD is genetically controlled [8,11,12], which indicates that individuals are either a genetic male or female. While this family is morphologically diverse, the number of karyotypes appears mostly conserved, and species are predominantly characterised by 2n = 48 acrocentric chromosomes, which is likely the ancestral state [13]. Studies on the sex determination systems in carangids appear to be limited; however, in Japanese amberjack (Seriola quinqueradiata) and golden pompano (Trachinotus ovatus), ZZ-ZW systems have been documented [14,15].
Trevally is a pelagic species and abundant in the coastal waters of Oceania, spanning from the coastal regions of the North Island and the top of the South Island of New Zealand to southern Australia [16][17][18]. The fish grows to a maximum length of 1.2 m and a weight of 18 kg, and can reach 25 years of age [19]. Their bodies are elongated, with the upper portion being bluish-silver, the lower portion silver, and the sides yellow-silver in colour (see Fig. 1 and [19]). They commonly school with size-similar individuals and forage on plankton and bottom invertebrates [16]. The species is highly sought-after for sashimi in Asia, and several countries are trying to establish aquaculture breeding programmes, e.g. [20]. Adults of this species are externally sexually monomorphic, as observed in other carangids [21,22]. Trevally have a firm musculature around their abdominal cavity, making phenotypic sexing difficult. Thus, sex can typically only be determined subsequent to lethal sampling or by gonopore cannulation to retrieve a gonadal biopsy. This technique, however, can only be applied to broodstock in the advanced stages of gametogenesis shortly before or during the reproductive season and can injure the fish. Sexual maturation takes 3-4 years in captivity, meaning that sex information can only be gathered following that stage. Hence, understanding the genetic basis of SD in trevally would allow the design of molecular markers to facilitate sexing of individuals early in life and in a less invasive way.
The overarching goal of this study was to identify the genetic underpinnings of SD in trevally. To achieve this, we (1) de novo assembled a reference genome and (2) identified sexually divergent genomic regions based on sequencing depth and variant detection using whole genome re-sequencing of male and female fish. Then, (3) candidate genes for SD were identified and (4) molecular markers were designed and validated using individuals sexed by gonadal histology. We discuss our findings about SD in this species and highlight the resulting applications, and compare them to other teleost species to draw general conclusions about SD in this group.
Materials and methods
Broodstock collection and rearing of F1 offspring
Trevally samples were collected from a founding (F0) wild-caught captivity-acclimated population and a captive-bred (F1) generation produced by The New Zealand Institute for Plant and Food Research Limited (PFR) in Nelson, New Zealand. All fish were maintained under ambient photoperiod and water temperatures of filtered flow-through seawater. Fish were fed daily to satiation on a diet consisting of a commercial pellet feed (Skretting and/or Ridely) supplemented with frozen squid (Nototodarus spp.) and an in-house mixed seafood diet enriched with vitamins. Details about the fish rearing conditions can be found in Supplementary Material 1: Fish rearing details.
In 2017, a single two-year-old F1 juvenile was sampled for the genome assembly (section Genome sequencing and assembly), while five additional fish were sampled to annotate the genome (tissues sampled: skin, white muscle, gill, liver, kidney, brain and heart) (Supplementary Material 2: RNA extraction for transcriptome sequencing). Three-year-old F1 individuals (n = 96) were lethally sexed and sampled in 2018 and used for validation of the sex marker.
Genome sequencing and assembly
Short-insert library preparation, sequencing, and assembly
High-quality DNA for the genome assembly was extracted from heart tissue as described in Supplementary Material 3: DNA extraction. Dovetail Genomics (Scotts Valley, CA, USA) was contracted to conduct the de novo sequencing project, which consisted of a short-insert library and two long-range libraries (Chicago and Hi-C). The Illumina short-insert library was prepared with randomly fragmented DNA according to the manufacturer's instructions. The library was sequenced on an Illumina HiSeq X platform using paired-end (PE) 150 bp sequencing. The data were trimmed for low-quality bases and adapter contamination using Trimmomatic, and Jellyfish [23] was used with in-house software to profile the short-insert reads at a variety of k-mer values (25, 43, 55, 85 and 109) to estimate the genome size and fit negative binomial models to the data. The resulting profiles suggested a k-mer size of 43 was optimal for assembly. The contigs were assembled into scaffolds using Meraculous [24], with a k-mer size of 43, a minimum k-mer frequency of 12, and the diploid nonredundant haplotigs mode.
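As a rough cross-check of this estimation step, the genome size can be approximated directly from a k-mer histogram with the standard total-k-mers/coverage-peak heuristic. The R sketch below assumes a Jellyfish histogram file; the file name and the simple peak search are illustrative stand-ins for the negative binomial fitting described above:

# Rough genome-size estimate from a Jellyfish k-mer histogram, e.g. produced by
#   jellyfish count -m 43 -s 1G -C -o mer_counts.jf reads.fastq
#   jellyfish histo mer_counts.jf > k43.histo
h <- read.table("k43.histo", col.names = c("depth", "count"))
h <- h[h$depth > 5, ]                        # drop the low-depth error tail
peak <- h$depth[which.max(h$count)]          # coverage peak of genomic k-mers
genome_size <- sum(as.numeric(h$depth) * h$count) / peak
genome_size / 1e6                            # size in Mb (~646 Mb was reported)

Dividing the total k-mer count by the depth of the main peak approximates the number of distinct genomic positions, which is why this heuristic tracks genome size.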
Chicago library preparation and sequencing
Second, following the de novo assembly with Meraculous, a Chicago library was prepared according to the methods described in Putnam et al. [25]. Briefly, ∼500 ng of high molecular weight genomic DNA was reconstituted in vitro into chromatin and subsequently fixed with formaldehyde. The fixed chromatin was then digested with DpnII, the 5′ overhangs were filled in with biotinylated nucleotides and free blunt ends were ligated. After ligation, crosslinks were reversed and the DNA was purified from any protein. The purified DNA was then treated to remove biotin that was not internal to ligated fragments, and the resulting DNA was sheared to ∼350 bp mean fragment size using a Bioruptor Pico. Sequencing libraries were prepared from the sheared DNA using NEBNext Ultra enzymes (New England Biolabs, Inc.) and Illumina-compatible adapters. The biotin-containing fragments were isolated using streptavidin beads before PCR enrichment of each library. The amplified libraries were finally sequenced on an Illumina HiSeq X platform using PE 150 reads to approximately 90X depth.
Dovetail Hi-C library preparation and sequencing (multiple libraries)
Third, a Dovetail Hi-C library was prepared from the heart tissue preserved in RNAlater following the procedures outlined in Lieberman-Aiden et al. [26]. This library was based on a genome-wide Chromatin Conformation Capture protocol using proximity ligation. Briefly, formaldehyde was used to fix chromatin in place in the nucleus, which was then extracted and digested with DpnII. The 5′ overhangs were filled with biotinylated nucleotides, and free blunt ends were ligated. After ligation, the crosslinks were reversed and the DNA was purified from remaining protein. Biotin that was not internal to ligated fragments was removed from the purified DNA, which was subsequently sheared to ∼350 bp mean fragment size using a Bioruptor Pico. The sequencing libraries were then prepared using NEBNext Ultra enzymes and Illumina-compatible adapters. Before PCR enrichment of the library, biotin-containing fragments were isolated using streptavidin beads. The resulting library was sequenced on an Illumina HiSeq X Platform using PE 150 reads to approximately 60X depth.
Assembly scaffolding with HiRise
To scaffold and improve the trevally de novo assembly, Dovetail staff input the Meraculous assembly, along with the shotgun reads, Chicago library reads, and Dovetail Hi-C library reads, into the HiRise pipeline [25] to conduct an iterative analysis. First, the shotgun and Chicago library sequences were aligned to the draft contig assembly using a modified SNAP read mapper (http://snap.cs.berkeley.edu). Second, the separations of Chicago read pairs mapped within draft scaffolds were analysed to produce a likelihood model for genomic distance between read pairs. This model was used to identify and break putative misjoins, score prospective joins, and make joins above a threshold. Finally, after aligning and scaffolding the draft assembly using the Chicago data, the Chicago assembly was aligned and scaffolded using the Dovetail Hi-C library sequences following the same method. After scaffolding, the short-insert sequences were used to close remaining gaps between contigs where possible.
Genome annotation
Automated gene models were predicted using the BRAKER2 pipeline v2.1.0 [28] with trevally RNA sequences and the trevally genome assembly as input. Gene and genome completeness were evaluated using BUSCO v3.0.2 [29] with the vertebrata_odb9 lineage set (containing 2586 genes). Functional annotations were assigned to the gene models using blastx [30] to search for similarities between the translated transcriptome gene-locus models and a peptide database comprising 88,504 peptide sequences of zebrafish Danio rerio and 39,513 peptide sequences of Seriola lalandi (downloaded from NCBI using E-utilities version 11.4, 7th September 2020). The results from these searches were merged with the species-specific genome-wide annotation for Danio rerio provided in the package org.Dr.eg.db [31], using Entrez stable gene identifiers [32] and Genbank accessions to annotate the BLASTX alignments of gene models. Common gene-locus identifiers (gene models g1 to g28000) from the blast reports were also used to match up the zebrafish and kingfish accession and description information.
Whole genome sequencing of sexed F0 broodstock
Sampling of the 13 remaining broodstock (of the original 21) took place during February 2017. Fin tissues (fin clips) were placed directly into chilled 96 % ethanol, heated to 80°C for 5 min within 1 h of collection, and then stored at -20°C. Total genomic DNA was extracted as described in Supplementary Material 3: DNA extraction. High quality DNA was used to create short-insert (300 bp) libraries (Illumina), which were sequenced by AGRF (PE reads, 125 bp long).
Whole genome sequence read alignment and variant detection
FASTQ files of reads belonging to the 13 sexed F0 broodstock were quality filtered using Trimmomatic v0.36 [33] with a sliding window size of 4, a quality cutoff of 15 and the minimum read length set at 50. Filtered FASTQ files were aligned to the reference genome Trevally_v1 using BWA-MEM v0.7.17 [34]. Aligned BAM files of two sequencing lanes per individual fish were merged using Samtools v1.7 [35]. Read groups were added and duplicates were removed from merged BAM files using Picard Tools v2.18.7, and sorted and indexed using Samtools. Variant calling was done on the whole cohort of 13 fish using freebayes-parallel v1.1.0 (https://github.com/freebayes).
Genome-wide detection of sex-linked variants
Two strategies were used for detecting sex-associated regions using the re-sequencing data from the 13 sexed broodstock (Supplementary Fig. 1). First, a read-depth variation approach was employed to detect regions where read-mapping is absent or reduced in one sex, as expected for reads deriving from the Y chromosome in males of XY species or from the W chromosome in females of ZW species. Second, a variant (SNPs or indels) state (homozygous or heterozygous) approach was employed, searching for regions characteristic of divergence between sex chromosomes (X versus Y, Z versus W). The heterogametic sex is expected to possess high frequencies of heterozygous variants in regions that have diverged between the sex chromosomes.
Alignments to scaffolds shorter than 3000 bp were excluded from bam files using an in-house BASH script with AWK. To detect regions with variable read-depths between males and females, for each sample, read-depth was calculated per base using Samtools v1.7 [32]. To determine sex-associated variation in read depth, mean depth was calculated for 1 kb bins for each sample, and t-tests (Welch Two Sample t-test, R v3.5.3) were conducted to test differences between means of males and females, for each 1 kb bin. P-values were converted into -log10P values and 1 kb windows with -log10P values greater than 2 were retained for plotting.
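As a concrete illustration of this binning-and-testing step, the R sketch below computes per-sample mean depth in 1 kb bins and a Welch t-test per bin. The input layout and the sample identifiers are assumptions, since the original in-house scripts are not published:

# Per-bin Welch t-tests of read depth between sexes; assumes "depth.tsv" merges
# the per-sample `samtools depth` outputs into columns: scaffold, pos, then one
# depth column per fish (the sample IDs below are hypothetical).
library(dplyr)
depth   <- read.table("depth.tsv", header = TRUE)
males   <- paste0("M", 1:7)
females <- paste0("F", 1:6)
binned <- depth %>%
  mutate(bin = floor(pos / 1000)) %>%                  # 1 kb windows
  group_by(scaffold, bin) %>%
  summarise(across(all_of(c(males, females)), mean), .groups = "drop")
pvals <- vapply(seq_len(nrow(binned)), function(i) {
  tryCatch(t.test(unlist(binned[i, males]),            # Welch test is the
                  unlist(binned[i, females]))$p.value, # t.test default in R
           error = function(e) NA_real_)               # constant bins -> NA
}, numeric(1))
binned$log10P <- -log10(pvals)
hits <- binned[which(binned$log10P > 2), ]             # windows kept for plotting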
To determine association of variant state with sex (heterozygous in one sex, homozygous in the other), a VCF file of all 13 samples was generated using vcftools v0.1.14 [36] and was converted into genotypes using vcfR v.1.8.0 [37] in R v3.5.0. Fisher's exact test (R v3.5.0) was used to test the association of sex with genotype states (usually 0/0, 0/1 or 1/1). P-values were converted into -log10P values, and variants that had states entirely associated with sex were retained for plotting. Whole genome plotting of regions of significant read depth variation and variant state was done using Circlize [38] and plotting of individual scaffolds was done using ggplot2 [39] in R v3.5.0.
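A minimal version of the variant-state association test can be written with vcfR, as sketched below; the VCF file name and the ordering of male and female samples are illustrative:

# Fisher's exact test of genotype state (het vs hom) against sex, per variant.
library(vcfR)
vcf <- read.vcfR("broodstock13.vcf.gz")              # hypothetical file name
gt  <- extract.gt(vcf)                               # "0/0", "0/1", "1/1", NA
sex <- setNames(c(rep("male", 7), rep("female", 6)), colnames(gt))
is_het <- matrix(gt %in% c("0/1", "0|1"),            # heterozygous calls
                 nrow = nrow(gt), dimnames = dimnames(gt))
log10P <- apply(is_het, 1, function(h) {
  tab <- table(factor(h, c(FALSE, TRUE)), sex[names(h)])
  -log10(fisher.test(tab)$p.value)
})
# Variants in absolute association with sex: heterozygous in every male and
# homozygous in every female (the pattern later observed on Chr21)
fully_assoc <- apply(is_het, 1, function(h)
  all(h[sex == "male"]) && !any(h[sex == "female"]))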
Identification of candidate genes related to sex determination
Teleost SD candidate genes were identified and compiled from publications from 1998 onwards using the search terms: sex determination, Pseudocaranx georgianus, Carangidae, Perciformes, teleost, and fish in combination with sex determination and sex genes, in Google Scholar (searched up to 1 October 2019). Sequences of candidate genes were downloaded from NCBI and used to query the trevally reference genome Trevally_v1 using blastn v2.2.25 [40], filtering for E-values < 1e-10, and alignment lengths and bit scores greater than 99. Gene models were developed based on sequence similarity at the peptide level using blastx and blastp against similar sequences of teleost fishes in the non-redundant protein database of NCBI [41]. To determine the class of the trevally sex-associated gene and its paralogues, searches of the Protein database at NCBI were made using cyp19a1a and cyp19a1b as search terms. Selections of ten peptides of each of these genes were used to generate a guide tree, along with human cyp1a1 as an outgroup, using the Clustal Omega web server at EMBL-EBI [42].
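For reference, filtering tabular blastn output with the thresholds quoted above takes only a few lines of R; the output file name is a placeholder and the columns are the standard -outfmt 6 fields:

# Filter blastn hits of the candidate SD genes against Trevally_v1
# (blastn -outfmt 6 writes these twelve standard columns).
cols <- c("qseqid", "sseqid", "pident", "length", "mismatch", "gapopen",
          "qstart", "qend", "sstart", "send", "evalue", "bitscore")
hits <- read.table("candidates_vs_trevally.tsv", col.names = cols)
keep <- subset(hits, evalue < 1e-10 & length > 99 & bitscore > 99)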
Sex phenotyping for marker development
For the development and validation of a molecular sex marker in trevally, gonadal tissues were collected from three-year-old F 1 individuals (n=96). In brief, fish were subjected to complete sedation and euthanasia by overdose in anaesthetic (> 50 ppm AQUI-S®; Aqui-S New Zealand Ltd, Lower Hutt, New Zealand) followed by cervical dislocation with a sharp knife.
A fragment of gonadal tissue was dissected and fixed in a solution of 4 % formaldehyde-1 % glutaraldehyde for at least 48 h at 4°C. Fixed samples were then dehydrated through an ethanol series before being embedded in paraffin (Paraplast, Leica Biosystems Richmond Inc, Richmond IL, USA). Serial sections cut to a thickness of 5 μm were obtained using a microtome (Leica RM2125RT, Leica Microsystems Nussloch GmbH, Germany) and stained in Gill 2 hematoxylin (Thermo Scientific Kalamazoo, MI, USA) and counterstained with eosin. Histological sections were examined under a light compound microscope (Olympus BX50) for the presence of oocytes or spermatogonia and photographed with a digital camera (Nikon DS-Ri2) to confirm the sex of each individual.
Sex marker development and validation
Fin clips were collected from the 96 individual F1 fish and placed directly into chilled 96 % ethanol, heated to 80°C for 5 min and then stored in a -20°C freezer. Total genomic DNA was extracted as described above. Three types of genetic markers were developed in the sex-linked regions. PCR primers were designed using the Primer3 v4.1.0 web application. Y-specific markers were designed using male sequences from regions entirely absent in females, so that PCR only amplifies the Y allele. Gene-based primers were designed with default parameters using the trevally ortholog of cyp19a1a from Seriola lalandi (HQ449733.1). PCR primers for High Resolution Melting (HRM) were designed around the sexually divergent SNPs, flanking each SNP with 100 bp on either side.
HRM markers were screened using the PCR conditions and mix described in Guitton et al. [43] using genomic DNA extracted from fin clips of these fish. Y-allele-specific and candidate gene-based markers were screened as sequence-characterized amplified region (SCAR) markers as described in Bus et al. [44]. PCR conditions were first tested on eight individual samples to verify PCR amplification and the presence (in males)/absence (in females) polymorphism, then screened on the population of 96 sexed fish.
Results
Genome sequencing and assembly
In total, 412,758,157 paired-end Illumina short reads were generated from an F1 unsexed trevally, of which 97.4 % were retained after trimming. K-mer analysis (k = 43) indicated a heterozygous SNP rate of 0.71 % and an estimated genome size of 646 Mb. The total input sequencing data pre-assembly was approximately 121 Gb, equivalent to 187.3× coverage.
The whole genome assembly yielded 2,006 scaffolds greater than 1 kb, for a total assembly size of 579.4 Mb (89 % of the estimated genome) and a scaffold N50 of 25.2 Mb. Of this total assembly, 574.8 Mb (99.2 % of the total assembly and 88.8 % of the k-mer estimated genome size) were assembled into 24 chromosome-size scaffolds ranging from 11 Mb to 31 Mb in length and corresponding to the expected karyotype of trevally (Table 1). The remaining scaffolds (<0.8 % of the total assembly) that could not be anchored to pseudochromosomes were smaller, ranging from 1 kb to 51.2 kb in size.
Repeat and gene annotation
A total of 12.8 % of the genome was masked for repeats. BUSCO analysis of the anchored Trevally_v1 genome yielded a complete BUSCO score of 92.4 % with 2.4 % being single copy and 27.0 % being duplicated copies (134 were fragmented and 61 missing). In total, 28,416 protein-coding gene models were detected.
Whole genome re-sequencing and detection of sex-determining regions
The number of paired reads from each of the 13 trevally F0 broodstock aligned to the reference genome ranged from 53,250,512 to 70,547,632, with a mean read number of 61,800,138. Read mapping rates ranged between 96.54 % and 97.31 %, with between 90.00 % and 90.74 % of reads properly paired. With the majority of trimmed reads being 125 bp and the reference genome size being 579,406,389 bp, the mean read depth for each fish was approximately 13X.
In total, 16,576,890 variants were detected, including 14,355,149 SNPs and 2,221,741 indels. A total of 572 of these variants showed absolute association with sex, with all individuals of one sex genotyping as heterozygous while all individuals of the other sex genotyped as homozygous. Five pseudochromosomes contain regions of variants whose state is associated with sex at densities higher than 10 variants per 5 kb window: Chr5, Chr7, Chr12, Chr21 and Chr0 (Fig. 2). Associations in Chr0 involved three unanchored scaffolds: scaffold_000353, scaffold_000374 and scaffold_001951, of 5647, 3469 and 4013 bp in size, respectively. Only the regions of Chr21 and Chr0 were associated with sex-associated read-depth variation, specifically tracts of zero read depth in females.
Close inspection of scaffold_001951 indicates that this scaffold contains a repetitive sequence that carries variants that are heterozygous in males and homozygous in females. However, while all females have zero read depth over major tracts of this scaffold, some males also showed a lack of read mapping over those same regions (Fig. 3). By contrast, scaffold_000353 shows major tracts of zero read depth in all females, while all males show consistent read mapping across the whole scaffold. Neither of these scaffolds shows any sequence similarity to known genes.
Chr5, Chr7 and Chr12 show short regions of association of variant state with sex. These regions include variants that are heterozygous in females and homozygous in males, a signal that would suggest a ZZ/ZW system, but further investigation suggested that these associations were spurious. Specifically, regions of 5.5 kb of Chr5 (8714673-8720250), 2.2 kb of Chr7 (13579401-13581609) and 6.7 kb of Chr12 (7693603-7700312) carry variants that are heterozygous in females and homozygous in males. These regions were investigated further for sex-associated variation in co-localised read-depth and for sequence similarity to known sequences. No read-depth variation was seen across these regions, and no significant similarity was detected between sequence from the regions of Chr5 and Chr7 and any known gene sequence (blastx; NCBI; nr peptide database, E-value = 0.001). The region of Chr12 showed sequence similarity to homologues of TELO2-interacting protein 1, a protein that is part of the TTT complex involved in DNA damage response signalling [46], but no reports of any involvement in sex-determination were found. Taken together, as these regions of signal are not supported by additional evidence of read-depth variation or sequence similarity to genes known to be associated with sex-determination, they are most parsimoniously explained as false discoveries.
Table 1 The 24 anchored trevally chromosomes following [45], with the corresponding scaffold names and their respective lengths (bp). Note that scaffold_001800 is located on Chromosome 21, making this the sex chromosome.
Fig. 2 Genome-wide association of sex determining regions in the trevally genome. Circos plot of -log10 P values above 2, from t-tests of mean read depth variation within 1 kb windows of whole genome sequence between male and female trevally (outer track, red), and of densities of variants whose state (heterozygous or homozygous) is fully sex-associated (inner track, black). Chr0 is composed of unanchored scaffolds. The regions of high densities of fully-associated variants that correspond to peaks of high -log10 P values of read-depth variation at the start of Chr21 and within Chr0 are from variants which are heterozygous in males and homozygous in females, while those of Chr5, Chr7 and Chr12 are from variants that are heterozygous in females and homozygous in males.
Model for sex determination
Twenty-five and three variants were heterozygous for all seven re-sequenced males and homozygous for all six females in Chr21 and scaffold_000374, respectively (Supplementary Table 1). Furthermore, both of these scaffolds showed major tracts where read mapping was absent, or very low, in all females (Fig. 3). These two features, where males carry additional genomic sequence with co-located sequence divergence seen in the form of higher frequencies of heterozygous variants, are characteristic of an XX/XY sex-determining system. While regions of Chr5 and Chr12 carry variants that are heterozygous in females and homozygous in males, suggesting a ZZ/ZW system, there is no co-located variation in read-depth. We therefore propose that an XX/XY model for sex determination is most likely in this species.
Identification of a candidate gene related to sex determination
A literature review of sex determination in fish uncovered 32 research publications, and from these a total of 132 candidate SD genes were collated, of which 64 were unique (Supplementary Table 2). We used these 64 candidate genes as queries to search the trevally reference genome using blastn, resulting in 2100 High Scoring Pairs (HSPs) with E-values less than 1e-6 (Table 2). Of these, 19 were detected in Chr21 and six were detected in scaffold_000374, with E-values ranging from 3e-50 to 7e-8. All HSPs on these scaffolds were with accessions described as cyp19a1a, cytochrome P450 aromatase, and cyp19b (Table 2). The highest hit from a blastn search using the first 3 kb of Chr21 (XP_018539124.1, a predicted aromatase of 521 amino acids from Lates calcarifer) was used to develop a gene model (Supplementary Table 3). Models of two additional aromatase gene paralogues were similarly developed (see Supplementary Table 3). These consist of a proposed sequence of 479 aa, located on scaffold_001201 and scaffold_001906 (Trevally paralogue 1), and of 445 aa located on Chr16 (Trevally paralogue 2). On alignment with representative sequences of cyp19a1a and cyp19a1b, and construction of a guide tree, the proposed sex-associated aromatase and Trevally paralogue 1 clustered with sequences of cyp19a1a, while Trevally paralogue 2 clustered with sequences of cyp19a1b (Fig. 4). Taken together, these data indicate that the sex-associated aromatase is an additional paralogue of cyp19a1a.
Sex marker development and validation
Three types of markers were designed and tested based on the candidate gene regions: gene-wide markers (n = 5, Supplementary Table 4), Y-allele markers (n = 15, Supplementary Table 5), and HRM markers (n = 15, Supplementary Table 6). One ('FW2/RV2_2359_2127') of the 15 successfully amplified HRM markers (Supplementary Table 6) and one ('FW1/RV1_412_421') of the 15 Y-allele-specific markers (Supplementary Table 5) were linked to the sex trait and showed complete genotype-to-phenotype concordance on the 96 sexually characterized trevally (Fig. 5A). Of the gene-wide specific amplification markers (Supplementary Table 4), designed by amplifying large fragments of the cyp19a candidate gene, two ('TRE_Cyp19a_FW1/RV1' and 'TRE_Cyp19a_FW2/RV2') amplified a PCR product present in all males and absent from the females (Fig. 5B shows only the first marker's results).
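The genotype-to-phenotype concordance reported above reduces to a simple cross-tabulation. The R sketch below uses placeholder vectors in place of the real laboratory calls:

# Concordance of a presence/absence sex marker with histological sex for the
# 96 validation fish; both vectors are illustrative placeholders.
marker_pos <- c(rep(TRUE, 42), rep(FALSE, 54))     # PCR band present?
histo_sex  <- c(rep("male", 42), rep("female", 54))
table(marker_pos, histo_sex)                       # off-diagonals = discordant
concordance <- mean((marker_pos  & histo_sex == "male") |
                    (!marker_pos & histo_sex == "female"))
concordance                                        # 1.0 = absolute agreement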
Discussion
Here we present the first near-complete genome assembly of New Zealand trevally (Pseudocaranx georgianus), one of the first for a carangid species, and its use together with whole-genome variation of males and females to identify sex-linked regions. A genome for golden pompano (Trachinotus ovatus) has been assembled at the chromosome level [14]. Another genome, for California yellowtail (Seriola dorsalis), has been developed; however, it was not resolved into chromosome-scale scaffolds [47]. Our assembly covers the 24 chromosomes expected for the family Carangidae; it is highly contiguous and only includes a small proportion of scaffolds that could not be anchored. The trevally genome will be useful for assembling other fragmented carangid genomes based on synteny, such as those of other Seriola spp., which are economically important for aquaculture around the globe [48]. The genomic resources developed in this work will also enable more routine genotyping work; for example, the genome will assist analyses that require linkage disequilibrium knowledge, SNP marker checking and selection, and Gene by Environment Association (GEA) analyses of wild trevally populations, to reveal local adaptation of different stocks. The sex marker will prove to be a valuable resource to determine sex in aquaculture breeding programmes, particularly in immature fish, to ensure optimal sex ratios in elite broodstock lines. The downside of a presence/absence assay, however, is that it does not include a control for PCR failure, which can lead to males with failed PCRs being mis-classified as females. This can be prevented by adding an internal control, e.g. a second pair of primers to amplify a region of different length. The ability to genetically sex trevally will also facilitate partitioning of datasets by sex, determination of sex-linked and sex-impacted traits, characterisation of sex-specific trade-offs in growth and maturation, and identification of possible sex-linked physiological optima. We discuss our approach and findings, outline resulting applications and implications, and provide insights into how our results improve the overall understanding of the genetics of sex determination in teleost fishes.
We chose two strategies to reveal the genomic regions linked to SD in trevally. First, we screened for genomic variants that were commonly, or always, in the heterozygous state in one sex and a homozygous state in the other. We discovered four regions with high numbers of variants seen in the heterozygous state in all seven male fish assessed, which were homozygous in all six female fish assessed. One region was approximately 6 kb and located at the proximal end of a chromosome-scale scaffold (Chromosome 21), while the other three regions span three short scaffolds (scaffold_000353, scaffold_000374 and scaffold_001951, of 5647, 3469 and 4013 bp in size, respectively). The second strategy was based on read-depth variation between the sexes. We found higher read-depth in males compared to females along the same four scaffolds. Because the re-sequenced females showed absent regions compared to the reference Trevally_v1 assembly, we now hypothesise that the unsexed juvenile fish used as a specimen for genome assembly must have been a male.
Fig. 4 Guide tree based on peptide sequences of ten NCBI entries of each of cyp19a1a and cyp19a1b, along with the trevally sex-associated aromatase (Trevally_sex_gene) and two trevally paralogues (Trevally_paralogue_1 and Trevally_paralogue_2). A human cyp1a sequence was included as an outgroup.
Our results also underscore the need for studies to go beyond SNPs in their data analysis and to include the wider spectrum of structural genomic variants, including copy-number variants such as insertions, duplications and deletions, as well as fusions, fissions and translocations, to increase the power of SD detection and to better detail the full extent of sexually divergent regions [49,50]. An increasing number of studies, including on teleost species [51], reveal that structural genomic variants account for more genome-wide base-pair variation than SNPs, and thus hold enormous potential to act as a potent substrate in processes involved in the eco-evolutionary divergence of species.
The region linked to sex determination on the pseudochromosome scaffold_001800 (Chromosome 21) is small (~6 kb), and could have been easily missed with other methods involving less comprehensive variant detection, such as reduced-representation genotyping by sequencing (GBS). This illustrates how our strategy, using a full genome assembly coupled with the full re-sequencing of sexed individuals, efficiently enabled us to pinpoint this region, develop sex-specific markers, and identify a candidate gene. Interestingly, the sex-linked short scaffolds may be unanchored due to difficulties in resolving the genome assembly in the SD region. The divergence between the Y and X alleles may have prevented the Meraculous assembler from collapsing both haplotypes. Long read sequencing and a phased assembly would be useful to resolve this issue in the future.
In contrast to mammals and birds, cold-blooded vertebrates, and in particular teleost fishes, show a variety of strategies for sexual reproduction [52]. Sex chromosomes in teleosts can either be distinguishable cytologically (heteromorphic) or appear identical (homomorphic). In both cases, one sex is typically heterogametic (possessing two different sex chromosomes) and the other one homogametic (a genotype with two copies of the same sex chromosome). A male-heterogametic system is called an XX-XY system, and female-heterogametic systems are denoted as ZZ-ZW, and both types can be found side by side in closely related species [52]. Close relatives of trevally show the ZZ-ZW type of sex-determination, e.g. the Japanese amberjack (Seriola quinqueradiata). Evidence for a ZZ-ZW type of sex-determination would come from a higher number of heterozygous SNPs in females combined with a higher number of deletions in males (the latter hinting at a lack of the W-chromosome) [15]. In another closely related species, golden pompano, a SNP found within the gene Hsd17b1 was detected as a sex-specific marker, being heterozygous in females and homozygous in males, also supporting a ZZ-ZW system in this species. Yet in trevally the opposite is seen. When examining the SD region between the sexes, we found that in all instances males were heterozygous while all females were homozygous. A similar pattern was seen in the number of deletions. Of the 418 deletions detected, all were located in females and none in males, when comparing the data with our male reference genome (note that male-specific deletions would only be observed if our reference genome were that of an XX female, due to Y degradation). Taken together, this all strongly indicates that trevally has an XX-XY sex determining system.
Fig. 5 (A) Bar plot of the Y marker ('FW1/RV1_412_241') and HRM marker 2 ('FW2/RV2_2359_2127') cycle amplification levels measured by HRM on a LightCycler480 (Roche) real-time PCR instrument for males (blue) and females (orange); the y-axis is the number of DNA samples and the labels high or low indicate amplification levels. (B) The full, uncropped 0.9 % agarose gel stained with RedSafe™, showing PCR products (~2.5 kb) of the sex-specific gene-wide marker ('TRE_Cyp19a_FW1/RV1'); the text above each lane denotes either M (male) or F (female).
Other teleost fish with a similar XX-XY sex-determining mechanism have been well described. In the Atlantic cod (Gadus morhua), studies found deletions in females (hinting at the lack of a Y-chromosome), and males showed high SNP heterozygosity in the sex determination gene zkY [53] (confirmed with diagnostic PCR). In medaka (Oryzias latipes), the XX-XY sex chromosomes were determined using genetic crosses and the tracking of sex-linked markers [54]. Recent studies have also revealed the putative sex gene for two carangids, the greater amberjack (Seriola dumerili) and the Californian yellowtail (Seriola dorsalis) [15]. Biochemical analyses in greater amberjack showed a missense SNP in the Z-linked allele of the 17β-hydroxysteroid dehydrogenase 1 gene (Hsd17b1) [55]. In Californian yellowtail, Hsd17b1 was found in the SDR, identified by deletions in the female sex, like the SDR in trevally; however, females and not males were heterogametic in yellowtail [47]. The Hsd17b1 gene catalyses the interconversion of estrogens (estrone <-> estradiol) and androgens (androstenedione <-> testosterone). The Hsd17b1 gene can thus be classified as an estradiol-synthesising sex determination gene, just like cyp19 [15], because cyp19 converts androgens to estradiol (testosterone -> estradiol).
Our results provide strong evidence that two small genomic regions form the major part of the SD locus of trevally. The presence of a cyp19a1a-like gene within these sex-associated regions strongly implicates a role for this gene in the sex determination of this species. The absence of female reads aligning to the gene sequence, together with the male-specific PCR amplification of markers based on the gene, indicates that it is specific to male fish and suggests it might play a role in the masculinisation of genetically male fish.
Previous research has demonstrated that cyp19a1b catalyses the irreversible conversion of the androgens androstenedione and testosterone into the estrogens estrone and estradiol, respectively [56]. Recent genomic investigations have detailed that the two variants of the cyp19 gene (cyp19a1a and cyp19a1b) seen in most teleost fish were derived from the teleost-specific whole genome duplication (3R) and evolved through subfunctionalisation [57]. Expression of variant A (cyp19a1a) is restricted to the gonads (mainly the ovary), whereas the B variant (cyp19a1b) is expressed in the brain and the pituitary [58]. In studies of the genus Seriola, which is in the same family as trevally, variant A is only expressed in the ovaries [59]. In males, the presence of this gene appears to be related to spermatogenesis and testicular development in some species [60], something that is also found in vertebrate species outside of the teleosts [61]. Stage-specific gene expression during spermatogenesis in European bass (Dicentrarchus labrax) gonads, for example, has revealed that cyp19a1a at lower levels has a regulatory effect at the initial stages of spermatogenesis [55]. In addition to this regulatory effect, cyp19a1a has also been implicated in the differentiation of sex in black porgy (Acanthopagrus schlegeli), where high levels were expressed during early testicular development [62]. Females nevertheless have higher expression of this gene than males at all ontogenetic stages. This is probably because, in addition to regulating ovarian differentiation through higher expression levels during early sex differentiation [63], this gene is also an important factor in the female reproductive cycle [64].
Variant B, which is expressed mostly in the brain, is attributed to the control of reproduction and sex-related behaviour. RT-PCR analysis of the hermaphroditic mangrove killifish (Rivulus marmoratus) showed that cyp19a1b is expressed in both the male and hermaphroditic fish, whilst cyp19a1a was completely absent in males [65]. In addition, a study in which cyp19a1b levels were artificially lowered in male guppy (Poecilia reticulata) showed that these fish experience a reduction in the performance of male-specific behaviours [66]. Females also express cyp19a1b, but this expression is mainly restricted to the period around spawning. Work on both zebrafish (Danio rerio) and channel catfish (Ictalurus punctatus) shows an increase in cyp19a1b right before the onset of and during spawning, while a decrease to low levels of cyp19a1b is found outside of the reproductive period [58]. Taken together, these studies are consistent with cyp19a1b being more male-linked compared to cyp19a1a, and conversely, with cyp19a1a being associated with female phenotypes.
The presence of a sex-associated cyp19a1a-like gene, in addition to autosomal paralogues of cyp19a1a and cyp19a1b, implies a duplication of cyp19a1a and its up-recruitment to assume the role of the sex-determining gene in trevally. Up-recruitment of key SD genes from lower in the sex-determination pathway was originally proposed by Wilkins [67], and a number of examples supporting the hypothesis are seen among teleost fish. In Oryzias latipes, sex determination is mediated in the embryo by Dmy, a Y-specific duplication of Dmrt1, which itself is expressed much later during development of the testes [68]. Other similar examples include GsdfY in Oryzias luzonensis [69] and amhy in Hypoatherina tsurugae [70]. Evidence of the suitability of cyp19a1a for duplication and up-recruitment as an SD master gene is seen in the plasticity of the gene, shown by its common recruitment among cichlids into roles other than its usual role of aromatisation of androgens into estrogens within the ovaries [71]. Further research is required to elucidate the role of the cyp19a1a-like gene and to better understand its function in the sex determination of trevally. Final determination of the up-recruitment of cyp19a1a to the sex-associated aromatase as the master SD gene of trevally will require closer analysis of the expression of the sex-associated aromatase, cyp19a1a and cyp19a1b within the brain, testes and ovaries during development.
Conclusions
As a greater number of fish genomes are sequenced, it is likely that more genes involved in the regulation of sex will be discovered. This will provide much needed data for future comparative genomic work to track the evolutionary processes and patterns governing sex evolution across close and distant teleost lineages. Given the importance of trevally and other carangid species for aquaculture production (e.g. Seriola) and wild fisheries [18], our reference genome will contribute to accelerating marker-assisted breeding programmes, and will aid genomics-informed fisheries management programmes, by providing insights into sex ratios and sex specific effects [72]. This genome assembly was a key resource to detect a sex-determining region in trevally, and it will be a substantial resource for a variety of research applications such as population genomics and functional genomics, in both cultured and wild populations of this and other carangid species. The developed resources will further studies into teleost evolution, specifically the evolution of sex determination, which has proven to be a complex and highly variable trait in fish. | 2021-04-30T13:23:25.301Z | 2021-04-26T00:00:00.000 | {
"year": 2021,
"sha1": "40b497197b60c4d08c838a55732a0d83c294bea7",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-021-08102-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a323031840da6d6a45f91dcf5bad0d48eac9e34d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
96456461 | pes2o/s2orc | v3-fos-license | Characteristics and Sources of Water-Soluble Ions in PM2.5 in the Sichuan Basin, China
To track particulate pollution in the Sichuan Basin, sample filters were collected at three urban sites. Characteristics of water-soluble inorganic ions (WSIIs) were explored and their sources were analyzed by principal component analysis (PCA). During 2012-2013, the PM2.5 concentrations were 86.7 ± 49.7 μg m−3 in Chengdu (CD), 78.6 ± 36.8 μg m−3 in Neijiang (NJ), and 71.7 ± 36.9 μg m−3 in Chongqing (CQ). WSIIs contributed about 50% to PM2.5, and 90% of them were secondary inorganic ions. NH4+ and NO3− roughly followed the seasonal pattern of PM2.5 variations, whereas the highest levels of SO42− appeared in summer and autumn. PM2.5 samples were most acidic in autumn and winter, but were alkaline in spring. The aerosol acidity increased with the increasing level of anion equivalents. SO42− primarily existed in the form of (NH4)2SO4. Full neutralization of NO3− by NH4+ was only observed at low levels of SO42− + NO3−, and NO3− existed in various forms. SO42− and NO3− were formed mainly through homogeneous reactions, with heterogeneous reactions also evident under high relative humidity. The main identified sources of WSIIs included coal combustion, biomass burning, and construction dust.
Introduction
Water-soluble inorganic ions (WSIIs) are a major part of fine particles (PM2.5, particulate matter with an aerodynamic diameter less than 2.5 µm). Of the various components, secondary inorganic ions (SII), including sulfate (SO42−), nitrate (NO3−), and ammonium (NH4+), are the predominant species and account for more than 90% of WSIIs [1]. Nationwide, SII contribute about 25%-48% to PM2.5 mass, and are attributable to about 60% of the visibility reduction in China [2]. Moreover, they also play important roles in atmospheric acidification and climate change [3,4]. Characteristics of the WSII pollution in many cities of China have been studied, and the formation of SII has always been a focus [5]. Sulfate is primarily formed through homogeneous gas-phase oxidation of sulfur dioxide, while heterogeneous transformation processes, i.e., metal-catalyzed oxidation, H2O2/O3 oxidation, and in-cloud processes, are also reported [6] (p. 348), [7]. Both homogeneous reaction via NO2 oxidation by OH radical and O3, and the heterogeneous hydrolysis of N2O5 on preexisting aerosols, are important pathways of nitric acid formation [8].
The annual averages of total WSIIs were 43.0 ± 27.9, 36.2 ± 18.4, and 35.4 ± 18.4 μg m −3 in CD, NJ, and CQ (Table 1), which accounted for 49.9%, 46.1%, and 49.4% of the PM2.5 mass, respectively.Although the average WSIIs in PM2.5 showed minor differences among sites, there were large fluctuations within each site, from a few to a couple of hundred μg m −3 .For instance, WSII levels in CD were lower than 10 μg m −3 during clean period (PM2.5 < 35 μg m −3 ), and they increased to higher During 2012-2013, an intensive haze campaign was carried out in the region and detailed information about fine particle pollution was gathered from Chengdu, Chongqing, and Neijiang (a medium-sized city in the Sichuan Basin).The one-year field sampling data is summarized, here, to provide a comprehensive study of the WSIIs in PM 2.5 of the Sichuan Basin.The WSII property is analyzed from annual and seasonal perspectives, and the acidity characteristics of the sulfate-nitrate-ammonium system and nitrate formation are investigated in the three cities of different sizes.The indicators reflected from the WSIIs are also explored to track the sources of PM 2.5 in the region.
Site Description and Field Sampling
Locations of the three sampling sites in the Sichuan Basin are denoted in Figure 1b. Chengdu is the provincial capital of Sichuan Province and is surrounded by many small and medium-sized cities in the west of the Sichuan Basin. The sampling site in Chengdu is on the roof of a sixth-floor building with a height of 28 m (CD, 104°6′ E, 30°36′ N), which stands beside a main road with high traffic density. The site is representative of urban air quality, combining the influence of local vehicular emission, residential emission and regional pollution.
Chongqing is a municipality that is directly administrated by the Central Government. The sampling site in Chongqing is on the roof of a commercial building (CQ, 29°37′ N, 106°30′ E) in Yubei District in the downtown region (Figure 1b), and the sampling height is 35 m. The site is surrounded by main roads and office buildings. The third sampling site is on the roof of the Neijiang Environmental Monitoring Center (NJ, 105°4′ E, 29°42′ N) at a height of 25 m. Neijiang is located 150 km southeast of Chengdu and 145 km west of Chongqing, with an area of 75 km2 and a population of about 1 million in the downtown region.
During May 2012 to May 2013, particulate samples were synchronously collected at the above three sites. Both PM2.5 and PM10 were sampled once every six days on 47 mm teflon filters using a four-channel sampler at a flow rate of 16.7 L/min (model: TH-16A, Tianhong Instrument Co., Ltd., Wuhan). The sampling process was carried out by trained staff of a local monitoring station, and the flow rate of the sampler was checked monthly using a bubble flow meter (Gilian Gilibrator 2, Sensidyne, US) to ensure collection efficiency. The sampled filters were stored in a refrigerator at −18 °C and transported by air. The filters were balanced and weighed in a super-clean laboratory with controlled temperature (20 ± 1 °C) and RH (40 ± 3%), both before and after the sampling.
Water-Soluble Ion Measurement
Samples on teflon filters were first extracted ultrasonically using 10 mL ultrapure water (18.5 MΩ cm−1) for 30 minutes. The aqueous extract was filtered through a 0.45 µm water filter and the ion concentrations were determined using ion chromatography (Dionex, ICS 2000). A Dionex separator column of AS11-HC with KOH eluent was used for anion analysis (NO3−, SO42−, and Cl−), and a cation analytical column of CS12A with an eluent of 20 mM methyl sulfonic acid was used to analyze inorganic cations (Na+, NH4+, K+, Mg2+, Ca2+). A careful quality assurance and quality control (QA/QC) procedure was applied. Reference materials from the National Institute of Metrology, China were used as standards. Blank and standard samples were repeated every ten samples. Examples of a calibration curve of standard samples are displayed in the Supplementary Material.
Results and Discussion
3.1. Concentrations of PM10, PM2.5, and WSIIs

Table 1 summarizes the observed particle and WSII concentrations at the three sampling sites. During 2012-2013, the annual average PM10 and PM2.5 concentrations were 125.8 ± 74.4 and 86.7 ± 49.7 µg m−3 in CD, 116.3 ± 54.7 and 78.6 ± 36.8 µg m−3 in NJ, and 101.0 ± 51.7 and 71.7 ± 36.9 µg m−3 in CQ, respectively. Obviously, they all exceeded the latest NAAQS-II issued in 2012, and the annual PM2.5 concentrations were more than 2 times the 35 µg m−3 limit. Daily PM2.5 levels in more than one-third of the sampling days surpassed the daily 75 µg m−3 criteria, and there were 8, 3, and 3 heavy pollution days in CD, NJ, and CQ with PM2.5 higher than 150 µg m−3. The average PM2.5/PM10 ratios were 0.72 in CD, 0.69 in NJ, and 0.71 in CQ, indicating a predominance of fine particulate pollution in the Sichuan Basin. The annual averages of total WSIIs were 43.0 ± 27.9, 36.2 ± 18.4, and 35.4 ± 18.4 µg m−3 in CD, NJ, and CQ (Table 1), which accounted for 49.9%, 46.1%, and 49.4% of the PM2.5 mass, respectively. Although the average WSIIs in PM2.5 showed minor differences among sites, there were large fluctuations within each site, from a few to a couple of hundred µg m−3. For instance, WSII levels in CD were lower than 10 µg m−3 during clean periods (PM2.5 < 35 µg m−3), and they increased to higher than 90 µg m−3 under heavily polluted periods (PM2.5 > 150 µg m−3). SO42− was the most abundant species of WSIIs, with an average concentration of 17.7 ± 11.2 µg m−3 in CD, 18.1 ± 10.0 µg m−3 in NJ, and 17.6 ± 9.6 µg m−3 in CQ. Annual average concentrations of the other ions were ranked in the order of NO3− > NH4+ > Cl− > K+ > Na+ > Ca2+ > Mg2+ in CD, whereas the order was NH4+ > NO3− > K+ > Cl− > Ca2+ > Na+ > Mg2+ in both NJ and CQ (Table 1). The secondary inorganic components, in total, constituted about 90% of the total WSIIs (89.3% in CD, 92.4% in NJ, and 94.2% in CQ), and the rest of the ions each had a minor contribution. Among the three cities, PM2.5 and WSII levels were highest in CD, followed by NJ and CQ. The three cities had roughly the same SO42− levels, while CD was characterized by higher NO3− and NH4+ concentrations, indicating that the sampling site in CD was more affected by motor vehicles from its surrounding road. Specifically, the concentration of chloride was also the highest in CD, which might be associated with coal combustion. CD and NJ suffered from higher loadings of K+ (Table 1), a diagnostic tracer for intensive biomass burning in the suburban regions [17]. When compared with the observed values in 2011, the levels of WSIIs in Chengdu have shown a downward trend except for NO3− and Cl− [8]. In Chongqing, the PM2.5 and SO42− levels have decreased by 57.3% and 31% compared to those in 2005-2006, whereas the NO3− level increased by 43% (7.8 vs. 5.46 µg m−3) [13]. The similar trend of decreasing SO42− and increasing NO3− in both Chengdu and Chongqing was related to the strict enforcement of desulfurization engineering and the soaring of the vehicular population in large cities of China [18]. After 2013, the decrease of PM2.5 concentration in Chongqing was minor, stabilizing around 67.5 µg m−3 from 2015 to 2016 [15]. However, there was still an increase in NO3− (10.9 µg m−3) [15], which further highlights the importance of vehicular emissions in Chongqing.
Results of the study were also compared with previous measurements conducted in other cities of China. CD displayed lower loadings of PM2.5 and SO42− than Deyang, a medium-sized city in the Sichuan Basin located on the diffusion air pathways of Chengdu [19]. WSII levels in the Sichuan Basin were generally lower than in cities of northern China, i.e., Handan (2013-2014) [20], Shijiazhuang (2009-2010) [21], and Taiyuan (2009-2010) [22]. However, the SO42− and NH4+ levels were higher than those of Beijing during 2009-2010 [23], despite lower PM2.5 and NO3− levels. When compared with cities in southern China, like Shanghai and the Pearl River Delta (PRD) [24], the pollution situation of PM2.5 and WSIIs was more serious in the Sichuan Basin.
Seasonal Variations of PM2.5 and WSIIs
The seasonal variations of PM2.5 and WSIIs at the three sites are depicted in Figure 2. Notably, winter had the highest PM2.5 levels (108.1, 97.6, and 97.5 µg m−3 in CD, NJ, and CQ, respectively), and was the most heavily polluted season in the Sichuan Basin. Autumn in CD and NJ also recorded high PM2.5 concentrations (101.6 µg m−3 in CD and 81.2 µg m−3 in NJ), whereas spring and summer were relatively clean (spring: 72.2 µg m−3 in CD and 67.5 µg m−3 in NJ; summer: 68.3 µg m−3 in CD and 68.7 µg m−3 in NJ). In CQ, PM2.5 concentrations in spring (57.3 µg m−3), summer (64.9 µg m−3), and autumn (68.0 µg m−3) were rather close to each other.

The seasonal patterns of WSIIs were attributable to local and regional source variations between seasons, as well as meteorological factors, which affected their formation, transformation, and transport. The seasonal variations of NH4+ followed the changes of PM2.5 in each city, and NH4+ levels were highest in winter. Undoubtedly, the sulfate concentrations were also high in winter, which was due to poor dispersion in the cold season, an enhanced in-cloud process under high RH, and long contact time for gas-liquid reactions under stable meteorology. Specifically, NJ and CQ suffered from the highest loading of sulfate in summer, though the PM2.5 concentrations were low. As a typical secondary ion, SO42− formation via homogeneous gas-phase reaction was greatly enhanced at high temperature and under intensive solar radiation in summer. It is worth noting that autumn samples in CD showed an elevated level of sulfate with respect to summer (Figure 2, 17.9 µg m−3 in summer vs. 19.5 µg m−3 in autumn), which might be ascribed to the elevated PM2.5 loading in that season.
Different from sulfate, the seasonal pattern of nitrate was characterized by winter maxima, medium autumn levels, and spring/summer minima at the three sites (Figure 2). Temperature and relative humidity are two important meteorological factors influencing the thermodynamic features of nitrate, and high temperature and low RH are highly favorable for nitrate volatilization [25]. Therefore, the low temperature and high RH in winter and autumn are beneficial for nitrate stabilization. Moreover, the high loadings of PM2.5 in winter provided more aerosol surfaces for heterogeneous formation of nitrate [26].
Stoichiometric Analysis of Cations and Anions
To examine the ion balance and acidity of the PM2.5 samples, the ion mass concentrations (µg m−3) are converted into microequivalents (µmol m−3) by the following equations for the anion equivalent (AE) and cation equivalent (CE).
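A standard formulation of these two equivalents, assuming the usual charge-equivalent weights (molar mass divided by ion charge) for the eight measured species, is:

$$\mathrm{AE} = \frac{[\mathrm{SO_4^{2-}}]}{48} + \frac{[\mathrm{NO_3^{-}}]}{62} + \frac{[\mathrm{Cl^{-}}]}{35.5}$$

$$\mathrm{CE} = \frac{[\mathrm{NH_4^{+}}]}{18} + \frac{[\mathrm{Na^{+}}]}{23} + \frac{[\mathrm{K^{+}}]}{39} + \frac{[\mathrm{Mg^{2+}}]}{12} + \frac{[\mathrm{Ca^{2+}}]}{20}$$

where each bracketed term is the mass concentration in µg m−3, so AE and CE come out in the microequivalent units used above.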
Figure 3 illustrates the scatter plots of AE vs. CE in the four seasons of CD, NJ, and CQ. Strong correlations between anion and cation equivalents were found for all three cities (~1.0), supporting that the measured eight ions were the dominant species in the PM2.5 ionic components. In CD, most of the samples in autumn and winter were above the 1:1 (AE/CE) line, indicating an acidic feature. By contrast, the majority of the samples in spring fell below the 1:1 line, demonstrating a deficiency of anions which might be associated with more alkaline dust particles. CO32− and HCO3− were not measured by the method, and also contributed to the anion deficiency. In the summer of CD, most of the samples generally showed a balance between anions and cations, while some of them also denoted acidic features which might result from the enhanced formation of sulfate and the loss of cations from the volatilization of nitrate and ammonium. Similar seasonal patterns of PM2.5 acidity were also observed in NJ and CQ (Figure 3b,c). Interestingly, the scatter plots (Figure 3) show that aerosol acidity increased with the level of AE, indicating that PM2.5 samples under heavy pollution were mostly acidic. Tian et al. [15] also found increased acidity with aerosol pollution level. High humidity and low wind speed were common meteorological conditions for heavy pollution [27]. They were unfavorable for horizontal dispersion and vertical mixing of pollutants but beneficial for the formation of nitrate and sulfate, therefore resulting in the acidic feature.
Chemical Forms of Nitrate and Sulfate
The scatter plots of NH4+ vs. SO42−, SO42− + NO3−, and SO42− + NO3− + Cl− (all in equivalent concentrations) in CD, NJ, and CQ are depicted in Figure 4. As (NH4)2SO4 is less volatile and preferentially formed compared to NH4NO3 and NH4Cl, the relationships between NH4+ and SO42− are first explored to investigate the chemical forms of sulfate and nitrate. Figure 4a-c shows that NH4+ was closely related with SO42−, and the data were mostly above the 1:1 (NH4+/SO42−) line, suggesting the complete neutralization of SO42− by NH4+; (NH4)2SO4 was thus the major species. However, there were a few exceptions below the 1:1 (NH4+/SO42−) line in the summer of NJ and CQ (Figure 4a-c). It was understandable that the formation of SO42− was greatly enhanced under the high temperature of summer while NH4+ was more easily removed by decomposition. Therefore, those samples did not have sufficient NH4+ to fully neutralize SO42−, and NH4HSO4 existed in summer. When it came to nitrate, the samples in spring mostly had enough NH4+ to neutralize both SO42− and NO3− (Figure 4d-f) and formed (NH4)2SO4 and NH4NO3, in spite of a few exceptions. In other seasons, NH4+ was able to neutralize the secondary anions when their levels were low (SO42− + NO3− < 0.5 µmol m−3) (Figure 4d-f). In fact, the abundance of NH4+ almost equaled the sum of SO42−, NO3−, and Cl− under low PM loadings (Figure 4g-i), and the dominant anions existed in the form of (NH4)2SO4, NH4NO3, and NH4Cl. However, under high SO42− and NO3− levels (SO42− + NO3− > 0.5 µmol m−3), NH4+ was far from fully neutralizing them (Figure 4d-f). It was observed in most previous studies that high NO3− levels were associated with high levels of NH4+ [28]. In contrast, the relatively high NO3− observed in the present study occurred with moderate levels of NH4+, suggesting that the formation rate of nitrate may be much higher than that of other ions. The high NO3− might also be associated with high levels of NO2 under heavy pollution.
The correlation coefficients between NO3− and the other cations in PM2.5 were further calculated in Table 2 to identify the chemical forms of nitrate. Na+ and K+ were found to be correlated with NO3− in most seasons, and NaNO3 and KNO3 were thus also major chemical species in the aerosol particles. Notably, there were exceptionally high levels of NO3− in the winter of CD, where NO3− was significantly correlated with all cations and existed in the various forms of NH4NO3, NaNO3, KNO3, Mg(NO3)2, and Ca(NO3)2.
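The Pearson coefficients in Table 2 can be computed as in the sketch below; the two series here are made-up winter values for illustration only, not the campaign data.

```python
import numpy as np

# Hypothetical winter concentration series (ug m-3), for illustration only.
no3 = np.array([12.1, 18.4, 25.3, 9.7, 30.2, 14.8])
na = np.array([0.41, 0.55, 0.72, 0.33, 0.88, 0.47])

r = np.corrcoef(no3, na)[0, 1]  # Pearson R, as reported in Table 2
print(f"R(NO3-, Na+) = {r:.2f}")
```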
Formation Mechanism of Nitrate and Sulfate
The sulfur oxidation ratio (SOR), defined as n-SO42−/(n-SO2 + n-SO42−), and the nitrogen oxidation ratio (NOR), defined as n-NO3−/(n-NO2 + n-NO3−), in CD, NJ, and CQ are listed in Table 3 and used to indicate the secondary transformation processes. Generally, the SOR values were much higher than 0.10 (Table 3), demonstrating the occurrence of strong secondary oxidation of SO2 to SO42− throughout the year [29]. The interseasonal variation of SOR peaked in summer in both CD and CQ, which is explicable by the accelerated homogeneous gas-phase oxidation of SO2 under high temperature [30]. Moreover, the increased oxidizing capacity from the greater production of ozone in summer also promoted SO42− formation. By contrast, the highest SOR in NJ appeared in autumn instead of summer, and a good correlation was found between SOR and RH (r = 0.58). This suggested that high RH also increased the formation of SO42− by promoting SO2 oxidation through heterogeneous reactions [31], i.e., metal-catalyzed H2O2/O3 oxidation and the in-cloud process. The good correlation between SOR and RH in the winter of CD (r = 0.66) and CQ (r = 0.71) also confirmed the existence of heterogeneous reactions. On days with elevated RH, the hygroscopic growth of sulfate would increase the liquid water content, and the aqueous phase on the aerosol surface could provide heterogeneous transformation vectors for gaseous pollutants (SO2). Therefore, elevated RH would largely promote the secondary formation of sulfate [32,33]. The annual average NOR in CD, NJ, and CQ all surpassed 0.1 (Table 3), indicating the existence of secondary oxidation of NO2 to NO3− [29]. As reflected in Table 3, the NOR values had a different seasonal pattern from SOR, reaching their maxima in winter. Although the absolute concentrations of sulfate and nitrate both increased with PM2.5 levels, their relative importance changed under different pollution levels. Table 4 lists the variations of the NO3−/SO42− ratio and the NOR/SOR ratio as a function of PM2.5 level. The continuous increase of the NO3−/SO42− ratio with PM2.5 concentration (Table 4), as well as the increase of the NOR/SOR ratio, indicated that NO2 oxidation under heavy pollution was more significant than that of SO2, and nitrate formation might play an important role in haze in the Sichuan Basin. This result is also supported by the findings of Hewitt [34] and Tian et al. [15]. It is worth noting that the high concentrations of nitrate in this study were mostly collected during hazy and humid weather with high sulfate and acidity; thus, the details of nitrate formation are discussed below.
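As a concrete illustration of these two indicators, the sketch below evaluates SOR and NOR from molar concentrations; the sample values are hypothetical, not the campaign data.

```python
def oxidation_ratio(n_product: float, n_precursor: float) -> float:
    """Generic oxidation ratio: n_product / (n_precursor + n_product)."""
    return n_product / (n_precursor + n_product)

# Hypothetical molar concentrations (micromol per cubic metre).
n_SO4, n_SO2 = 0.19, 0.55  # sulfate and its precursor SO2
n_NO3, n_NO2 = 0.12, 0.60  # nitrate and its precursor NO2

SOR = oxidation_ratio(n_SO4, n_SO2)  # sulfur oxidation ratio
NOR = oxidation_ratio(n_NO3, n_NO2)  # nitrogen oxidation ratio

# Values above ~0.10 are read as evidence of secondary transformation.
print(f"SOR = {SOR:.2f}, NOR = {NOR:.2f}")
```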
The samples were accordingly categorized into NH4+-rich conditions ([NH4+]/[SO42−] ratio > 1.5). According to Pathak et al. [28], the relationship between [NO3−]/[SO42−] and [NH4+]/[SO42−] can be used to show the formation pathways of NO3−. Under NH4+-rich conditions, a linear relationship exists between them, suggesting homogeneous gas-phase formation of NO3−; otherwise, hydrolysis of NOx on pre-existing aerosols is responsible for the high NO3− level [12,15,28]. In Figure 5, the relative abundance of nitrate ([NO3−]/[SO42−]) increased as the [NH4+]/[SO42−] ratio increased (r = 0.81 in CD, r = 0.75 in NJ, and r = 0.68 in CQ), suggesting that nitrate formation via gas-phase reaction became evident in the NH3-H+-SO42−-H2O system in aerosol [35,36]. In fact, the good relationship between the excess ammonium (excess [NH4+] = [NH4+] − 1.5[SO42−]) and nitrate (Figure 6) further confirmed that the homogeneous gas-phase formation of nitrate was significant. As reflected in Figure 6, the increase of nitrate seemed to surpass the increase of excess [NH4+] under high concentrations (NO3− > 0.25 µmol m−3). As more NO3− led to more ions, this could further explain the observed high acidity of PM2.5 under high pollution levels in Section 3.3. However, it is hard to ignore that some plots are rather scattered in Figure 5, and the relationship between [NO3−]/[SO42−] and [NH4+]/[SO42−] was therefore further explored under different acidity and RH conditions (Figure 7). The scatter plots of [NO3−]/[SO42−] and [NH4+]/[SO42−] for both AE/CE > 1 and AE/CE < 1 displayed significant linear relationships, highlighting the importance of the homogeneous gas-phase reaction. However, the correlation between [NO3−]/[SO42−] and [NH4+]/[SO42−] for samples with RH > 75% (R2 = 0.45) was lower than for samples with RH < 75% (R2 = 0.65), and the plots were more scattered (Figure 7b). The result was consistent with other research in Suzhou and Chongqing [15,16], which also tended to find better linear correlation under lower RH conditions (<75%). The scattered plots under RH > 75% implied the existence of mechanisms other than the homogeneous reaction. The critical parameters for heterogeneous formation of nitrate via N2O5 hydrolysis on pre-existing particles include particulate hygroscopicity, surface area, and acidity. On the one hand, high RH relates to greater water content and surface areas of aerosols, which may promote N2O5 uptake on the aerosol surfaces. On the other hand, the high concentrations of PM2.5 mass and large fractions of WSIIs under acidic conditions would favor the hydrolysis of N2O5 [34].
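As a worked illustration of the excess-ammonium definition used above (the concentrations are hypothetical):

$$\text{excess } [\mathrm{NH_4^+}] = 0.60 - 1.5 \times 0.30 = 0.15\ \mu\mathrm{mol\ m^{-3}}$$

for a sample with [NH4+] = 0.60 and [SO42−] = 0.30 µmol m−3; since 0.60/0.30 = 2.0 > 1.5, the sample is NH4+-rich, and the 0.15 µmol m−3 surplus is available to neutralize NO3− as NH4NO3.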
Source Analysis of WSIIs
Principal component analysis (PCA) is applied in the study using the SPSS version 16.0 software package to explore the sources of WSIIs in CD, NJ, and CQ. In the analysis, all the WSIIs are treated as variables, and factors explaining more than 80% of the total variance are extracted. Varimax rotation is then used to redistribute the variance and provide a more interpretable pattern of the factors. The three factors in each city, with their component loadings, eigenvalues, and explained variance, are displayed in Table 5.
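The study runs this procedure in SPSS; a rough open-source equivalent of the extraction-plus-varimax step is sketched below. The data matrix X is randomly generated here for illustration, so the ion names are the only link to the study.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data matrix: rows = samples, columns = the eight measured ions.
ions = ["Na+", "NH4+", "K+", "Mg2+", "Ca2+", "Cl-", "NO3-", "SO42-"]
X = np.random.default_rng(0).lognormal(size=(120, len(ions)))

# Standardize, then keep enough components to explain >80% of the variance.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
pca = PCA().fit(Z)
k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.80)) + 1
loadings = pca.components_[:k].T * np.sqrt(pca.explained_variance_[:k])

def varimax(L, max_iter=100, tol=1e-6):
    """Classic varimax rotation of a loading matrix (columns = factors)."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        LR = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (LR ** 3 - LR @ np.diag((LR ** 2).sum(axis=0)) / p)
        )
        R = u @ vt
        if s.sum() < d * (1.0 + tol):
            break
        d = s.sum()
    return L @ R

rotated = varimax(loadings)
print(dict(zip(ions, np.round(rotated, 2))))  # rotated loadings per ion
```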
Factor 1 in CD covered 61.1% of the total variance and had high loadings of Na+ (0.74), NH4+ (0.96), Cl− (0.77), SO42− (0.85), and NO3− (0.93). NH4+, SO42−, and NO3− are typical secondary ions, and Cl− is a tracer for coal combustion. Thus, factor 1 in CD was recognized as a mixture of secondary aerosols and coal combustion, and the good correlation between Na+ and Cl− indicated their similar origins or coexistence in aerosols. Factor 2 explained 14.9% of the total variance, with loadings of K+ and Ca2+ much higher than those of the other variables, indicating the contribution from biomass burning and natural dust. Factor 3 was responsible for 8.5% of the total variance and was heavily loaded by Mg2+ (0.89). Mg2+ comes from construction dust, and the factor was related to the extensive reconstruction work in urban Chengdu.
Figure 2. Seasonal variations of PM2.5 and WSIIs in CD, NJ, and CQ.

Figure 4. Scatter plots of ammonium and the major acidic anions in PM2.5 of (a,d,g) CD, (b,e,h) NJ, and (c,f,i) CQ.

Table 2. The correlation coefficients (R) between NO3− and cations in PM2.5 of CD, NJ, and CQ.

Table 3. Seasonal variation of sulfur oxidation ratio (SOR) and nitrogen oxidation ratio (NOR) in CD, NJ, and CQ.

Table 5. PCA factor loadings of WSIIs in CD, NJ, and CQ.

Acknowledgments: This work was supported by funding from the Ministry of Environmental Protection, China (NO.201009001) and the Funds for Excellent Teachers from Capital University of Economics and Business (2016). We also would like to thank the staff in Sichuan Provincial Environmental Monitoring Center, Neijiang Environmental Monitoring Station and Chongqing Environmental Monitoring Center for the help in the sample collection. | 2019-03-06T11:47:42.288Z | 2019-02-15T00:00:00.000 | {
"year": 2019,
"sha1": "515e8b911857f578c3f2d55c2e95c06ffeaa8a1a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4433/10/2/78/pdf?version=1550209925",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "515e8b911857f578c3f2d55c2e95c06ffeaa8a1a",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
248121951 | pes2o/s2orc | v3-fos-license | Study on ecological environment change of water source area of the middle route of South-to-North Water Diversion Project in the past 20 years
The South-to-North Water Diversion Project (SNWDP), which began operation in 2013, is an important step in China’s water resources development, and the middle route of SNWDP is one of the most important parts. However, this project has caused a certain negative impact on the ecological environment of Danjiangkou reservoir, the water source area of this project. This study uses DEM data and Landsat data for different years to analyze land-use changes in QGIS software and discusses the causes and impacts of the middle route of SNWDP. The results show that the present area of water and vegetation increased significantly, while the cultivated land and human area (including urban, rural and other areas of human activity) decreased obviously compared with 1999. The reasons for this change include natural conditions, engineering construction and policies, but policies such as raising the Danjiangkou dam and returning farmland to forests have played a bigger role. The impacts of SNWDP include water quality deterioration, soil erosion, changes in vegetation area, changes in biodiversity and changes in the lives of residents in the area. These impacts have positive reference significance for the construction of water conservancy undertakings in China in the future.
Introduction
The South-to-North Water Diversion Project (SNWDP) is one of the most important water conservancy projects in China. It has three lines: east, middle and west. The middle route of SNWDP diverts water from the Danjiangkou reservoir in the upper and middle reaches of the Han River, the largest tributary of the Yangtze River. It starts in Xichuan County, Henan Province, on the east bank of the Danjiangkou reservoir, and flows to Beijing, crossing the watersheds of the Yangtze River basin, the Huaihe River basin and the Yellow River along the west side of the Beijing-Guangzhou railway [1]. It focuses on solving the problem of water shortage in Henan, Hebei, Beijing and Tianjin, providing more than a dozen large and medium-sized cities along the route with water for living, industry and agriculture. The water supply area covers more than 155,000 square kilometers, with a total length of 1,277 kilometers of main channels and a 155-kilometer branch line supplying Tianjin. The project officially began in 2003 and was completed in 2013. On December 12, 2014, the project was officially put into operation [2]. By July 20, 2021, the first phase of the middle route of SNWDP had operated safely for more than 2,000 days, and 40 billion cubic meters of water had been transferred to the north, benefiting the cities and regions along the route. Figure 1 shows the schematic diagram of the middle route of SNWDP.
However, some problems have gradually emerged during the operation of the project. Due to the increase of reservoir water volume and the rise of the reservoir water level, the water quality and soil erosion situation is not optimistic, which brings many adverse effects on the environment [4]. Because of the needs of SNWDP, some residents had to emigrate, which changed the use of land in the Danjiangkou reservoir area. At the same time, the economic situation and re-employment levels of the relocated residents also need to be investigated and improved [5]. Therefore, this study used QGIS software to conduct a quantitative analysis of land-use change in the Danjiangkou reservoir area from 1999 to 2020 in order to find the causes and impacts of the middle route of SNWDP.
Methods
This study uses DEM data and Landsat data for different years to analyze land-use changes in QGIS software. The remote sensing images include Landsat-7 ETM+ and Landsat-8 OLI scenes from 1999, 2013 and 2020, used to quantify land use, which were obtained from the official website of the United States Geological Survey (https://earthexplorer.usgs.gov/). Various band combinations were used to produce visual representations of land use in the three study years. The DEM data indicate the terrain profile of the wetland region and help judge the general distribution of land-use types in the region.
This study uses QGIS software to clip the remote sensing images and generate surface reflectance layers. According to the "National Land Classification" and combined with the actual situation of the Danjiangkou reservoir wetland, the land types of the Danjiangkou reservoir were divided into four categories: vegetation, water area, plow land and human area.
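The paper does not spell out its classification workflow; the sketch below shows one common Python approach (spectral-index thresholding over a clipped Landsat stack). The file name, band order and thresholds are assumptions for illustration, not the study's actual settings.

```python
import numpy as np
import rasterio

# Hypothetical 3-band surface-reflectance stack (green, red, NIR) clipped to
# the Danjiangkou wetland; assumes a projected CRS with metre units.
with rasterio.open("danjiangkou_2020_sr.tif") as src:
    green = src.read(1).astype(float)
    red = src.read(2).astype(float)
    nir = src.read(3).astype(float)
    pixel_km2 = abs(src.transform.a * src.transform.e) / 1e6  # m^2 -> km^2

ndvi = (nir - red) / (nir + red + 1e-9)      # vegetation index
ndwi = (green - nir) / (green + nir + 1e-9)  # water index

# Crude thresholding into the study's four classes.
classes = np.full(ndvi.shape, 3, dtype=np.uint8)           # 3 = human/other
classes[ndwi > 0.2] = 0                                    # 0 = water
classes[(ndwi <= 0.2) & (ndvi > 0.5)] = 1                  # 1 = vegetation
classes[(ndwi <= 0.2) & (ndvi > 0.2) & (ndvi <= 0.5)] = 2  # 2 = cultivated

for cid, name in enumerate(["water", "vegetation", "cultivated", "human"]):
    print(f"{name}: {(classes == cid).sum() * pixel_km2:.1f} km^2")
```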
Results
After processing the remote sensing satellite images, the land-use distributions for the three years and the area data of each land type were obtained. Orange areas indicate cultivated land, green areas indicate vegetated areas, pink areas indicate human areas and blue areas indicate water.
Because of the topography, most areas of human concentration are located in the valleys in the west and the plains in the east, and most human areas are rural. The cultivated land is mainly distributed in the eastern plain, and the vegetation is distributed in the western hills. This constitutes the basic distribution of the whole wetland area.
The distribution of land-use types of the Danjiangkou reservoir from 1999 to 2020 is shown in Figure 2 and Figure 3. In the past 21 years, the main landscape types of the Danjiangkou reservoir were vegetated land and cultivated land; the area of human and cultivated land decreased greatly, while the area of vegetated land and water increased on the whole. The human area decreased the most, by 1974.809 km², its proportion falling from 27.50% to 8.28%. The vegetated area increased the most, by 2150.746 km², its proportion rising from 48.28% to 69.08%. The proportion of water area increased from 3.65% to 11.04%, while that of plow land decreased from 20.57% to 11.60% (Fig. 2).
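These proportions are consistent with the percentage-point changes quoted later in the conclusion; checking with the figures above:

$$27.50 - 8.28 = 19.22\ \text{pp (human)},\qquad 20.57 - 11.60 = 8.97\ \text{pp (plow land)},$$
$$11.04 - 3.65 = 7.39\ \text{pp (water)},\qquad 69.08 - 48.28 = 20.80\ \text{pp (vegetation)}.$$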
The reasons for the land-use change in the wetland area
The main changes took place in the natural environment and in the areas where people congregated. In 1999, the Danjiangkou reservoir wetland was undeveloped. By 2008, the landscape pattern had changed greatly: the construction of the middle route of SNWDP from 2000 onward and the resettlement of residents caused some damage to the ecological environment of the reservoir area, resulting in decreases of vegetation and arable land and aggravated water loss and soil erosion; combined with residents' weak ecological consciousness and poor environmental protection measures, the ecological environment of the reservoir area kept worsening. Since 2012, with the gradual completion of SNWDP, the improvement of residents' ecological awareness, and the continuous improvement of relevant institutions' ecological and environmental protection measures, the Danjiangkou reservoir wetland has gradually recovered and tended to be stable. This section mainly discusses the natural and the anthropogenic, policy-related causes [6].
Natural causes
The terrain of the Danjiangkou reservoir area is mainly hilly, with large relief, low vegetation coverage, poor ability to resist external erosion and destruction, and low self-regulation ability of the ecosystem, which is fragile on the whole. The soil in the region is mainly yellow-brown earth, which is viscous and poorly permeable. The surface layer is loose and thin, not resistant to drought or flood, easily eroded, and weak in resisting precipitation and water erosion [7]. The annual water volume in the region is large, which easily causes soil erosion. During the construction period from 1999 to 2008, these factors led to a decrease in the area of green plants and arable land and caused ecological damage. However, with the construction of the water source wetland reserve since 2012, the influence of natural factors has decreased.
Engineering and policy
This part discusses the impact of engineering construction and policies on land-use change. In the early stage of the middle route of SNWDP, the government raised the Danjiangkou reservoir dam to improve flood control and water supply. As a result, the land around the reservoir area was flooded, and since most rural residents lived close to the reservoir, a large number of people needed to be relocated. During the same period, however, the cities around the reservoir area developed rapidly and their land was less inundated, so the area of rural and urban human activity still increased at first [8]. After the water supply began in 2012, the demand for water increased year by year, so the water area increased again. At the same time, in order to protect the ecological environment of the reservoir area, measures such as returning farmland to forest and closing factories further led to the rapid reduction of human activity areas [6].
Water quality deterioration
The Danjiangkou dam has been raised to meet the demand of transferring 9.5 billion cubic meters of water from south to north in the near term and 12 to 13 billion cubic meters in the longer term [9]. The project enlarged the capacity of the reservoir and raised its normal water level, but it also caused the deterioration of water quality: the water in the reservoir inundated the surrounding land, causing nitrogen and phosphorus elements in the land to enter the water and resulting in serious water pollution. In the Chinese government's classification of water quality, a lower class number indicates better water, while a higher number indicates worse water; from Class I to Class V, water quality changes from the best to the worst. From 2004 to 2005, total ammonia was in the range of 0.73-2.22 mg/L, with the maximum value falling in Class V. During the monitoring from 2016 to 2020, about 70% of the water quality readings were in Class III and 25% in Class IV, indicating that the water quality was not optimistic [10]. It is noted that the water quality of the reservoir is also affected by sewage discharge.
Soil erosion
Soil erosion refers to the simultaneous loss of water and soil due to the influence of natural or man-made factors: rainwater cannot be absorbed on the spot and instead flows downslope, washing away the soil. The Danjiangkou reservoir faces a serious soil erosion situation. According to statistics, about 93.5% of the reservoir land suffers mild soil erosion, and on 6.4% of the land the erosion is moderate or serious [6]. The influence of soil erosion is very far-reaching, including the siltation of river reservoirs and water pollution; in serious cases it can even interrupt river flow and induce natural disasters. There are many reasons for this. First, the increase of the water storage area of the reservoir flooded the surrounding land, resulting in soil erosion. Second, the soil in the region is mainly yellow-brown soil, lime soil, etc.; the soil texture is sticky with poor permeability, and the surface layer is loose and thin and not resistant to drought or flood, so it is easily eroded. Third, the precipitation at the Danjiangkou reservoir is increasingly intense in summer, which leads to more serious soil erosion in that season. Fourth, the mountainous valleys in the west of the Danjiangkou reservoir alternate in relief and the terrain gradient is large, which aggravates the occurrence of soil erosion to a certain extent [11,12].
Changes in vegetation area
As mentioned above, the vegetation area of the Danjiangkou reservoir increased both after the middle route of SNWDP started construction and after its completion. The changes include the decrease of arable land area and the increase of woodland and grassland area. The reasons are manifold. One is the migration policy for the reservoir population, under which large numbers of people have left the Danjiangkou reservoir area. Another is the Chinese government's policy of returning farmland to forests. Returning cultivated land to forest means stopping, in a planned and step-by-step manner, the cultivation of sloping land that easily causes soil erosion; trees are then planted according to the principle of matching trees to sites, and forest vegetation is restored according to local conditions. The project of converting farmland to forest includes two aspects: converting slope farmland to forest, and afforestation of waste mountains and wasteland. Danjiangkou City was included in the pilot cities of the national program of returning farmland to forest in 2001 and was officially included in the program's implementation in 2002. Over nearly 20 years, the whole city has completed the return of 421,500 acres of farmland to forest, among which 172,000 acres were converted from farmland to forest and 249,500 acres were afforested barren mountains [13]. This project improves the green environment and is conducive to the sustainable development of the middle route of SNWDP.
Biodiversity impacts
The construction of the middle route of SNWDP has changed the ecological pattern of the Danjiangkou reservoir. First of all, the increase of the water storage area of the reservoir could change the zoning of vegetation; however, since vegetation is widely distributed and the inundated green space is not large, the impact on vegetation types is limited. Secondly, due to the increase of the storage area, the increase of nutrients in the water could benefit the survival of fish and other animals in the reservoir, and also provide sufficient food for animals that depend on fish, such as birds, which is of positive significance to the protection of biodiversity in the Danjiangkou area. Thirdly, the implementation of the middle route of SNWDP has changed the local ecological pattern: some plants and animals will be eliminated because they cannot adapt to the new ecological environment, while those that can adapt will gain suitable prospects for development [14].
Human water conservancy projects will inevitably disturb the original ecological environment, but fortunately, advanced scientific and technological guidance can reduce the damage to biodiversity to the greatest extent. Coupled with the good ability of animals to adapt to the environment, biodiversity can even be increased. The protection of biodiversity requires strict compliance with laws and regulations [15], as well as effective measures and alternatives to ensure the sustainable development of the Danjiangkou area.
Impacts of the middle route of SNWDP on residents
According to this study, the sharp decline in the living area after 2013 is closely related to China's reservoir migration policy. Since the 1980s, the Chinese government has been experimenting with reservoir resettlement. About 382,000 people have left the Danjiangkou reservoir area for new homes since resettlement for the construction of the middle route of SNWDP was piloted in 2008. The large decrease in population has also led to a decrease in the area of arable land and an increase in areas such as woodland and grassland. After the emigration, the Chinese government took appropriate subsidy and guidance measures, including compensation for land acquisition and housing construction, to increase the income of the relocated villagers. At the same time, the government actively strengthened psychological counseling and language adaptation for the migrants, and most people are satisfied with the results of the resettlement. But there is also a series of problems in the resettlement process. Most notably, infrastructure has not been guaranteed, especially railways: only one railway is available for transportation, so most migrants still use road transport, which reduces travel efficiency [16]. Meanwhile, a small number of non-farming migrants have not only lost their jobs but also find it difficult to obtain national security benefits and subsidies. Their living conditions are not optimistic, which is a problem the local government needs to solve later [17].
Appropriately adjust the scope of protected areas
Satellite images show that the east of the Danjiangkou reservoir wetland is still surrounded by residential areas, roads and farmland [18]. These unplanned human activities are threatening the survival of the wetland, which affects the water quality of the middle route of SNWDP. The resettlement sites containing large numbers of residents can no longer meet the standard of the conservation area, and they could be transferred out of the conservation area. However, fertile but ecologically fragile areas, such as the new floodplains formed by water storage, meet the standard of the conservation area and should be protected, which could reduce the amount of pollution from the regions of human activity.
Strengthen wetland ecological development
Nowadays, the ecological benefits brought by ecological vegetation restoration projects such as returning farmland to forest and closing mountains for afforestation are increasingly obvious [7]. The area of green plants has greatly increased and the ecological structure of the water source area has been enhanced, but the fragile ecological structure of the water source region has not been completely reversed [18]. In addition to accelerating the construction of vegetation restoration projects, the wild animals in the protected areas also need to be protected [19]. Through the construction of animal rescue centers, rescue and breeding facilities, and the establishment of isolation zones, a good environment for the survival and reproduction of wild animals can be provided. At the same time, the control of invasive alien species should be strengthened [18-20].
Strengthen prevention and control of environmental pollution
First of all, the government needs to improve the level of ecological environment monitoring by establishing an ecological environment monitoring network that can capture the characteristics, functions, values and dynamic changes of the wetland resources in the Danjiangkou reservoir area [19,20]. Secondly, the government needs to reduce pollution at the source by building waste transfer stations, phasing out or renovating polluting enterprises in a planned and step-by-step manner, shutting down mineral exploitation, and banning industries with high water consumption and heavy pollution [6,18]. Most importantly, the government should adjust the agricultural structure, improve agricultural production conditions, vigorously develop ecological agriculture and popularize water-saving agriculture, in order to reduce wetland pollution from agricultural production [18].
Conclusion
This study focuses on the changes in land use in the Danjiangkou reservoir wetland area from 1999 to 2020 and discusses the causes of these changes and their influence on the area.
The study used QGIS and satellite imagery to analyze the evolution of the Danjiangkou wetland from 1999 to 2020, and discussed the reasons for the changes in the wetland and the impacts of SNWDP. From 1999 to 2020, the area of human and cultivated land decreased greatly, while the area of vegetated land and water increased on the whole. The area of human and cultivated land decreased by 19.22 and 8.97 percentage points of the whole study area, respectively. The area of water and vegetation increased by 7.39 and 20.8 percentage points, respectively. The causes of the change include natural conditions, engineering construction and policy. The Danjiangkou dam heightening directly caused the increase of water area and the decrease of human area, while ecological protection policies such as returning farmland to forest led to an increase in vegetation area.
As for the influence, the first impact is the deterioration of water quality caused by the middle route of SNWDP: the increase of the storage area of the reservoir inundated land, increasing nutrient elements in the water and leading to water pollution. The second is that the construction of the reservoir led to serious soil erosion, which affects the normal flow of river channels and causes siltation. The third is that the construction of the reservoir changed the basic pattern of local land use, mainly reflected in the increase of water and woodland area and the decrease of arable land area. The fourth is a change in the region's biodiversity. The last is that residents had to emigrate due to the construction of the middle route of SNWDP, which changed their lifestyle and income pattern. All in all, the middle route of SNWDP has had a profound impact on the Danjiangkou reservoir area.
In order to solve the current environmental problems of the Danjiangkou wetland, it is necessary to appropriately adjust the scope of protected areas, strengthen ecological construction and reduce environmental pollution. Among these countermeasures, appropriate adjustment of the protected area is the most important: it can not only fundamentally address the damage of human activities to the wetland area but also reduce unnecessary waste of environmental protection funds. Ecological construction such as wildlife rescue centers can speed up the ecological restoration of wetland areas, and reducing industrial and agricultural pollution directly alleviates the ecological pressure on wetland areas.
"year": 2022,
"sha1": "17084552eeeaab6e469804b785ef10dd41d296f4",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/1011/1/012042",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "17084552eeeaab6e469804b785ef10dd41d296f4",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
54943361 | pes2o/s2orc | v3-fos-license | Quality Management Systems and Organizational Performance: A Theoretical Review in Kenya’s Public Sector Organizations
Total Quality Management (TQM) has become an accepted technique to ensure performance and survival in modern economies. In order to facilitate quality improvement globally, the International Organization for Standardization (ISO) first published its quality management standards in 1987 and subsequently revised them in 1994, 2000 and 2008. The ISO 9001 standard is a quality management standard that embraces principles of TQM and merges organizational concerns with customer satisfaction, shareholder satisfaction, process efficiency, and employee wellbeing. The study focuses on customer satisfaction, employee engagement, productivity and management control. The paper reviewed the following theories: the Expectancy-Disconfirmation Paradigm (EDP) as the most promising theoretical framework for the assessment of customer satisfaction; Herzberg's motivation theory; systems theory, which consists of inputs, a transformation process, outputs, feedback and the environment; and Henri Fayol's administrative theory of management, which focuses on the entire organization. The study adopted a conceptual research design approach using secondary data. Conceptual analysis was used to draw inferences, after a thorough examination, from the findings of QMS studies in the public and private sectors and in both the goods and services industries. The purpose of the paper is to review the extant multidisciplinary literature on quality management and to propose a theoretical model relating quality management practices to firm performance. The paper focused on three objectives. Firstly, it sought to review the extant theoretical literature on the construct of quality management. Secondly, it identified relevant supporting theories for the construct of quality management. Thirdly, the paper proposed a theoretical model for explaining the relationship between quality management and performance in diverse environmental contexts.
Introduction
In the current business environment there is increasing pressure on firms from both consumers and competitors alike to continually innovate in new products and to upgrade the quality of existing goods and services. Quality is a pressing issue that organizations make every effort to address in order to succeed and survive in a competitive world. Currently, the main concern of any organization is to reach world-class excellence through high-quality products and services, customer satisfaction, and cost reduction with profit optimization [1]. TQM integrates fundamental management techniques and resources, and its implementation stands as both a challenge and a support to top management. Recent studies have claimed that successful implementation of TQM can generate improved products and services, as well as reduced costs, more satisfied customers and employees, and improved financial performance [2].
ISO is the world's largest developer and publisher of international standards. The ISO 9001 standard is a quality management standard that embraces principles of TQM and merges organizational concerns with customer satisfaction, shareholder satisfaction, process efficiency, and employee wellbeing [3]. ISO 9001 encompasses organizational practices across sectors, with its application being relevant to organizations regardless of the business they are in. This has made this particular certification relevant to most organizations, and it has thus gained the most popularity compared to other ISO standards. ISO 9001 provides a tried and tested framework for taking a systematic approach to managing business practices to consistently turn out quality products.
A 2012 survey of certifications [4] shows that Kenya has the highest number of organizations in East Africa achieving ISO certification. In 2012 there were 460 organizations with ISO 9001 certification (quality), 32 organizations with ISO 14001 certification (environment) and 118 organizations with ISO 22000 certification (food safety). ISO 9001 leads the pack and also has the highest rate of conversion from non-certified to certified organizations over the last 10 years. Trade is a crucial driver of growth [5]. For Kenya to achieve the double-digit economic growth envisaged in Vision 2030 [6], the country must be able to respond to local and global market demands. Kenya, like many African countries, is confronted by a myriad of challenges in improving its capacity to meet the production and quality standards which are obligatory for accessing foreign markets, especially the European Union, one of Kenya's biggest trading partners. Nicholas Stern, the former World Bank chief economist and senior vice president, notes that without addressing market access and international standards compliance issues, African firms and farmers will be unable to take full advantage of market opening initiatives. ISO standards offer Kenya convenient solutions that will not only respond to local and global market demands, but also serve as a panacea for the technological problems the country encounters. No country can successfully develop without addressing the critical issue of demand and supply of energy [6].
Kenya is in a very interesting development phase with regard to its domestic energy requirements. In the past decade the country has grappled with the challenge of unreliable, expensive and unsustainable energy use supporting a stagnating industrial and manufacturing base [6]. The recent population census held in Kenya in 2009, as reported by the Kenya National Bureau of Statistics (KNBS), shows that the Kenyan population has been increasing on average by one million per year. Currently, Kenya is struggling to meet the ever-rising energy demands of its ever-increasing population. Distribution and transmission losses remain an important issue, as the rate of loss verged on 17% in 2012. Electricity consumption reached 6,581 GWh in 2012/13, increasing by 73% from 2007/08. The high cost of energy is one of the biggest bottlenecks to economic activity in the country [7]. Kenya continues to lose out on foreign direct investment partly because of this problem, with considerable penalties on socio-economic development.
As a consequence, the economy experiences high electric power system losses estimated at 20% of net generation, extreme voltage fluctuations and intermittent power outages, at about 11,000 per month, which cause material damage and losses in production. Given the above scenario, organizations are encouraged to adopt ISO 9001 requirements in their management systems so as to improve performance [8] and avoid the losses attributed to an unreliable energy source that challenge the government's development agenda. An organization certified to ISO 9001 is expected to enhance customer satisfaction and consistently provide products that meet customer and applicable statutory and regulatory requirements. So far, the studies accessed by the researcher on quality management systems in Kenya focus on the health sector [9,10], education [11,12], the banking sector [11] and wildlife services [13]. This paper seeks to find out how quality management systems affect organizational performance through an extant multidisciplinary literature review, focusing mainly on customer satisfaction, employee engagement, productivity and management control. The paper proposes a theoretical model which scholars and researchers can test empirically in the field of quality management systems. Organizations will also benefit from the details of the paper and hence improve their quality management systems for better organizational performance.
Literature Review
TQM is an ideology focused on the satisfaction of customers' needs. Thus, most organizations try as much as possible to meet or exceed customers' expectations in their daily activities and in their long-term plans [14]. TQM requires organizations to develop customer-focused operational processes while committing resources that treat the positioning of customers, and the meeting of their expectations, as an asset to the financial well-being of the organization. Andrle [15] explains that it is necessary for organizations to maintain a close link with their customers in order to know their requirements and to measure how successful they have been in meeting those requirements. According to [16], a high level of customer satisfaction is obtained solely by providing services or products whose features satisfy customers' requirements or needs. Customers' needs and expectations serve to drive the development of new service offerings. This is because customers determine the quality level of the service delivered [17].
Oakland [18] noted that organizations are made up of a series of internal suppliers and customers. To him, this forms the quality chain of the company and implies that every employee is a potential customer and supplier in the course of production. The production process is structured such that each process has needs and expectations which must be fulfilled by others in the production network. The effective fulfillment of these needs leads to the production of quality goods and services.
In recent years, a weight of evidence has accumulated suggesting that engagement has a significantly positive impact on productivity, performance and organizational advocacy, as well as individual well-being, and a significantly negative impact on intent to quit and absenteeism from the workplace. The potential for employee engagement to raise levels of corporate performance and profitability has also been noted by government and policymakers, and has led in the UK, for instance, to the highly influential work of Engage for Success, a voluntary movement involving public, private and third-sector employers, alongside representatives from government, trade unions and professional bodies, as well as consultants and academics. The espoused aim of the movement is to provide employers with free tools, techniques and guidance on how to raise the engagement levels of workers, based on the premise that a highly engaged workforce will perform better than a disengaged one, as well as enjoying higher levels of personal well-being, thus ultimately helping bolster the UK economy [19].
Eatwell and Newman [20] defined productivity as a ratio of some measure of output to some index of input use. Put differently, productivity is nothing more than the arithmetic ratio between the amount produced and the amount of resources used in the course of production. This conception implies that productivity can be perceived as the output per unit of input, or the efficiency with which resources are utilized. In effect, productivity becomes the attainment of the highest level of performance with the lowest possible expenditure of resources. It represents the ratio of the quality and quantity of products to the resources utilized. In a nutshell, productivity is concerned with efficiency and effectiveness simultaneously. Lawlor [21] sums up productivity as a comprehensive measure of how efficiently and effectively an organization or economy satisfies five aims: objectives, efficiency, effectiveness, comparability and progressive trends. However it is perceived, productivity implies an incremental gain in what is produced as compared with the resources expended.
A performance management system (PMS) can be defined as the set of evolving formal and informal mechanisms, processes, systems, and networks used by organizations for conveying the key objectives and goals elicited by management; for assisting the strategic process and ongoing management through analysis, planning, measurement, control, rewarding, and broadly managing performance; and for supporting and facilitating organizational learning and change [22]. The main characteristics of this definition refer to the different types of mechanisms (both formal and informal), the effectiveness of strategy accomplishment, and the purpose of the PMS, i.e., enabling the organization to achieve its goals through learning and change.
Performance measurement is an integral part of all management processes and traditionally has involved management accountants through the use of budgetary control and the development of financial indicators such as return on investment. However, it has been claimed that conventional aggregate financial accounting indicators are inappropriate in TQM settings. Several authors have claimed that an important part of ensuring that TQM leads to sustained improvements in organizational profitability is the use of direct quantitative measures of manufacturing to assess the effectiveness of managers' efforts to manage the development and implementation of TQM programmes. TQM has evolved as a philosophy that emphasizes the need to provide customers with highly valued products and to do so through continuous improvement. While TQM provides the potential for organizations to enhance their competitiveness, there is evidence that many organizations have been disappointed [23]. Performance management systems are a cornerstone of human resource management practices and are the basis for developing a systems approach to organization management. In theory, a performance management system links organizational and employee goals through a goal-setting process, and subsequently links employee goal achievements to a variety of HR management decisions through a performance measurement process.
Emerging Issues
A study carried out by Kimutai [10] indicates that quality management systems (QMS) have enabled the prompt delivery of services to customers consistently, both day and night. They have also led to the consistent presence of a health workforce at service delivery points, resulting in an increased number of customers served. This is reaffirmed in a study by Karani and Bichanga [11], in which Kenya Wildlife Services (KWS) employees maintained that the organization should understand the current and future needs of customers and that business performance and customer satisfaction are enhanced by quality management practices. A study by Bichanga et al. [13] established that service delivery in most Kenyan universities was average and that ISO (International Organization for Standardization) certification affected service delivery to a great extent.
One of the ISO 9001 standard's requirements is extensive, continuous training of employees on the core knowledge, skills and competencies of their jobs [10]. Adoption of quality management systems has ensured consistent training and therefore improved the performance of health workers in their institutions. These findings agree with the results of a study by Otieno and Kinuthia [9] on Total Quality Management practices in selected private hospitals in Nairobi, Kenya, who found that training on Total Quality Management practices would go a long way towards eliminating information asymmetry, since its success is highly dependent on information dissemination and feedback across all levels of an organization. The results of the study by Bichanga et al. [13] also revealed that ISO 9001:2008 certification defines responsibilities clearly, improves communication within the universities, facilitates data gathering for management, improves the attitude of the staff, improves staff management, improves integration within the university and reduces improvisation.
A study by Mangula [24] revealed that the quality of products in manufacturing firms in Tanzania had significantly improved following the adoption of, and certification to, ISO 9001. More specifically, the findings show that product quality had improved in terms of reduced customer complaints and the ability of products to meet local and international standards.
Research findings by Oluwatoyin and Oluseun [25] attest to the benefits that accrue from the implementation of Total Quality Management as a strategic tool for an organization in the quest to remain competitive. If adequately deployed, the principle brings added value to an organization in terms of efficiency in operation, employee satisfaction, customer satisfaction, and even profitability. The findings also revealed that the relentless pursuit of improvement in service delivery brings added value to customers by keeping the organization focused on satisfying customers' needs, while teamwork and training empower employees for the continuous improvement drive of the organization. The implication of managing every facet of the organization was also revealed, as each production unit is seen to affect, and in turn be affected by, the others. That is, a dysfunction in the process of service delivery has an overall effect on the total production process, thus showing the need for a holistic approach in which every functional area is managed effectively.
Review of Supporting Theories
Oliver [26,27] proposed the Expectancy-Disconfirmation Paradigm (EDP) as the most promising theoretical framework for the assessment of customer satisfaction. The model implies that consumers purchase goods and services with pre-purchase expectations about the anticipated performance. The expectation level then becomes a standard against which the product is judged. If the outcome matches the expectation, confirmation occurs; disconfirmation occurs where there is a difference between expectations and outcomes. A customer is either satisfied or dissatisfied as a result of a positive or negative difference between expectations and perceptions.
Catherine et al. [19] examined the implications of engagement within the context of human resource development (HRD). HRD is concerned with the development of both human and social capital within the organization, and there has been increasing focus in recent years on the relevance of engagement in enhancing individual performance and the individual experience of work. They highlight the emerging definition of engagement within the HRD field as the cognitive, emotional and behavioural energy an employee directs towards positive organizational outcomes. They also highlight the role that HRD can play not only in raising levels of engagement but also in reducing levels of disengagement, through organizational development, workplace learning and career development initiatives.
The growth accounting framework acts as a mechanism for breaking down the sources of economic growth into the contributions from increases in capital, labour and other factors. When these factors have all been accounted for, what remains is usually attributed to technology. This remainder is often called the Solow residual and, in theory, if all the factors contributing towards productivity were identified and measured correctly, this residual would be zero. The growth accounting framework is the main framework used internationally and is now more than fifty years old [28,29].
Systems theory consists of five components, namely inputs, a transformation process, outputs, feedback and the environment. Inputs are the material, human, financial or information resources used to produce goods and services. The transformation process is management's use of production technology to change the inputs into outputs. Outputs include the organization's products and services. Feedback is knowledge of the results that influences the selection of inputs during the next cycle of the process. The environment surrounding the organization includes the social, political and economic forces [30]. Customer satisfaction refers to a person's feeling of pleasure or disappointment which results from comparing a product's perceived performance or outcome against his or her expectations [30].
Employee engagement is defined as a positive attitude held by the employee towards the organization and its values. An engaged employee is aware of the business context and works with colleagues to improve performance within the job for the benefit of the organization. The organization must work to develop and nurture engagement, which requires a two-way relationship between employer and employee [31].
Productivity refers to the attainment of the highest level of performance with the lowest possible expenditure of resources. It represents the ratio of the quality and quantity of products to the resources utilized [32].
Management control is defined as a systematic effort by an organization to compare performance to predetermined standards, plans, or objectives in order to determine whether performance is in line with these standards and presumably in order to take any remedial action required [22].
Organizational performance refers to the actual output or results of an organization measured against its intended goals and objectives. According to Richard et al. [33], organizational performance encompasses three specific areas of firm outcomes: financial performance, which comprises profits, return on assets and return on investment; product market performance, for example sales and market share; and shareholder return, focusing on total shareholder return and economic value added.
Conclusion and Recommendations
Overall, the extant literature indicates that the adoption of quality management systems has resulted in the prompt delivery of services and improved product quality, in terms of reduced customer complaints and the ability of products to meet local and international standards. Further studies show that ISO 9001:2008 certification defines responsibilities clearly, improves communication within universities, facilitates data gathering for management, improves the attitude of the staff, improves staff management and improves integration. ISO certification has also ensured consistent training and therefore improved performance. Quality management system implementation has positive effects on overall organizational performance, and implementation does pay off, since the accrued benefits include improved quality, employee satisfaction, productivity, employee participation, teamwork, communication, profitability and greater market share.
The idea behind the implementation of quality management systems is to ensure that adequate attention is given to quality, so as to allow an error-free transactional process and less room for customer complaints while maximizing customer satisfaction. Satisfied customers have been shown to be more willing to recommend a quality service to others. This has cost-reduction implications for organizations, which is good for business, as they will be able to compete more effectively on operating cost. In summary, quality is defined in the eyes of consumers; thus the customer-focused approach that total quality management emphasizes keeps organizations abreast of how customers define it over time.
Being a theoretical paper, the findings lack an empirical perspective. Therefore, it is highly recommended that further longitudinal research be carried out to find out why ISO 9001-certified government organizations in Kenya face challenges in meeting their customers' expectations in service delivery.
Distribution of energy in the ideal gas that lacks equipartition
The energy and velocity distributions of ideal gas particles were first obtained by Boltzmann and Maxwell in the second half of the nineteenth century. In the case of a finite number of particles, the particle energy distribution was obtained by Boltzmann in 1868. However, it appears that this distribution is not valid for all vessels. A round vessel is a special case due to an additional integral of motion, the conservation of the gas angular momentum. This paper is intended to fill this gap: it provides the exact distribution of particle energy for a classical non-rotating ideal gas of a finite number of colliding particles in a round vessel. This previously unknown distribution was obtained analytically from first principles and includes the dependence on all the particle masses. The exact mean energies of the gas particles are also found to depend on the system parameters, i.e., the distribution of energy over the degrees of freedom is not uniform. Therefore, the usual ideal gas model allows for uneven energy partitioning, which we study here both theoretically and in simple numerical experiments.
Gas in a round vessel
We will consider the two-dimensional motion of a finite number $N$ of colliding particles placed in a stationary circular vessel of radius $R$. All particles will be of round shape, with radii $r_i$ and masses $m_i$, generally different. The motion of the particles will be rectilinear and uniform, and all collisions between particles and with the vessel walls will be absolutely elastic. All particle masses and energies will be normalized to the total gas mass $M_{\mathrm{tot}}$ and energy $E_{\mathrm{tot}}$. In collisions with the walls of a stationary vessel, the energy of a particle after reflection is equal to its energy before the collision, but the momentum of the particle changes upon reflection. The angular momentum of a particle also generally changes, but the round vessel is a special case: after reflection from any point of its boundary, the particle's angular momentum $L'_i$ remains equal to its initial angular momentum $L_i$ (see Fig. 1). In particle collisions, it is also conserved. As a result, the value $L_{\mathrm{tot}} = \sum_{i=1}^{N} (y_i p_{xi} - x_i p_{yi})$ remains constant during the evolution. This distinguishes a round vessel (axisymmetric vessels in the 3D case) from all other possible vessels. Further, we will consider a non-rotating ideal gas with $L_{\mathrm{tot}} = 0$.
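The wall-reflection invariance claimed here is easy to check numerically. The following short Python sketch (our own illustration, not code from the paper) reflects a velocity about the local wall normal of a circular boundary and verifies that $L = y p_x - x p_y$ is unchanged:

```python
import numpy as np

def specular_reflection(r, v):
    """Elastic specular reflection of velocity v at wall contact point r."""
    n = r / np.linalg.norm(r)           # outward normal of a circle centred at the origin
    return v - 2.0 * np.dot(v, n) * n

def angular_momentum(r, v, m=1.0):
    """L = y*p_x - x*p_y, matching the sign convention used in the text."""
    return m * (r[1] * v[0] - r[0] * v[1])

rng = np.random.default_rng(0)
R = 1.0
for _ in range(5):
    theta = rng.uniform(0.0, 2.0 * np.pi)
    r = R * np.array([np.cos(theta), np.sin(theta)])   # contact point on the wall
    v = rng.normal(size=2)                             # arbitrary incoming velocity
    print(angular_momentum(r, v), angular_momentum(r, specular_reflection(r, v)))
```

Each printed pair is identical: the wall normal at the contact point is parallel to the radius vector, so the reflection changes only the radial momentum component, which does not contribute to $L$.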
During evolution, the point representing the state of an isolated system in phase space stays on a surface, which is called invariant. For an ideal gas in a round vessel, this surface is the intersection of the surfaces of constant energy and constant angular momentum. This makes the system non-ergodic, in the sense that the surface of constant energy is not filled densely. The only essential assumption in our theoretical consideration is that the invariant surface is instead filled densely.
Particle energy distributions
Let us consider the energy distribution of ideal gas particles in the round vessel. To derive its analytical form, we first calculated the filling density of the invariant surface. The filling density of an invariant surface of a Hamiltonian system is known to be uniform with respect to a special measure, called the ergodic measure. For the surface of constant energy, it defines the hypervolume of an elementary surface part as $d\mu = \mathrm{const}\,\frac{d\sigma}{|\mathrm{grad}\,E|}$. Similarly, for the intersection of the surfaces of constant energy and constant angular momentum, we obtain the corresponding filling density, Eq. (1). This is the probability density per elementary hypervolume $d\sigma$ (the usual Euclidean measure) of a curvilinear $(4N-2)$-dimensional hypersurface. It was calculated with the use of the Liouville theorem, which follows from the laws of particle motion. After a "projection" procedure applied to this probability density, to make integration over the phase coordinates possible, and after substitution of the explicit expressions for energy and angular momentum, we arrived at the following probability density (at $L_{\mathrm{tot}} = 0$):

$$dp = \mathrm{const}\; dx_1 \cdots dp_{y,N-1} \qquad (2)$$

where the coordinates of the gas particles are $x_1,\ldots,x_N$, $y_1,\ldots,y_N$ and the particle momenta components are $p_{x1},\ldots,p_{x,N-1}$, $p_{y1},\ldots,p_{y,N-1}$; the last two components are determined by the conservation laws. Integration of this expression over the phase variables within appropriate limits yields the desired gas distributions. All of the integration limits are finite, and they account for the fact that a particle with mass $m_1$ and energy $E$ cannot be found at any point inside the vessel: it can only be located inside a strip of limited width, oriented in the direction of the particle's momentum. Otherwise, the remaining energy will not be enough for the other particles to compensate for the angular momentum of this particle. Finally, for the energy distribution of a particle of mass $m_1$, we obtained Eq. (3). Here $J(R_2,\ldots,R_N) = \sum_{n=2}^{N} m_n R_n^2$ is the moment of inertia of the other particles, and $A$ is the normalization constant. This is the most compact representation of the exact general energy distribution that we have found. In Eq. (3), the integration over the momenta components is already done, while the integration over the coordinates remains. The area of this integration is the intersection of the interior of the rectangular region $y_1^* \in (-R, R)$, $R_n \in (0, R)$, $n \in [2, N]$ and the interior of the hypersurface $\sum_{n=2}^{N} m_n R_n^2 - \frac{E m_1}{E_{\mathrm{tot}} - E}\, y_1^{*2} = 0$. Depending on the range of energies $E$, one of these areas can be wholly or partially inside the other, or wholly include the other. In each of these cases, corresponding to different particle energy regions, the limits of integration and the resulting distribution function are different. As a result, the particle energy distribution turns out to consist of a finite number of segments, each with its own function that can be calculated explicitly.

Figure 1. A stationary round vessel of radius $R$ contains $N$ colliding particles with masses $m_i$ and radii $r_i$. The reflection of a particle from the vessel walls does not change its angular momentum, $L'_i = L_i$, due to the boundary symmetry. As a result, the total gas angular momentum is conserved, unlike in other vessels. The surface of constant energy is not filled densely; the system trajectory instead fills the invariant surface. For this reason, the behaviour of the ideal gas in the round vessel is very different.
In the case of two particles, their energy distribution is the simplest. It consists of two segments and, for the particle $m_1$, has the form of Eq. (5), where $E(k) = \int_0^{\pi/2} \sqrt{1 + k^2 \sin^2\Phi}\; d\Phi$ and $K(k) = \int_0^{\pi/2} \frac{d\Phi}{\sqrt{1 + k^2 \sin^2\Phi}}$ are complete elliptic integrals. This distribution is symmetric with respect to $E_{\mathrm{tot}}/2$, since if the first particle has energy $E$, then the energy of the second particle will be $E_{\mathrm{tot}} - E$, and the corresponding probabilities are equal.
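The complete elliptic integrals entering Eq. (5) can be evaluated by direct numerical quadrature, which sidesteps the sign-convention differences between the definitions above (with $+k^2\sin^2\Phi$) and common library conventions. A minimal Python sketch of our own:

```python
import numpy as np
from scipy.integrate import quad

def E_text(k):
    """E(k) = integral_0^{pi/2} sqrt(1 + k^2 sin^2(phi)) dphi, as defined in the text."""
    val, _ = quad(lambda phi: np.sqrt(1.0 + k**2 * np.sin(phi)**2), 0.0, np.pi / 2)
    return val

def K_text(k):
    """K(k) = integral_0^{pi/2} dphi / sqrt(1 + k^2 sin^2(phi)), as defined in the text."""
    val, _ = quad(lambda phi: 1.0 / np.sqrt(1.0 + k**2 * np.sin(phi)**2), 0.0, np.pi / 2)
    return val

# Sanity check: at k = 0 both integrals reduce to pi/2.
print(E_text(0.0), K_text(0.0))   # both ~ 1.570796
print(E_text(0.5), K_text(0.5))
```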
Some examples of the energy distributions for two particles are shown in Fig. 2a,b. The continuous lines show the theoretical distributions, and the dots show the distributions obtained via numerical simulation of the particle motion. The segments of the distribution Eq. (5) are smoothly glued together, but the behaviour of the distribution changes on passing from one segment to another. The distributions for two particles clearly demonstrate that the theoretical subdivision of the energy distribution into segments corresponds to the physical constitution of these distributions, i.e., it is not just a feature of the mathematical description. The physical reason for the change in the distribution behaviour is that, starting from a certain energy, restrictions appear on the possible combinations of motion and location of the particle.
The distribution Eq. (3) was derived for ideal gas particles, i.e., particles of negligible size. However, if the particles are large enough compared to the vessel size, for example $r_i = 10$ and $R = 30$ as in Fig. 2b, then the distribution obtained experimentally will differ significantly from Eq. (5). In other words, the particle energy distribution in a round vessel depends not only on the masses but also on the sizes of the particles. An amendment accounting for the particle sizes can also be calculated: it is necessary to integrate the integrand of Eq. (3) over the region where the particles overlap with each other or with the vessel walls, and the resulting amendment must be subtracted from the original distribution before it is normalized.
In the case of three particles of negligible sizes, the energy distribution of particle $m_1$ at $m_2 > m_3$ can be obtained explicitly as Eq. (6), where the functions $p(m)$ and $p^*(m)$ are given by Eq. (7). Typical distributions for three particles are shown in Fig. 2c,d. Each curve corresponds to the choice of one of the particles, whose mass was substituted into Eq. (6) as $m_1$, with the heaviest of the remaining particles as $m_2$ and the last one as $m_3$. For three or more particles, the subdivision of the distributions into segments is no longer obvious from the distribution's appearance, but its reasons are the same as in the case of two particles.
Examples of distributions for more particles, obtained by numerical integration of Eq. (3), are shown in Fig. 2e,f. It is clear that, as the number of particles increases, the distributions of particles with different masses approach each other and the corresponding Boltzmann distribution. The vertical dotted line in Fig. 2e,f shows the characteristic energy value $E_{\mathrm{tot}}/N$. The probability of having this energy is approximately equal for all particles. Energies below it occur more often for heavy particles, while light particles are more likely to have energies above $E_{\mathrm{tot}}/N$. As it turns out, the smaller the particle mass, the higher its average energy (at $L_{\mathrm{tot}} = 0$).
In the case of a large number of particles of comparable masses, the moment of inertia of a single particle can be neglected compared to the moment of inertia of the other particles. The energy of this particle $E$ can also be considered small compared to the energy of all other particles, $E_{\mathrm{tot}} - E$. Under these assumptions, distribution (3) coincides with the Boltzmann distribution $P_{\mathrm{Bol}} \sim (E_{\mathrm{tot}} - E)^{N-2}$ in the limit $N \to \infty$. However, this is valid only if the particle mass $m_1$ allows the term $E m_1 y_1^{*2}$ to be neglected. The distribution of a sufficiently heavy particle will still be different.
It is interesting to note that passage to the limits $N \to \infty$ and $E_{\mathrm{tot}} \to \infty$ in the classical Boltzmann distribution yields the Boltzmann distribution $p_{\mathrm{Bol}}(E) = \beta e^{-\beta E}$ for an infinite number of particles. Only here does the temperature appear, as the limit of the ratio $\lim E_{\mathrm{tot}}/N = 1/\beta = kT$.
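This limit can be illustrated numerically: normalizing the finite-$N$ distribution and setting $E_{\mathrm{tot}} = N/\beta$, the curves collapse onto $\beta e^{-\beta E}$ as $N$ grows. A small sketch of our own construction; the normalization constant $(N-1)/E_{\mathrm{tot}}$ follows from integrating $(1 - E/E_{\mathrm{tot}})^{N-2}$ over $[0, E_{\mathrm{tot}}]$:

```python
import numpy as np

beta = 1.0                       # target inverse temperature 1/kT
E = np.linspace(0.0, 5.0, 6)     # sample energies

def p_finite(E, N, beta=1.0):
    """Normalized finite-N Boltzmann distribution with E_tot = N/beta."""
    E_tot = N / beta
    return (N - 1) / E_tot * (1.0 - E / E_tot) ** (N - 2)

for N in (10, 100, 10000):
    print(N, np.max(np.abs(p_finite(E, N, beta) - beta * np.exp(-beta * E))))
# The maximum deviation from beta*exp(-beta*E) shrinks as N grows.
```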
Distribution of collision angles
Let us consider in more detail the stationary state of the gas in a round vessel. At each collision of two particles, some energy is transferred from one particle to the other. After equilibrium is established, the amount of energy received in collisions must, on average, be equal to the amount lost, for every particle; otherwise, the average energy of the particle would change in time. In a rectangular vessel, such a balance is achieved with the energy distributions of all particles being the same and, accordingly, with equal mean particle energies. Now we note that if any of the particles in the round vessel has some angular momentum $L$, then the remaining particles, in the case of a non-rotating gas, must have a total angular momentum $-L$. In other words, each of the particles moves against a counter-flow, since the gas of the remaining particles rotates in the opposite direction with respect to the selected particle. This leads to an increase in the share of frontal collisions for all particles. We consider numerically the distribution of particle collision angles, i.e., the angles between the direction of motion of the selected particle and the direction to the centre of the particle with which the collision occurred. These distributions are shown in Fig. 3. For comparison, dotted lines show similar distributions for particles in a rectangular vessel. In round and rectangular vessels, these distributions are qualitatively the same, in contrast to the energy distributions. In a round vessel, particles collide frontally more often and catch up with one another less often than in a rectangular vessel. For light and fast-moving particles, for which frontal collisions initially predominate, the increase in their share is smaller than for heavier ones. The heavier the particle, and the fewer particles in the vessel, the greater this effect.
Thus, the additional conservation law leads to a change in the distribution of particle collision angles, which in turn changes the relationship between the energy transferred in collisions and the average energies of the particles. After equilibrium is reached, it is the average energy received and the average energy lost in collisions that become equal for every particle, not the mean particle energies themselves. Due to the different distribution of collision angles, in a round vessel this balance is achieved with the mean particle energies being different.
Average particle energy, equipartition violation
We now consider how the particle mean energy depends on the system parameters. Theoretically, this energy can be obtained using the distribution $P(E)$ via Eq. (8). This is an exact expression for particles of negligible size; it gives the mean particle energy as a definite integral. Unfortunately, even in the simplest cases, such integrals cannot be calculated analytically. Therefore, an exact analytical description of the energy partitioning is possible only in this general form, in quadratures.
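As a simple, verifiable illustration of such a quadrature, one can compute the mean energy for the finite-$N$ Boltzmann distribution quoted above, $P(E) \propto (E_{\mathrm{tot}} - E)^{N-2}$; the integral reproduces the equipartition value $E_{\mathrm{tot}}/N$, the baseline from which the round-vessel result deviates. A Python sketch (ours, not the authors' code):

```python
import numpy as np
from scipy.integrate import quad

def mean_energy(N, E_tot=1.0):
    """<E> = int E P(E) dE for P(E) ~ (E_tot - E)^(N-2) on [0, E_tot]."""
    norm, _ = quad(lambda E: (E_tot - E) ** (N - 2), 0.0, E_tot)
    num, _ = quad(lambda E: E * (E_tot - E) ** (N - 2), 0.0, E_tot)
    return num / norm

for N in (3, 5, 7, 12):
    print(N, mean_energy(N), 1.0 / N)   # quadrature result vs the exact E_tot/N
```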
In a round vessel, the average energy of a particle depends on all the parameters of the system. These parameters include the number of particles and all their masses and sizes. In addition, it also depends on the total angular momentum of the gas; here, as above, we consider a non-rotating gas, $L_{\mathrm{tot}} = 0$. Among these parameters, several main ones can be distinguished, while the dependence on the rest is, in most cases, insignificant. The most important parameters (at $L_{\mathrm{tot}} = 0$) are the number of particles in the vessel, $N$, and the share $m/M_{\mathrm{tot}}$ of the selected particle's mass in the total mass of all particles $M_{\mathrm{tot}}$. Some dependencies of the particle mean energy on these parameters are shown in Fig. 4. These dependencies were constructed both by numerical integration of Eq. (8) and by modelling the particles' motion; the difference between the results obtained by these two methods was within the calculation errors. Figure 4a shows the dependence of the particle average energy $\langle E \rangle$ on its mass share $m$ for a fixed number of particles $N = (3, 5, 7, 12)$. It is evident that the smaller the mass of the particle, the greater its average energy. For light particles with masses less than the average, $m < M_{\mathrm{tot}}/N$, the average energies lie above the energy equipartition level, and vice versa for heavy particles. Figure 4b shows the change in particle average energy as the number of particles increases, provided that the mass share of the selected particle remains constant at $m = (0.01, 0.13, 0.3, 0.5)$. As expected, as the number of particles increases, the energy of each decreases approximately in inverse proportion to $N$. Lower-mass particles have higher energies for any number of particles in the vessel.
The calculated dependencies follow smooth curves and can be described by a general empirical formula of simple form. Approximately, the mean energy of an ideal gas particle of mass $m$ in a round vessel with $N$ particles (at $L_{\mathrm{tot}} = 0$) is given by Eq. (9). This formula is obviously approximate, since it takes into account only the main parameters affecting the average energy. But if there are more than three particles, and all of them have masses of the same order and small sizes, then formula (9) describes the observed distribution of energy between the particles well.
With an increase in the number of particles, their mass shares become practically equal to zero if all particles are of comparable mass. As a result, their average energies become equal. However, if a particle's mass share differs from zero, the average energy of such a massive, in a sense Brownian, particle will be lower than that of the surrounding particles, even with a large number of particles in the vessel.
Summary
Usually, it is considered that the shape of a vessel containing an ideal gas of colliding particles does not affect the gas behaviour. In this paper, we consider a special case in which the circular vessel boundary leads to unusual, previously unknown distributions in an ideal gas of a finite number of particles. The mechanism of influence of the vessel's shape is connected with the conservation of the gas angular momentum. Even in the absence of a circular gas flow, it changes the distribution of particle collision angles and all the other gas distributions.
We have obtained exact analytical expressions for the distribution of energy of ideal gas particles in a round vessel. In compact form, it is presented as the definite integral Eq. (3). This expression is valid at $L_{\mathrm{tot}} = 0$ for particles of negligible size but of any number and mass. The explicit form of the energy distribution is complicated; we provide it for two and three particles for reference. As the number of particles increases, their energy distributions tend to the Boltzmann distribution. But the fewer particles in the vessel, and the greater the spread of their masses, the more significantly the energy distribution of ideal gas particles in a round vessel differs from the classical Boltzmann distribution.
The key difference of the round vessel is the uneven distribution of energy between gas particles. The exact theoretical expression for the mean particle energy is the definite integral Eq. (8). There are two main parameters affecting the mean particle energy: the mass share of the particle and the number of particles in the vessel. The approximate dependence on these parameters is given by the simple formula Eq. (9). The greater the mass associated with a degree of freedom, the smaller its average energy will be. Even with a large number of particles, the mean energy of a sufficiently massive particle will be lower than that of the surrounding particles. These theoretical results have been rigorously derived and checked in numerical simulation experiments. They agree with numerous reports of equipartition violations in different systems, and we expect them to be general for systems lacking energy equipartition.

Figure 4. Dependence of the mean energy of the selected particle: (a) on its mass $m$ for a fixed number of particles in the vessel; (b) on the number of particles $N$ for a fixed mass of the selected particle. All other particles had equal masses. The dotted lines show the energy equipartition level. The dots show the average energy according to both Eq. (8) and the simulation of particle motion; the solid lines show the approximate formula Eq. (9). The dependence of the particle's mean energy on its mass is clearly visible, in contrast to energy equipartition.
Data availability
The details of the analytical or numerical calculations are available from the corresponding author on reasonable request.
A Biodegradable Antifungal-Loaded Sol–Gel Coating for the Prevention and Local Treatment of Yeast Prosthetic-Joint Infections
Fungal prosthetic-joint infections are rare but devastating complications following arthroplasty. These infections are highly recurrent and expose the patient to the development of candidemia, which has high mortality rates. Patients with this condition are often immunocompromised and present several comorbidities, and thus pose a challenge for diagnosis and treatment. The most frequently isolated organisms in these infections are Candida albicans and Candida parapsilosis, pathogens that initiate the infection by developing a biofilm on the implant surface. In this study, a novel hybrid organo–inorganic sol–gel coating was developed from a mixture of organopolysiloxanes and organophosphite, to which different concentrations of fluconazole or anidulafungin were added. Then, the capacity of these coatings to prevent biofilm formation and to treat mature biofilms produced by reference and clinical strains of C. albicans and C. parapsilosis was evaluated. Anidulafungin-loaded sol–gel coatings were more effective in preventing C. albicans biofilm formation, while fluconazole-loaded sol–gels prevented C. parapsilosis biofilm formation more effectively. Treatment with the unloaded sol–gel was sufficient to reduce C. albicans biofilms, and the sol–gels loaded with fluconazole or anidulafungin slightly enhanced this effect. In contrast, unloaded coatings stimulated C. parapsilosis biofilm formation, and loading with fluconazole reduced these biofilms by up to 99%. In conclusion, these coatings represent a novel therapeutic approach with potential clinical use to prevent and treat fungal prosthetic-joint infections.
Introduction
Prosthetic-joint infections (PJIs) are a highly debilitating complication affecting the joint prosthesis and adjacent tissue that occurs in approximately 1-3% of patients who undergo total arthroplasty [1,2]. The majority of these infections are caused by bacteria, while only 1% are caused by fungi [3]. Fungal PJIs in particular are highly persistent and recurrent, exposing patients to the development of candidemia, a severe complication with a high rate of mortality (30-60%) [4]. These infections are mostly caused by Candida species, with Candida albicans (C.P. Robin) Berkhout and Candida parapsilosis Langeron & Talice being the most frequently isolated species [5,6], along with other filamentous fungi such as Coccidioides immitis C.W. Stiles [7] or Aspergillus spp. [8]; in a small percentage of cases there may be a concomitant bacterial infection [3]. When the fungus encounters the implant surface, it develops a biofilm.
Surface Characterization
The surface morphology and composition of the as-prepared coatings were assessed by scanning electron microscopy (SEM) and energy-dispersive spectrometry (EDS), and the homogeneity of the applied coatings was analyzed. The study was performed using a Teneo FEI tungsten-filament electron microscope (Field Electron and Ion Company, FEI, OR, USA) equipped with an X-ray microanalysis system and an Octane Plus detector with a 30 mm² active area. Images were taken at 2 kV and 0.2 nA at 1000× magnification, using the circular backscatter (CBS) detector.
Kinetics Study of Antifungal Release
These experiments were performed using coatings F100 and A100. Briefly, the release analyses of both antifungals were based on multiple absorbance measurements using a JASCO V-650 UV-vis absorption spectrophotometer (Jasco Deutschland GmbH, Pfungstadt, Germany). Coatings were exposed to 5 mL of Dulbecco's phosphate-buffered saline (PBS) solution (pH 7.4) (Sigma-Aldrich, St. Louis, MO, USA) at 37 °C in polypropylene tubes. Three samples of each coating were used for the study. Fluconazole and anidulafungin release was monitored by measuring the maximum absorbance of each antifungal (261 nm and 303 nm, respectively) at different times (2, 4, 6, 12, 24, and 48 h). For each measurement, 3 mL aliquots were extracted and transferred to a 3 mL quartz cuvette (10 mm path length, Hellma GmbH, Müllheim, Germany). The drug concentration corresponding to each absorbance value was calculated from calibration curves for fluconazole and anidulafungin in PBS prepared beforehand. Calibration was performed by varying the amount of each antifungal between 0.1 × 10⁻³ mg and 0.1 mg. The concentration of each sample was then normalized considering the dilution and the corresponding calibration curve. The calibration curves were linear over the measured concentration range, with R² = 0.9999 for fluconazole and R² = 0.9990 for anidulafungin.
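The absorbance-to-amount conversion described here is an ordinary linear calibration. A minimal Python sketch of that step; the calibration points below are placeholders, not the measured data:

```python
import numpy as np

# Hypothetical calibration points: known drug amounts (mg) vs measured absorbance.
amounts = np.array([1e-4, 1e-3, 1e-2, 5e-2, 1e-1])
absorbances = np.array([0.002, 0.021, 0.208, 1.040, 2.080])

# Least-squares fit of absorbance = slope * amount + intercept.
slope, intercept = np.polyfit(amounts, absorbances, 1)

# Coefficient of determination, analogous to the reported R^2 values.
pred = slope * amounts + intercept
r2 = 1.0 - np.sum((absorbances - pred) ** 2) / np.sum((absorbances - absorbances.mean()) ** 2)

def amount_from_absorbance(A, dilution_factor=1.0):
    """Invert the calibration line and correct for any sample dilution."""
    return (A - intercept) / slope * dilution_factor

print(f"R^2 = {r2:.4f}; absorbance 0.5 -> {amount_from_absorbance(0.5):.4g} mg")
```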
Selection and Maintenance of Strains for Microbiological Study
Experiments were performed using the C. albicans reference strain ATCC 10231 from the American Type Culture Collection, together with two C. albicans clinical isolates, Cal1 (isolated from a catheter infection) and Cal35 (isolated from a hip PJI), and the C. parapsilosis reference strain ATCC 22019 plus two clinical isolates, κ1 (isolated from a case of otitis) and κ4 (isolated from a hip PJI). Clinical isolates were identified by MALDI-TOF mass spectrometry using the Vitek MS system (database IVD V3.0) (BioMérieux, Marcy-l'Étoile, France). All strains were kept frozen at −80 °C until the experiments were performed, and were then maintained at 37 °C on Sabouraud gentamicin chloramphenicol agar plates (SGC2) (BioMérieux, Marcy-l'Étoile, France).
Adherence Study
To evaluate the adherence of the strains to the sol-gel without added antifungals, the bottom of each well of a six-well polystyrene plate (Thermo Fisher Scientific, Waltham, MA, USA) was coated with 100 µL of sol-gel (P2) and cured at room temperature for at least 24 h. Next, 3 mL of a yeast suspension in sterile saline solution (SS), diluted to a final concentration of 0.5 McFarland (0.5-2.5 × 10⁵ colony-forming units (CFU)/mL), was added to each well, and the plate was incubated for 2 h at 37 °C and 5% CO₂. After incubation, the wells were washed twice with 2 mL of SS, another 2 mL of SS was added to each well, and the plate was sonicated for 5 min at 50-60 Hz. Following sonication, the yeasts that had adhered to the coating were quantified by the drop plate method [30]. As a positive control, the experiment was performed in uncoated wells.
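For reference, the drop plate method estimates viable counts from colonies grown from small drops of serial dilutions. A hedged sketch of the underlying arithmetic; the drop volume and counts are illustrative, not the study's data:

```python
def cfu_per_ml(colony_counts, dilution_factor, drop_volume_ml=0.01):
    """Estimate CFU/mL from colonies counted in replicate drops of one dilution.

    colony_counts   -- colonies counted in each plated drop
    dilution_factor -- e.g. 1e3 for a 10^-3 serial dilution
    drop_volume_ml  -- volume of each plated drop (10 uL is a common choice)
    """
    mean_count = sum(colony_counts) / len(colony_counts)
    return mean_count * dilution_factor / drop_volume_ml

# Example: 12, 15 and 14 colonies per 10 uL drop at a 10^-3 dilution.
print(f"{cfu_per_ml([12, 15, 14], 1e3):.3g} CFU/mL")   # ~1.37e6 CFU/mL
```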
Evaluation of Biofilm Formation Inhibition
To evaluate the inhibition of biofilm formation, all sol-gel formulations were deposited by dip-coating, as described previously [24], on Ti sample pieces of 15 mm diameter × 25 mm thickness prepared by a conventional powder metallurgy route. The coatings were then dried at 60 °C for one hour in an oven. The coated Ti substrates were placed in wells of a 12-well plate (Sigma-Aldrich, St. Louis, MO, USA) with 3 mL of a yeast suspension diluted to a final concentration of 0.5 McFarland in Roswell Park Memorial Institute (RPMI) 1640 medium (Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 2% glucose and buffered with 0.165 mol/L 3-(N-morpholino)propanesulfonic acid (MOPS) (Sigma-Aldrich, St. Louis, MO, USA) at pH 7.0, and incubated at 37 °C in 5% CO₂ for 24 h for C. albicans strains [31] and 48 h for C. parapsilosis strains [32]. After incubation, the Ti pieces were washed three times in SS, and the sol-gel coatings were scraped off using sterile wooden sticks, which were then sonicated in 10 mL of SS. CFU/cm² values were then estimated by the drop plate method. Additionally, non-adherent planktonic yeasts remaining in the incubation medium were estimated by absorbance at 530 nm, using RPMI medium alone as a negative control.
Evaluation of the Treatment of Mature Biofilms
Biofilm formation was induced by inoculating 200 µL/well of a yeast suspension diluted to a final concentration of 0.5 McFarland in RPMI 1640 + 2% glucose + MOPS into untreated, flat-bottomed, 96-well FluoroNunc Black polystyrene microtiter plates (Thermo Fisher Scientific, Waltham, MA, USA), incubated for 48 h at 37 °C and 5% CO₂. After incubation, the medium was removed, 200 µL of fresh medium was added to each well, and the lid of the plate was replaced by an MBEC™ biofilm incubator lid (Innovotech, Edmonton, AB, Canada); the previous day, the lid pegs had been coated by dipping the lid into wells filled with 200 µL of each sol-gel formulation (negative control, P2, F/A50, F/A75, and F/A100; n = 8 for each), followed by incubation at 37 °C in 5% CO₂ for 48 h. Biofilm viability was then determined by adding 10 µL of AlamarBlue® (BIO-RAD, Hercules, CA, USA) to each well and incubating the plate with gentle shaking (70 rpm) for 3 h at 37 °C. Fluorescence was measured in a Perkin Elmer EnSpire® Multimode Reader (Perkin Elmer, Waltham, MA, USA) using an excitation wavelength of 570 nm and an emission wavelength of 585 nm. Fresh RPMI 1640 medium alone was used as a negative control, and biofilms grown alone were used as positive controls.
Cellular Study
MC3T3-E1 cells were seeded at a concentration of 10,000 cells/cm² in 96-well plates with α-minimum essential medium (αMEM, Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA) containing 10% fetal bovine serum and 1% penicillin-streptomycin, and were incubated at 37 °C and 5% CO₂ overnight. After cell adherence, the medium was replaced by αMEM with 50 mg/mL ascorbic acid (Sigma-Aldrich, St. Louis, MO, USA) and 10 mM β-glycerol-2-phosphate (Sigma-Aldrich, St. Louis, MO, USA) to promote osteoblastic differentiation, and the lid of the plate was replaced with an MBEC™ biofilm incubator lid (Innovotech, Edmonton, AB, Canada); the previous day, the lid pegs had been coated by dipping the lid into wells filled with 200 µL of each sol-gel formulation (negative control, P2, F/A50, F/A75, and F/A100; n = 8 for each), followed by incubation at 37 °C in 5% CO₂ for 48 h. After incubation, cytotoxicity was tested with the CytoTox 96® Non-Radioactive Cytotoxicity Assay (Promega, Madison, WI, USA). Cell proliferation was determined by adding AlamarBlue® solution (BIO-RAD, Hercules, CA, USA) at 10% (v/v) to the cell culture at 48 h of growth. Fluorescence intensity was measured with excitation and emission wavelengths of 540 and 600 nm, respectively, in a Tecan Infinite 200 reader (Tecan Group Ltd., Männedorf, Switzerland).
Statistical Analysis
All statistical analyses were performed using the Stata software, release 11 (StataCorp, College Station, TX, USA). First, the normality of each data series was checked with the Shapiro-Wilk test. Results from the drug-release experiments were analyzed using the non-parametric Kruskal-Wallis test. Results from the adherence study and the evaluation of the inhibition of biofilm formation were analyzed using the Wilcoxon test to compare each group with the control. The analyses of the percentage of biofilm viability and of the cellular study were performed using Student's t-test to compare each group with the controls. For the analysis of antifungal release, the data were fitted to a linear regression model, and a non-parametric Kruskal-Wallis test was employed to compare drug release between the different time points. A significance level of 0.05 was used in all tests. All data are presented as mean and standard deviation for normally distributed results and as median and interquartile range for non-normal results. All experiments were performed with at least three biological replicates.
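The decision rule described here (normality check, then parametric or non-parametric comparison) maps directly onto standard SciPy calls. A minimal sketch with placeholder data (the study itself used Stata 11, and the exact Wilcoxon variant is not specified; the rank-sum/Mann-Whitney form is assumed here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(5.0, 1.0, 8)   # placeholder replicate measurements
treated = rng.normal(3.5, 1.0, 8)

# 1. Normality check (Shapiro-Wilk) for each series.
normal = all(stats.shapiro(x).pvalue > 0.05 for x in (control, treated))

# 2. Two-group comparison: t-test if normal, otherwise a non-parametric test.
if normal:
    p = stats.ttest_ind(control, treated).pvalue
else:
    p = stats.mannwhitneyu(control, treated).pvalue

# 3. Kruskal-Wallis across several release time points.
t2, t4, t6 = rng.normal(10, 1, 3), rng.normal(28, 1, 3), rng.normal(30, 1, 3)
p_kw = stats.kruskal(t2, t4, t6).pvalue
print(f"group comparison p = {p:.3f}, Kruskal-Wallis p = {p_kw:.3f}")
```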
Synthesis of the Coatings and Surface Characterization
In all cases, the obtained sol was transparent and showed no phase separation. The sols had an adequate viscosity, facilitating correct application onto the substrate. The dried coatings, inspected by the naked eye, showed no imperfections such as cracks or pores. A more thorough inspection was performed using SEM. Figure 1 shows SEM images acquired with the CBS detector to study the composition of the substrate and of the coatings F100 and A100 applied onto the titanium substrate. In addition, the figure shows element mapping by EDS to study the distribution of the elements. Inspection of the surfaces showed the formation of smooth, uniform, homogeneous, and crack-free coatings on the substrates for these formulations.
Drug-Release Study
The release of fluconazole followed linear behavior between 0 and 4 h (R² = 0.9116), with a release constant of 7.44 µg/h, reaching 30 µg at 4 h. From 6 to 48 h, the release of fluconazole stabilized and remained constant over time (p = 0.478, Kruskal-Wallis test). Anidulafungin release followed a similar pattern: most of the drug was released within the first 4 h (R² = 0.7286), but with a much lower release constant of 0.899 µg/h, reaching up to 7 µg and remaining constant from 6 to 48 h (p = 0.478, Kruskal-Wallis test) (Figure 2).

Figure 2. Kinetics of fluconazole (red) and anidulafungin (blue) released from the F100 and A100 sol-gels over time. Data represent the median and interquartile range of the amount of drug (in µg) measured in three replicates.
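The reported release constants correspond to the slope of a straight line fitted to the cumulative amount released over the first hours. A sketch of that fit using illustrative points shaped like the fluconazole curve (not the actual measurements):

```python
import numpy as np

# Illustrative cumulative release (ug) at 0, 2 and 4 h, resembling fluconazole.
t = np.array([0.0, 2.0, 4.0])
released = np.array([0.0, 16.0, 30.0])

slope, intercept = np.polyfit(t, released, 1)   # slope = release constant in ug/h
pred = slope * t + intercept
r2 = 1.0 - np.sum((released - pred) ** 2) / np.sum((released - released.mean()) ** 2)
print(f"release constant ~ {slope:.2f} ug/h, R^2 = {r2:.4f}")
```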
Adherence Study
The presence of the sol-gel coating (P2) slightly decreased the adhesion of the C. albicans strains (p = 0.0495). In contrast, there was no significant overall effect on C. parapsilosis (p = 0.1266), although P2 promoted the adherence of the C. parapsilosis clinical strains 39- and 66-fold for κ1 and κ4, respectively, in comparison with the uncoated control (p = 0.04) (Figure 3).
Prevention of Biofilm Formation
An evaluation of the non-adherent yeasts showed that loading with fluconazole was effective in reducing planktonic yeasts by up to 75% in all strains (Figure 4A). Loading with anidulafungin showed a marked concentration-dependent effect in the C. albicans strains, though the same was not observed with the C. parapsilosis strains; in both cases, A100 produced the greatest reduction of planktonic yeasts (Figure 4B). Comparing the two antifungals, fluconazole caused a greater reduction in planktonic yeasts than anidulafungin, especially in the C. parapsilosis strains, in which anidulafungin did not produce a significant reduction. When evaluating the prevention of biofilm formation, antifungal loading showed a concentration-dependent effect: coatings loaded with the maximum concentrations of fluconazole (F100) or anidulafungin (A100) caused the greatest inhibition of biofilm development, inhibiting both species by up to 60% (Figure 5). Moreover, loading with anidulafungin prevented biofilm formation more efficiently than fluconazole in the C. albicans strains (Figure 5C), while the latter was more efficient against the C. parapsilosis strains (Figure 5B).
Evaluation of Treatment of Mature Biofilms
The results of this experiment showed that the presence of the unloaded coating (P2) was sufficient to produce a significant decrease in the biofilm viability of the C. albicans strains (p < 0.001), and the addition of antifungals contributed synergistically to this effect (Figure 6A). The addition of anidulafungin had a significant effect only on the C. albicans reference strain, producing a greater decrease in biofilm viability than P2 alone. This effect was not significant in the clinical strains, and no concentration-dependent trend was observed (Figure 6C).
In contrast, P2 induced slight biofilm production in the C. parapsilosis strains, and the presence of antifungals had a differential effect: fluconazole was much more effective than anidulafungin, reducing biofilms by up to 99% in the C. parapsilosis reference and κ4 strains (Figure 6B,D).
Cytotoxicity and Proliferation Assays
No significant effects of the coatings on cytotoxicity or cell proliferation were observed (p > 0.05 in all cases) (Figures 7 and 8).
Discussion
In this work, the research group describes the capacities of a hybrid sol-gel coating loaded with different concentrations of fluconazole or anidulafungin to prevent and/or treat fungal PJI.
First, adhesion experiments showed that P2 reduced C. albicans yeast adhesion and promoted the adherence of C. parapsilosis yeasts by up to 65-fold. Since this effect is the result of the hydrophobicity of P2, which facilitates the adherence of yeasts through London-van der Waals forces (commonly designated as hydrophobic interaction) [33], the chemistry of the sol-gel degradation could be the cause of the lower adhesion of C. albicans. When placed in aqueous solution, water hydrates the net of the sol-gel, swelling the coating before hydrolytic degradation begins. Hydrolytic degradation of the coatings is based on a depolymerization reaction that can be considered the opposite reaction of polycondensation [24]; hence, yeast attachment could be affected as the surface is being continually remodeled on a nanometric scale. In addition, several factors could contribute to the observed differences between both species. For instance, C. parapsilosis displays higher surface hydrophobicity than C. albicans, thus increasing its ability to adhere to hydrophobic surfaces [34,35]. Moreover, it has been reported that C. parapsilosis strains display high genomic variability, with a high percentage of them having numerous repetitions of orthologous genes to C. albicans Agglutinin-like sequence (ALS) genes. ALS genes encode adhesins, which are responsible for the initial adhesion of yeasts to biotic and abiotic surfaces [36]. In contrast, C. albicans strains maintain a lower and stable copy number of ALS genes [37].
Second, antifungal-loaded coatings effectively prevented biofilm formation of both species in a concentration-dependent manner: A100 was more effective against C. albicans strains while F100 was more effective against C. parapsilosis strains. The higher tolerance of C. parapsilosis strains to echinocandins is well characterized and is due to a sequence variation present in the hot-spot region 1 of the glucan synthase, which decreases its drug sensitivity [38]. Moreover, the evaluation of non-adhered yeasts showed that F100 was the most effective formulation against both species, reducing planktonic yeasts by more than 75% in all strains. This effect could be due to a lower release of the anidulafungin from the molecular framework of sol-gel during hydrolytic degradation due to the larger size of the drug molecules. The hypothesis was confirmed by evaluating the drug release from the coatings after introducing them in aqueous solution. The results showed that the release of fluconazole was much more efficient, with a release constant of 7.44 µg/h within the first 4 h, while anidulafungin showed a release constant of 0.899 µg/h. According to the distribution of Minimum Inhibitory Concentrations published for C. albicans and C. parapsilosis [39], the amount of fluconazole released would be sufficient to inhibit the growth of C. albicans and C. parapsilosis, while the amount of anidulafungin released would be effective in inhibiting C. albicans but not C. parapsilosis, which is in concordance with the results obtained in this work.
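As a quick plausibility check of the release figures above, the cumulative amounts delivered in the early hours follow directly from the reported release constants. A minimal sketch in base R, assuming an approximately linear (zero-order) early-release phase, which is our simplification rather than a fitted kinetic model:

    release_rate <- c(fluconazole = 7.44, anidulafungin = 0.899)  # ug/h, values reported above
    hours <- 4                                                    # window over which the fluconazole constant was estimated
    cumulative_ug <- release_rate * hours                         # assumes linear release over this window
    cumulative_ug                                                 # fluconazole ~29.8 ug vs anidulafungin ~3.6 ug

On this rough accounting, roughly an order of magnitude more fluconazole than anidulafungin reaches the medium early on, consistent with the differential activity against planktonic yeasts described above.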
In addition, taking the results from the analysis of planktonic yeasts and the inhibition of biofilm formation together, the higher adherence of C. parapsilosis strains can also be observed indirectly: the estimation of non-adherent C. albicans yeasts showed higher absorbance than C. parapsilosis yeasts (0.6 versus 0.5), while C. parapsilosis CFU/cm2 counts were between 5- and 7.5-fold higher than C. albicans counts.
Third, biofilm treatment studies showed that the presence of the unloaded coating decreased biofilm viability in C. albicans strains, and the addition of antifungals contributed to this effect but was not significant in clinical strains. In contrast, in C. parapsilosis strains, the presence of P2 significantly induced biofilm formation, and loading with fluconazole dramatically reduced biofilm viability, while loading with anidulafungin was not effective. This effect may be related to the hydrolytic degradation of the coatings. During sol-gel hydrolytic degradation, phosphate ions are released into the medium, and these are virulence- and morphogenesis-modulating factors in some yeasts such as C. albicans and C. glabrata [40,41].
Fourth, higher values of absorbance, CFU/cm2 and biofilm formation were obtained in clinical strains regardless of species, consistent with the assumption that clinical strains display a greater capacity to form biofilms than reference strains [42]. This underscores the importance of including clinical strains in this type of study, as they tend to behave differently from reference strains in terms of biofilm production capacity and antimicrobial resistance mechanisms.
Last, no significant effects were found on cytotoxicity and proliferation. This contrasts with previous works in which fluconazole-loaded coatings displayed slight cytotoxicity [28]. However, those coatings were not functionalized with phosphorous compounds. Taking into account that the presence of organophosphate enhances cellular proliferation [24,43], it may offset the slight cytotoxicity of fluconazole, which would explain the absence of cytotoxicity of the fluconazole-loaded coatings observed in this work.
Conclusions
The coatings loaded with the highest concentration of antifungals showed excellent anti-biofilm behavior. Therefore, these coatings could be a useful tool for locally preventing and treating yeast PJI. In particular, the coatings loaded with fluconazole proved to be effective against both Candida species tested; nevertheless, thanks to the versatility offered by sol-gel technology, other drugs and combinations could be tested, aiming for a more personalized treatment. | 2020-07-16T09:02:49.381Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "024802d8301fdfa3ffbf9ecbae267eb3d442514a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ma13143144",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "36e00f1910e8a7e401dcd7e90ec498be39a2ac43",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
243863442 | pes2o/s2orc | v3-fos-license | Prion protein polymorphisms in Michigan white-tailed deer (Odocoileus virginianus)
ABSTRACT Chronic Wasting Disease (CWD), a well-described transmissible spongiform encephalopathy of the Cervidae family, is associated with the aggregation of an abnormal isoform (PrPCWD) of the naturally occurring host prion protein (PrPC). Variations in the PrP gene (PRNP) have been associated with CWD rate of infection and disease progression. We analysed 568 free-ranging white-tailed deer (Odocoileus virginianus) from 9 CWD-positive Michigan counties for PRNP polymorphisms. Sampling included 185 CWD-positive, 332 CWD non-detected, and an additional 51 CWD non-detected paired to CWD-positives by sex, age, and harvest location. We found 12 polymorphic sites, of which 5 were non-synonymous and resulted in a change in amino acid composition. Thirteen haplotypes were predicted, of which 11 have previously been described. Using logistic regression, consistent with other studies, we found haplotypes C (OR = 0.488, 95% CI = 0.321–0.730, P < 0.001) and F (OR = 0.122, 95% CI = 0.007–0.612, P < 0.05) and diplotype BC (OR = 0.340, 95% CI = 0.154–0.709, P < 0.01) were less likely to be found in deer infected with CWD. As has also been documented in other studies, the presence of a serine at amino acid 96 was less likely to be found in deer infected with CWD (P < 0.001, OR = 0.360 and 95% CI = 0.227–0.556). Identification of PRNP polymorphisms associated with reduced vulnerability to CWD in Michigan deer and their spatial distribution can help managers design surveillance programmes and identify and prioritize areas for CWD management.
Introduction
Chronic Wasting Disease (CWD), a well-described, fatal, transmissible spongiform encephalopathy of the Cervidae family, is associated with the aggregation of an abnormal isoform (PrPCWD) of the naturally occurring host prion protein (PrPC) [1][2][3]. First characterized in 1980 based on clinical and pathological findings in Colorado captive mule deer [2], CWD has since spread within the United States, been found in Canada and Europe, and been detected in imported cervids in Korea [4][5][6][7].
CWD prevalence in free-ranging cervid populations has been found to be as high as 35% [18], with population-level impacts seen at prevalence as low as 13% [19][20][21]. Cervid populations not only provide social and cultural benefits through hunting and viewing and ecological contributions to biodiversity, they also serve as a financial keystone for conservation and management, making their potential decline of considerable management concern.
CWD was first detected in wild white-tailed deer (Odocoileus virginianus) in Michigan in 2015 through opportunistic passive surveillance, 6 years after the state's first detection in a captive herd. Since 2015, Michigan has invested in intensive surveillance through localized culling and hunter-assisted sampling, and CWD has been detected in 9 counties at the time of this study. We examined the current frequency of PRNP polymorphisms among CWD-positive and non-detected deer in 9 CWD-positive Michigan counties, one county in the Upper Peninsula and 8 contiguous counties in central Michigan. We tested for an association between CWD status and PRNP polymorphisms and hypothesized that PRNP polymorphisms associated with reduced CWD infection are present in Michigan white-tailed deer.
Results
PRNP sequences were determined for 568 free-ranging white-tailed deer from 9 CWD-positive Michigan counties. Of these samples, 185 were CWD-positive, 332 were CWD non-detected, and an additional 51 CWD non-detected were paired to CWD-positives to control for sex, age, and harvest location (Figure 1). Within the analysed 625-bp region of the PRNP gene, we detected 12 single nucleotide polymorphisms (SNPs), 9 of which had been previously reported [22,29,33,36,[38][39][40][41]. Of the 12 SNPs, 5 were non-synonymous, resulting in a change to the amino acid sequence (Table 1). BLAST and literature searches indicated that 589A/G, 642G/A, and 643C/A had not previously been reported. Full associated sequences have been deposited in GenBank under accession numbers MZ913400 and MZ913401. Thirteen haplotypes were predicted from the 12 SNPs, 11 of which have previously been described [22,40,41]. Of the 13 haplotypes, B was most common (n = 368) and was used as the reference in logistic regression. Haplotypes J and MI-1 were found only in CWD non-detected deer, precluding them from analysis. Haplotypes C (OR = 0.488, 95% CI = 0.321-0.730, P < 0.001) and F (OR = 0.122, 95% CI = 0.007-0.612, P < 0.05) were less likely to be found in deer infected with CWD (Table 2).
We identified 49 diplotypes with AB being the most common (n = 89) and used this as the reference in logistic regression. Twenty-four diplotypes were found only in positive or non-detected deer, precluding them from analysis. Of the remaining 25 diplotypes, BC (OR = 0.340 and 95% CI = 0.154-0.709, P < 0.01) was less likely to be found in deer infected with CWD (Table 3).
Three genotypes at aa96 were observed; aa96GG was most common (n = 387) and was used as the reference in logistic regression. aa96GS was less likely to be found in deer infected with CWD (OR = 0.360 and 95% CI = 0.227-0.556, P < 0.001; Table 4; Figure 2); however, we did not detect a reduced likelihood of infection for homozygous individuals (aa96SS). Two genotypes at aa95 were observed, aa95QQ was most common (n = 552). No significant associations were seen for genotype at aa95 and CWD infection.
Among the case-controlled samples, the presence of one C haplotype or one serine at aa96 was confirmed to be associated with reduced CWD infection by 0.191 (95% CI = 0.065-0.555, P < 0.01) and 0.182 (95% CI = 0.063-0.528, P < 0.01), respectively. As with the full dataset comparison, no evidence for protection was seen in homozygous CC or aa96SS individuals.
The ratio of haplotypes C and F, diplotype BC, and genotype aa96GS (associated with reduced susceptibility) relative to non-protective haplotypes, diplotypes and genotypes, respectively, were compared among the nine studied counties. Pairwise comparisons using Fisher's exact tests failed to detect significant differences among counties in the distribution of protective genetic types after p-values were corrected for multiple comparisons.
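A minimal sketch in R of this type of comparison, assuming per-county counts of protective versus non-protective types are available; the counts and the Holm correction shown are illustrative placeholders, not the study's data or necessarily its exact adjustment method:

    county_pair <- matrix(c(12, 88, 9, 91), nrow = 2, byrow = TRUE,
                          dimnames = list(c("countyA", "countyB"),
                                          c("protective", "non_protective")))  # hypothetical counts
    fisher.test(county_pair)$p.value   # one pairwise test
    # collect one p-value per county pair (choose(9, 2) = 36 pairs), then correct:
    # p.adjust(pairwise_pvals, method = "holm")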
Discussion
This is the first examination of PRNP variation for a wild white-tailed deer population in Michigan. We established baseline frequencies of PRNP genotypes, haplotypes, and diplotypes in nine known CWD-positive counties. We found aa96GS, haplotypes C and F, and diplotype BC to be less frequent in CWD-positive deer, consistent with other studies [22,[24][25][26][27][29][30][31][32][33][34][35]. The results of our analyses of paired samples controlling for potential confounding variables of age, sex, and harvest location further reinforce the finding of an association between haplotype C and the presence of a serine at aa96 with reduced vulnerability to CWD. While the C haplotype and aa96S were less frequent in CWD-positive deer, we did not find evidence to support a reduced vulnerability to CWD for homozygotes. Previous work has also failed to detect a reduced likelihood of infection among aa96SS individuals [22]. In the current study, this could be indicative of a biological process due to strain type, or an artefact of the low prevalence of aa96SS reducing our power to detect an effect. To account for strain-type differences, these results should be used to target aa96GG, aa96GS, and aa96SS CWD-positive individuals for inclusion in strain-type assessments. And with increased sampling over time we may produce a greater proportion of aa96SS individuals for evaluation.
Annual apparent CWD prevalence between 2015 and 2019 varied across the 9 positive counties, with the highest prevalence of 1.95% seen in Kent County in 2019 (Table 5). We found no statistically significant differences in PRNP genotype or haplotype frequencies among counties. Given that deer in some counties in Michigan seem to have higher CWD prevalence than others, it will be of interest to monitor the potential selective impacts of CWD across these areas. While no PRNP types have been associated with complete resistance, the presence of aa96S has been associated with slower disease progression and longer survival post-infection [30,31]. Longer survival may provide deer with aa96S a selective advantage, leading to changes in PRNP frequencies in wild populations over time [27]. Our data show the current localized prevalence of G96S (28.7%) to be similar to that reported for white-tailed deer in Wyoming [38] (20%), but higher than in Illinois [33] (13.8%) and in northern Illinois and southern Wisconsin [22] (11%). Our characterization of PRNP frequencies, presumably relatively early in the disease's occurrence in Michigan, provides a baseline for monitoring selective effects of CWD on PRNP frequencies and white-tailed deer population characteristics over time and should be used in disease modelling efforts to map risk and rate of spread.
Table 2. PRNP haplotype frequency (f) and count for chronic wasting disease positive (+) and non-detected (-) white-tailed deer (Odocoileus virginianus) from 9 CWD-positive Michigan counties. Odds ratios and 95% confidence intervals are shown for significant variables (P < 0.05) determined by logistic regression against the most frequent haplotype, B. Asterisks indicate haplotypes found in only positive or non-detected deer, precluding them from analysis. Bolding indicates previously unreported haplotypes.
It is important to note some possible limitations to our study that point towards the need for future investigation. This assessment was a snapshot of polymorphisms restricted to a 625-bp region from deer in a relatively small geographic area. Future work to monitor frequencies of haplotype C, aa96S, and any new informative polymorphisms outside of the 625-bp region will help inform disease impact, possible selection within the population, and target regions for special management attention. Additional assessments of genetic connectivity among deer in the CWD-positive regions would also inform delineation of management areas.
Table 3. PRNP diplotype frequency (f) and count for chronic wasting disease positive (+) and non-detected (-) white-tailed deer (Odocoileus virginianus) from 9 CWD-positive Michigan counties. Odds ratios and 95% confidence intervals are shown for significant variables (P < 0.05) determined by logistic regression against the most frequent diplotype, AB. Asterisks indicate diplotypes found in only positive or non-detected deer, precluding them from analysis.
Surveillance is currently being used as the leading indicator to inform CWD management in wild deer populations, and while beneficial, surveillance is costly, limited in scope, and is not in itself a management tool. As CWD detections continue to expand the areas under surveillance, the use of regionally specific data to allocate testing efforts and funding will be pivotal for success. Identification of PRNP polymorphisms associated with reduced vulnerability to CWD and their spatial distribution and prevalence may help managers design surveillance programmes to identify and prioritize areas for CWD management when partnered with movement data and anticipated deposition of prions onto the landscape over time.
Sampling
Medial retropharyngeal lymph nodes were collected from white-tailed deer by Michigan Department of Natural Resources staff during routine disease surveillance between April 2015 and January 2020 from 9 CWD-positive Michigan counties. Sex, harvest location, and age, as assessed by tooth wear and replacement, were recorded for all sampled deer. Subsampling within each county for this study represented: 1) CWD-positive deer; 2) CWD non-detected deer; and 3) additional CWD non-detected paired controls. Sampling aimed to obtain three individuals from unique sections (2.6 km²) in each township (93 km²) for CWD non-detected animals. To control for factors known to be associated with CWD infection probability, paired controls were identified for a subset of CWD-positive deer by matching a CWD-positive deer to a CWD non-detected deer of the same age, sex, and harvest location. Due to the already small sample size for paired controls, we were unable to control for background relatedness as done previously [42].
Samples were collected within a short period of time, which led us to assume relatively similar exposure to CWD between paired case-controls and between CWD-positive and CWD non-detected deer.
CWD diagnosis
All animals were tested for CWD using a USDA-approved enzyme-linked immunosorbent assay to detect PrPCWD at either the Michigan (East Lansing, MI) or Wisconsin (Madison, WI) Veterinary Diagnostic Laboratory. Confirmation by immunohistochemistry was done by the diagnostic laboratories or by the National Veterinary Services Laboratory (Ames, IA). Sampling did not allow for the assessment of disease stage in different tissue types; however, the use of lymph tissue, where PrPCWD deposition first occurs, reduced the chance that false negatives might impact these results [23].
PRNP sequence analysis
Genomic DNA was isolated from lymph node tissue using Qiagen DNeasy Blood and Tissue Kits (Qiagen Inc., Valencia, CA) following the manufacturer's guidelines, with a final elution volume of 200 µL in Buffer AE.
The PRNP gene was amplified using a primer pair specific for the functional gene (223 5ʹ-acaccctctttattttgcag-3ʹ and 224 5ʹ-agaagataatgaaaacaggaag-3ʹ) [36]. PCR amplicons were purified using ExoSAP-IT (Applied Biosystems, Foster City, CA) and products were sequenced using the Big Dye Terminator system (Applied Biosystems, Foster City, CA). Sequence products were purified using ethanol/EDTA precipitation and resolved on an ABI 3500.
Sequences were visualized and edited in SEQUENCHER (Gene Codes Corporation, Ann Arbor, MI). Re-sequencing was done until regions of variability were confirmed three times. Haplotypes were generated from unphased sequences using DNA Sequence Polymorphism 5.10.01 (Rozas et al., Universitat de Barcelona). Markov chain Monte Carlo (MCMC) samples were taken from a minimum of 1,000 iterations, with a discarded burn-in of 100 iterations. Previously published haplotype sequences [22,43] were uploaded from NCBI and a local BLAST was run to match phased sequences to published haplotypes.
Phased sequences were translated in SEQUENCHER to their amino acid composition for final reporting.
Statistical analyses
We used logistic regression to identify associations between CWD status and haplotype, diplotype, and aa95 and aa96 genotypes. Chronic wasting disease status was a binomial variable with CWD-positive deer coded as 1 and non-detected deer coded as 0. Genetic data were treated as categorical variables. The most common genetic type was used as the reference type in each analysis. Genetic types significantly associated with CWD status were those with P-values ≤ 0.05. Odds ratios (ORs) and associated 95% confidence intervals were also calculated. Odds ratios with 95% confidence intervals that did not include one were considered significant. Genetic types with significant ORs less than one were interpreted as exhibiting reduced susceptibility to CWD.
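A minimal sketch in base R of this modelling approach, not the authors' exact code; 'deer' is a hypothetical data frame with a 0/1 CWD status column and a categorical haplotype column:

    deer$haplotype <- relevel(factor(deer$haplotype), ref = "B")  # most common type as reference
    fit <- glm(cwd ~ haplotype, family = binomial, data = deer)   # CWD-positive coded as 1
    exp(coef(fit))            # odds ratios relative to haplotype B
    exp(confint.default(fit)) # Wald-type 95% CIs on the OR scale

An OR below one with a confidence interval excluding one would then be read, as described above, as reduced susceptibility relative to the reference type.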
To further explore associations between CWD status and genetic type while controlling for other factors that might affect CWD status (e.g., age, sex, location), conditional logistic regression was used to identify associations between CWD status and genetic types for matched case-control pairs. The smaller number of available pairs (n = 51) limited the analyses we could conduct. Based on findings from the analyses described above as well as previous studies, we tested for associations between CWD status and presence of a C haplotype, CC genotype, and presence of at least one serine at aa96 using the clogit function in the survival package [44] in R [45] (version 3.6.1). We coded CWD status as described above. We did not assess haplotype F as only one available deer pair had an F haplotype. Significance was interpreted as described above.
Table 5. Apparent prevalence of chronic wasting disease in white-tailed deer (Odocoileus virginianus) from 9 CWD-positive Michigan counties. Numbers in parentheses correspond to the total number of animals tested for the year in the given county.
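A minimal sketch of the matched-pair analysis with survival::clogit described above, assuming hypothetical columns 'cwd', 'hapC' (presence of a C haplotype) and 'pair_id' identifying the matched case-control pairs:

    library(survival)
    cfit <- clogit(cwd ~ hapC + strata(pair_id), data = pairs_df)  # conditional logistic regression
    exp(coef(cfit))  # conditional odds ratio for carrying a C haplotype
    summary(cfit)    # 95% CIs and p-values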
We assessed differences in the frequency of presumably protective haplotypes, diplotypes, and genotypes among the 9 counties where CWD had been identified using Fisher's exact tests. | 2021-11-10T06:23:33.153Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "5f85cee601a82d1ad0a81434e343342376610526",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/19336896.2021.1990628?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7dc80ce694f51b3bfc07b11027ffef82ccd2b953",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251974519 | pes2o/s2orc | v3-fos-license | A range-wide analysis of population structure and genomic variation within the critically endangered spiny daisy (Acanthocladium dockeri)
Understanding population structure and genetic diversity is important for designing effective conservation strategies. The spiny daisy (Acanthocladium dockeri) is a critically endangered shrub whose six remaining extant populations are restricted to country roadsides in the mid-north of South Australia, where the species faces many ongoing abiotic and biotic threats to survival. Currently the spiny daisy is managed by selecting individuals from the extant populations and translocating them to establish insurance populations. However, there is little information available on the genetic differentiation between populations and diversity within source populations, which are essential components of planning translocations. To help fill this knowledge gap, we analysed population structure within and among all six of its known wild populations using 7,742 SNPs generated by a genotyping-by-sequencing approach. Results indicated that each population was strongly differentiated, had low levels of genetic diversity, and there was no evidence of inter-population gene flow. Individuals within each population were generally closely related; however, the Melrose population consisted entirely of clones. Our results suggest genetic rescue should be applied to wild spiny daisy populations to increase genetic diversity that will subsequently lead to greater intra-population fitness and adaptability. As a starting point, we suggest focussing on improving seed viability via inter-population crosses, such as through hand pollination experiments, to experimentally assess their sexual compatibility with the hope of increasing spiny daisy sexual reproduction and long-term reproductive fitness.
Introduction
Small, fragmented plant populations often have reduced genetic diversity, which risks elevating inbreeding and genetic drift within populations (Templeton et al. 1990; van Treuren et al. 1991; Heywood 1993; Furlan et al. 2012; Neaves et al. 2015). Inbreeding and genetic drift are particularly concerning for small populations because they can increase the prevalence of disease (O'Brien and Evermann 1988; Hajjar et al. 2008). Genetic erosion is the reduction in effective population size within a population over time, whereby infrequently occurring alleles are likely to be lost, further hindering population survival by reducing adaptive potential (van Treuren et al. 1991). The spiny daisy (Acanthocladium dockeri) is an important representative of Australian botanical diversity (Jusaitis and Adams 2005; Clarke et al. 2013; Bickerton et al. 2018). Only six small populations (~87-1986) are known to occur within the mid-north of South Australia, separated by distances of 4-110 km (Clarke et al. 2013). These populations have been named in previous studies as Hart, Telowie, Thornlea, Melrose, Yangya and Rusty Cab (Fig. 1) (Jusaitis and Adams 2005; Clarke et al. 2013; Bickerton et al. 2018). These populations are susceptible to anthropogenic interference as they occur along narrow roadsides that are impacted by agricultural practices. They are exposed to chemical fertilisers and pesticides and herbivory, and compete with weeds during the winter months (Clarke et al. 2013). Seed production in the wild is infrequent, making recruitment via sexual reproduction rare (Jusaitis 2008). No seedlings have ever been observed in the field despite the presence of insect pollinators on spiny daisy flowers (Jusaitis and Adams 2005). The roughly 50 cm diameter shrubs produce suckers within their woody perennial root system, and this has helped these populations to endure (Clarke et al. 2013). Previous work has shown spiny daisies have low pollen viability and their flowers have a deformed pollen tube, possibly contributing to prolonged vegetative reproduction that will eventually cause sterility (Jusaitis and Adams 2005; Jusaitis 2008).
Previous work has assessed the genetic diversity among four populations known at the time (Thornlea, Yangya, Hart and Rusty Cab) using allozymic markers (Jusaitis and Adams 2005). The authors found that each population represented a single, distinct genotype suggesting all individuals within each population were clones and there was no interpopulation gene flow (Jusaitis and Adams 2005). Efforts to conserve the spiny daisy included regular site monitoring and maintenance (i.e. pest control and weeding), in addition to the establishment of at least one translocation site for each population (Sharp et al. 2010;Clarke et al. 2013). Subsequent conservation activities further led to the discovery of another two populations (Telowie and Melrose) (Clarke et al. 2013). Although informative, the analysis of allozyme markers only provides a limited representation of the allelic variation in a genome, in comparison to the use of more modern genome-wide techniques (Gaudeul et al. 2004;Narum et al. 2013). It is therefore necessary that population studies use high-resolution genomic methodologies to better inform management and conservation goals.
Next-generation sequencing (NGS) applications have more frequently been used for non-model organisms, improving conservation management strategies (Seeb et al. 2011;Narum et al. 2013). Genotyping by sequencing (GBS) is a popular method that uses NGS platforms to reduce genome complexity and characterise genomic variation across thousands of genome-wide single nucleotide polymorphisms (SNPs). GBS provides a better insight into population structure compared to more traditional methods such as allozymes or microsatellites as there is greater genomic resolution (Beissinger et al. 2013;Narum et al. 2013).
The aim of our study was to determine levels of genetic diversity within and among six extant populations of the spiny daisy using a genome-wide SNP dataset derived by GBS. We hope to use this information to help conservation management of this species. Based on previous findings, we predicted that each population would be genetically differentiated and contain low levels of genetic diversity. We addressed the following questions: (i) How strong is the genetic differentiation among populations? (ii) How variable is intra-population genetic diversity? (iii) Is evidence of gene flow present among the extant populations?
Study sites and sampling
All spiny daisy populations occur along roadsides within close proximity to country townships in the mid-north of South Australia (Fig. 1) (Jusaitis and Adams 2005; Clarke et al. 2013). Some of these sites are surrounded by several species of native vegetation identified in the Spiny Daisy Recovery Guide (Clarke et al. 2013). The Thornlea, Yangya and Rusty Cab sites are situated east of Laura and are the closest geographically (4 km) to each other. The Hart site occurs within the Clare Valley, approximately 65 km south of the township of Laura, between a main sealed road and an old railway reserve. The sites are located in semi-arid grasslands, adjacent to and sometimes partially on private property. The Telowie site is located approximately 20 km north of Port Pirie, and its closest population, Thornlea, is 30 km to the south-east of Telowie. Melrose represents the most northerly site, occurring 31 km north-east of Telowie. Plant density of these populations has been monitored over time and was recorded in 2007 by the Department of Environment, Water and Natural Resources (DEWNR) (Appendix Table S1) (Clarke et al. 2013).
Young leaf tissue samples were collected from a total of 90 individuals (15 plants per population) among the six sites. Five leaves from each plant harvested were taken along a 45 m transect at each site. Along the transect, the plants were randomly selected at a three-metre interval and within three metres from the transect. This means that sampled individuals were never less than 3 m apart. In instances where plants were absent at a designated interval, the next closest individual along the transect was sampled and the altered distance was recorded. This occurred at the Hart site where the distance between the last individual sampled and its previous sample was 10 m. Transects were also altered if the distribution of the plant populations was not linear up to 45 m. This was the case with the Telowie population, in which the transect had to be separated into two directions. As most sites contained a small number of individuals, a sample size of 15 individuals was thought to provide an adequate representation of the genetic material within the population.
Young leaves were harvested from the tip of a branch, down to the fourth node as DNA quality is known to improve for younger tissue (Moreira and Oliveira 2011). Leaves were selected from one to three separate branches of the plant. The five leaves from each plant were individually stored and temporarily preserved in liquid nitrogen. Samples were then freeze-dried using the Beta 2-8 LSCplus ice condenser for a 2-day period to allow for the extraction of moisture within the leaf.
DNA extraction and genotyping
DNA was extracted from two to three leaves (6-7 mg) from each individual by Diversity Arrays Technology Pty Ltd (DArT) according to their in-house protocol (Jaccoud et al. 2001; Kilian et al. 2012). Library preparation involved the restriction enzymes PstI and MseI for complexity reduction, which targeted low-copy sequences (Wenzl et al. 2004; Melville et al. 2017; van Deventer et al. 2020). Sequencing was conducted using DArTseq™, a genome complexity reduction sequencing methodology similar to RADseq (Restriction site Associated DNA sequencing) (Sansaloni et al. 2011; Kilian et al. 2012; Rodger et al. 2021). The procedure was performed using an Illumina short-read platform producing 300,000 reads of 75 base pairs in length, which were then filtered and shortened based on sequence quality. The closest relative with a complete genome sequence is the sunflower (Helianthus annuus), with a genome size of 3.6 gigabases (Kane et al. 2011; Badouin et al. 2017). Assuming the spiny daisy genome has a similar size to that of the sunflower, we expected the genome coverage with this methodology to be approximately 0.65% of the spiny daisy genome. SNP calling and initial marker filtering based on sequencing read depth and accuracy were performed using standard procedures developed by DArT (Kilian et al. 2012).
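The coverage figure quoted above follows from simple arithmetic on the sequencing output; a one-line check in base R, under the stated sunflower-sized genome assumption:

    reads <- 300000; read_len <- 75       # DArTseq output described above
    genome_bp <- 3.6e9                    # assumed genome size (sunflower-like)
    100 * reads * read_len / genome_bp    # ~0.625%, in line with the ~0.65% quoted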
Marker quality control
Filtering of less informative SNPs and SNPs introducing bias was conducted in R v 4.0.2 (2020.06.22) (RStudio Team 2020) using the DARTR package (Gruber et al. 2018). The sequencing procedure was replicated so that each locus was scored based on genotype repeatability. We applied a reproducibility filter to stringently remove loci which contained inadequate repeatability scores (< 99%). Failure to call SNPs due to reduced DNA quality causes missing genotypes, which can severely bias some analyses. Loci containing > 1% missing data were removed. Secondary SNPs occur when multiple SNPs occur within the same sequencing read. These cause linkage disequilibrium, another source of bias for some analyses, and were therefore removed. To improve data quality and computational efficiency, a minor allele frequency threshold of 5% was applied.
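A minimal sketch of this filtering pipeline, assuming the dartR helper functions behave as in recent releases ('gl' would be a genlight object read from the DArT report); thresholds mirror those stated above:

    library(dartR)
    gl <- gl.filter.reproducibility(gl, threshold = 0.99)          # drop loci with <99% repeatability
    gl <- gl.filter.callrate(gl, method = "loc", threshold = 0.99) # <=1% missing data per locus
    gl <- gl.filter.secondaries(gl)                                # one SNP per sequence tag
    gl <- gl.filter.maf(gl, threshold = 0.05)                      # minor allele frequency >= 5%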
Population genetic structure and differentiation
Genetic structure within the final dataset was examined in R v 4.0.2 using various approaches. Firstly, a Discriminant Analysis of Principal Components (DAPC) within the R package Adegenet was computed to identify genetic clusters and examine the validity of individual assignment within groups. This analysis employs a k-means clustering algorithm which infers the optimal number of k based on the selection of clusters to maximise between-group variation. The most optimal model was defined upon the value which corresponded to the lowest BIC (Bayesian information criterion). A Principal Coordinates Analysis (PCoA) was then used to visualise genomic variation (Jombart 2008). Lastly, to investigate the extent of gene flow between different geographical locations, individual admixture coefficients were produced in the R package LEA, a Bayesian clustering program similar to that of STRUCTURE (Frichot et al. 2015). Shared ancestry among individuals was examined using models of K ranging between 1 and 10, with each model being repeated ten times. Pairwise Weir et al. (1984) FST estimates were obtained using the R package StAMPP (Pembleton et al. 2013) to estimate population genetic differentiation among the sampling sites. The percentage of loci with fixed allelic differences between populations was calculated and represented as a heat map using the R package Pheatmap. A fixed allele occurs when all individuals in a population contain an allele that is not shared with individuals from any other population. The presence and amount of fixed alleles within a population will make that population more genetically distinct (Huber et al. 2016). We examined isolation by distance (IBD) to determine whether geographic distance or geographic isolation is affecting inter-population gene flow. We used the R package Adegenet to test for IBD with a Mantel test and Pearson's product-moment correlations using pairwise comparisons of Euclidean geographic distance and FST/(1-FST) between populations (Rousset 1997).
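A minimal sketch of the clustering and differentiation steps, assuming a genlight object 'gl' with populations set via pop(gl); this mirrors the packages named above but is not the authors' exact code, and the PCA/DA dimensions are illustrative choices:

    library(adegenet)
    grp <- find.clusters(gl, max.n.clust = 10, n.pca = 50)  # k-means over PCs; k chosen by lowest BIC
    dapc_fit <- dapc(gl, grp$grp, n.pca = 50, n.da = 5)     # discriminant analysis of principal components
    library(StAMPP)
    fst <- stamppFst(gl, nboots = 100, percent = 95)        # pairwise FST with bootstrap confidence intervals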
Population structure
Population structure between all sampling localities was supported by several genetic analyses. The DAPC showed that the most likely model of population structure was K = 6, based on the lowest Bayesian information criterion (Appendix Fig. S1). The PCoA showed a similar result to the DAPC (Fig. 2). The PCoA demonstrated that individuals from each population clustered together, but there was variation in the level of differentiation among populations (Fig. 2). Axis 1 (21.1% of variation) separated the two furthest populations, Melrose (furthest north) and Hart (furthest south), from all the other populations, while axis 2 (19.2% of variation) separated Hart and Rusty Cab from all other populations (Fig. 2). Thornlea, Yangya and Telowie were the most genetically similar, and Yangya showed the most variation between its individuals.
The proportion of shared ancestry between the six populations produced by the Bayesian clustering program LEA was very small (Fig. 3). Alternative models of K also displayed little admixture between populations (Appendix Fig. S2).
All populations showed similarly high levels of differentiation according to pairwise FST estimates (Table 1). The mean pairwise FST estimate between populations was 0.53 ± 0.03 SD (Table 1). Melrose produced slightly higher pairwise FST estimates with all other populations (mean = 0.57 ± 0.007 SD).
AMOVA results displayed significantly greater variation within individuals (63%) than among populations (37%). No variation was detected among individuals (0%) (p = 0.01; Table 2). The proportion of fixed differences between the populations was similar across population pairs (mean = 5.1 ± 0.6%) (Fig. 4). Pairwise comparisons involving Melrose had the highest proportion of fixed differences.
Population genetic diversity
Observed heterozygosity (HO) and expected heterozygosity (HE) were calculated in the DARTR package (Gruber et al. 2018). An Analysis of Molecular Variance (AMOVA) was computed using GenAlEx v 6.503, implementing 999 permutations, to examine the distribution of genetic diversity within individuals, among individuals and among populations (Peakall and Smouse 2012). To compare levels of relatedness among the populations, pairwise comparisons of relatedness among individuals within each population were performed using the software COANCESTRY v 1.0. Relatedness estimates were obtained using the Wang estimator with 100 bootstraps while accounting for inbreeding (Wang 2011). FIS values were generated within GenAlEx v 6.503. To determine whether samples from each population contained clones, we used COLONY v 2.0.6.5 (Jones and Wang 2010).
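The two heterozygosity measures reduce to simple per-locus summaries; a minimal sketch in base R from a genotype matrix 'g' coded 0/1/2 (alternate-allele counts, individuals in rows), which illustrates the quantities reported in the Results rather than the DARTR implementation itself:

    ho <- mean(colMeans(g == 1, na.rm = TRUE))  # mean per-locus fraction of heterozygotes
    p  <- colMeans(g, na.rm = TRUE) / 2         # per-locus alternate-allele frequency
    he <- mean(2 * p * (1 - p))                 # mean Hardy-Weinberg expected heterozygosity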
Marker quality
From a total of 73,407 SNPs, 7,742 were retained across the 90 individuals after filtering. The majority of SNPs were removed due to low reproducibility (removing 18,578 SNPs) and missing data (removing 45,843 SNPs). Furthermore, the deletion of secondary SNPs and loci below the minor allele frequency threshold removed a further 1,139 and 105 SNPs respectively, bringing the total of retained SNPs within the final dataset to 7,742. This final dataset contained no missing values.
Relatedness within populations
The summary statistics of pairwise relatedness outcomes indicated high relatedness within each population (mean r = 0.89 ± 0.03 SD) (Table 3). Melrose contained the highest relatedness (population mean r = 0.96), and Rusty Cab demonstrated the lowest relatedness (population mean r = 0.85) between individuals (Table 3). Although all FIS results produced a similar range of values (mean = -0.82 ± 0.04 SD) (Table 3), these values were not significant (p = 1). Four of the populations contained only two clones among the 15 individuals sampled. Telowie had slightly more clones (5), but Melrose consisted entirely of clones (Table 3).
Genetic diversity and isolation-by-distance
Observed heterozygosity (mean = 0.26 ± 0.03 SD) was considerably higher than expected heterozygosity (mean = 0.13 ± 0.02 SD), and this pattern was found across all the populations (Fig. 5). Thornlea contained the highest values of expected and observed heterozygosity (0.15; 0.28) and Melrose contained the lowest values of expected and observed heterozygosity (0.09; 0.19). There was no significant trend in isolation by distance despite there being a positive correlation between geographic and genetic distance between pairs of populations (r = 0.48, p = 0.16).
Discussion
This study confirms that strong genetic structure is present among all currently known extant populations of spiny daisy, with little to no evidence of inter-population gene flow. This indicates that each site represents a unique subset of genetic variation of this species. The inclusion of the Telowie and Melrose populations has increased the amount of known genetic diversity that exists within the species by approximately 33%; however, the lack of connectivity among these isolated populations reinforces the extinction risk faced by this species (Ellstrand and Elam 1993; Young et al. 1996; Spielman et al. 2004; Frankham and Wilcken 2006). Although relatedness within each population was high, only Melrose represented a single genet. All other populations contained a small number of clones. Our findings therefore differ from those of previous genetic studies, which described each population as consisting entirely of clones (Jusaitis and Adams 2005). Levels of observed heterozygosity were greater than expected heterozygosity, likely as a result of high levels of clonal reproduction and/or mechanisms preventing sexual reproduction. Our study improves knowledge of the distribution of limited genetic diversity within this critically endangered species and demonstrates that the spiny daisy has likely experienced strong genetic bottlenecks (Rodger et al. 2021), probably due to large-scale habitat disturbance (Jusaitis and Adams 2005; Clarke et al. 2013; Brown and Hodgkin 2015). The varying levels of genetic diversity detected among populations and subtle differences in population structure provide a deeper insight towards prioritization strategies for conservation management (Gardiner et al. 2017).
Sporophytic self-incompatibility (SI) is present in many Asteraceae species and serves as an evolutionary mechanism to promote outcrossing within populations and restrict the build-up of deleterious alleles that is often associated with inbreeding in small, isolated populations (Zagorski et al. 1983; Les et al. 1991; Ferrer and Good-Avila 2007). Given the absence of excess homozygosity and sexual reproduction, high relatedness within the spiny daisy populations may reflect increased clonality, whereby the majority of the population represents descendants from a single lineage (Young and Brown 1999). Compatibility of the pollen grain is typically determined by the diploid genotype at the S locus of the paternal individual, inhibiting fertilization between individuals who contain the same S-alleles. The mating behaviour within these spiny daisy populations may be attributed to this mechanism (Frankel and Galun 1977; DeMauro 1993). Species which exhibit self-incompatibility often demonstrate limited reproductive performance, particularly via inhibition of the stigma, pollen tube growth and reduced seed set (Lloyd 1968; Oloumi and Rezanejhad 2009). The co-occurrence of SI and clonality is common among small populations as it helps compensate for limited seed production and reduces selection pressure which favours the breakdown of SI, thus preventing inbreeding (Vallejo-Marín 2007; Franklin-Tong 2008). DeMauro (1993) demonstrated that the partial SI within the lakeside daisy (Hymenoxys acaulis) can vary depending on the degree of relatedness between mates. She found that although selfing resulted in incompatibility, matings between full siblings and between parent and F1 progeny were able to produce seed (DeMauro 1993). A similar incidence may have occurred within the spiny daisy. Jusaitis and Adams (2005) reported that seed collected and artificially raised from Hart contained similar genetic composition to that of their genetically identical parents; however, the offspring differed at 2-5 loci and these changes did not contain new alleles (Jusaitis and Adams 2005). The increased resolution of data obtained within this study shows that members within Hart are not all identical; therefore, it is possible that individuals which produce seed may in fact contain different S-alleles allowing them to interbreed, although this would be rare due to low genetic diversity (DeMauro 1993). A lack of observed seedling recruitment within the wild due to either SI, inadequate environmental conditions or reduced pollinator activity could each contribute towards an absence of historical sexual reproduction (Jusaitis and Adams 2005).
Although there was no evidence of inter-population gene flow, the Thornlea, Rusty Cab and Yangya sites contained lower levels of pairwise genetic differentiation, higher levels of genetic diversity and fewer clones than Melrose and Telowie. A possible explanation for this is that these populations have been isolated for fewer generations and/or have experienced occasional sexual reproduction in the past, which would reduce genetic drift (Young et al. 2002; Milton et al. 2004). As these three sites occur proximal to each other and face exposure to similar environmental conditions, increased genetic diversity may reflect historic opportunistic gene flow followed by genetic drift upon being isolated (Milton et al. 2004). Increased spatial separation among populations may have led to increased rates of clonality within populations as a result of limited mate availability. Changes in reproductive strategy from a sexual to a vegetative mode in response to patchy geographic distribution have been observed in other Asteraceae species in Australia (Young et al. 2002; Blyth et al. 2021). For example, populations of Rutidosis leiolepis that occur at higher altitudes have been reported to contain greater levels of clonality and population genetic differentiation than those at lower altitudes. Reductions in pollinator activity within these areas, altered length of flowering season, as well as increased disturbance events were considered possible reasons towards increased clonality (Young et al. 2002). Variation in the level of clonality observed among spiny daisy populations may therefore have been influenced by changes in historic connectivity. High intra-population relatedness with strong differentiation, particularly within Melrose and Telowie, indicates that effects of genetic drift are likely due to higher rates of cloning (Campbell and Husband 2005).
Clonal reproduction serves as an advantageous strategy within challenging landscapes as it increases population size and maintains fitness within generations due to the repetition of heterozygous genotypes (Young et al. 1999; Stoeckel et al. 2006; Navascués et al. 2010). Clonality can allow for the sharing of resources (such as water, carbohydrates, minerals and photosynthate) among ramets, and this competitive advantage can act as a vital tool for population colonisation (Alpert 1996; Pennings and Callaway 2000; Stuefer et al. 2004; Jusaitis and Adams 2005; Donahue and Lee 2008). Our study shows that the extent of clonality and relatedness varies among populations, with Melrose consisting entirely of clones. It may be possible that this population has been founded more recently than the other populations, resulting in low genetic diversity in the form of somatic mutations (Stoeckel et al. 2006; de Meeûs et al. 2007; Wang et al. 2018). Asexual reproduction during unfavourable conditions, such as when mates or pollinators are limited or during periods of increased environmental stress, can serve as a short-term, cost-effective mechanism to increase population size as it assures reproduction (Wang et al. 2018). The most clonal populations, Telowie and Melrose (respectively containing 33% and 100% clones), also represent the most northern sites. The Melrose population may be at greatest risk of extinction as it might be at the edge of the climatic limit for this species and does not have the genetic variability to adapt to a changing environment (Dodd and Douhovnikoff 2016). Assisted migration can help retain genetic diversity, and this is important should the original site become further degraded or locally extinct (Clarke et al. 2013; Blyth et al. 2021). The translocation of individuals from all populations (via vegetative cuttings), including Melrose, into areas more similar to the expected range of climatic suitability, such as at Banrock Station, has been carried out (Bickerton et al. 2018), and this serves as a solution towards preventing the loss of species genetic diversity, especially if the original populations become locally extinct. Increased growth rates, flowering and seed set within this population were experienced when plants gained access to improved habitat quality (increased watering and soil tillage), suggesting that the plants may be better suited towards environments which contain greater similarity to that of their historic distribution (Eckert 2002; Sharp et al. 2010; Bickerton and Tourenq 2015; Bickerton et al. 2018). Currently there is a study analysing the genetic diversity within these seedlings to determine whether sexual reproduction has occurred within the species. Further studies which examine the species' ecological requirements and assess genetic composition within progeny will lead to opportunities to reintroduce gene flow among populations.
Conservation implications and management recommendations
Our study helps improve the knowledge of genetic structure within the critically endangered spiny daisy, demonstrating that it is likely experiencing strong genetic drift, probably due to processes that limit sexual reproduction (Sharp et al. 2010; Clarke et al. 2013; Bickerton et al. 2018; Rodger et al. 2021). However, the evolutionary resilience of a species relies upon adequate levels of genetic diversity, which can only be achieved through successful interbreeding; our study therefore raises concerns about the species' adaptability, and we call for updated conservation management interventions (Bijlsma et al. 2000; Brook et al. 2002; Young et al. 2002; Blambert et al. 2016). We urge the implementation of strategies which enhance reproductive fitness to facilitate sexual reproduction and encourage seed set as the initial step towards species recovery. Although preliminary hand pollination experiments between Hart and Thornlea have been trialled, limited seed viability has been a notable hurdle towards successful germination (Jusaitis and Adams 2005; Sharp et al. 2010; Clarke et al. 2013; Bickerton et al. 2018). Updated crosses among individuals from each population (propagated from vegetative cuttings) would help determine the severity of this problem and explore reproductive compatibility among populations.
Common garden trials enable conservation biologists to measure fitness in parental as well as outcrossed progeny, evaluating the effects and risks of outbreeding depression in a controlled setting (Ottewell et al. 2016; Martín-Forés; Blyth et al. 2020, 2021; Van Rossum and Le Pajolec 2021). These trials have been increasingly used as an effective management approach to re-introduce genetic diversity within clonal, highly related and moderately differentiated in-situ populations, as well as to guide the design of ex-situ translocation populations (Ellstrand 2014; Ottewell et al. 2016; Blyth et al. 2021; Brunton et al. 2021; Gavin-Smyth et al. 2021). We advise the use of common garden experiments to validate the benefits of introducing novel genotypes within in-situ and ex-situ populations. In addition to habitat suitability, many studies have shown that the composition of genetic diversity of founder individuals acts as a primary driver towards successful early establishment of translocation sites (Schäfer et al. 2020; Van Rossum and Le Pajolec 2021). The increased probability of containing pre-adapted genotypes, as well as the reduced likelihood of bottleneck effects, should improve evolutionary potential (Gamfeldt and Källström 2007; Breed et al. 2019). The variation of population genetic diversity detected within this study can direct ex-situ conservation efforts by determining sampling intensities within populations for the purpose of genetic rescue (Gardiner et al. 2017; Rodger et al. 2021).
As each population represents a subset of unique genetic diversity, representatives from each locality should be used for outcrossing experiments, as this will maximise genetic diversity within the progeny. For collection purposes, the intensity of sampling within sites needs to be evaluated depending on the level of genetic diversity within each site (Greenfield et al. 2016; Gardiner et al. 2017; Rossetto et al. 2021). Extensive sampling within clonal sites such as Melrose will help increase population size; however, it may pose an unnecessary use of resources, e.g. if sampling is too intense, as it will likely under-represent the genetic diversity of the species (Broadhurst et al. 2008; Gardiner et al. 2017). Instead, pollen from one individual in Melrose could be used to fertilise several individuals in other populations to improve genetic diversity in addition to facilitating species restoration. Results from common garden trials will also help elucidate the risks and benefits of initiating gene flow among differentiated populations. Our study highlights the urgency of genetic rescue for the spiny daisy and is a guide to develop conservation management strategies which will increase evolutionary resilience within the species.
Conclusion
Our study has shown that the spiny daisy is at high risk of extinction due to its limited genetic diversity, which has likely been influenced by habitat loss. The populations showed (i) strong genetic structure, (ii) low levels of genetic diversity and (iii) little to no evidence of gene flow. Although excessive homozygosity was not detected, high relatedness among individuals suggests this species may be unlikely to increase its genetic diversity without intervention as populations remain isolated. Further studies which examine (i) reproductive compatibility between populations via hand pollination experiments and (ii) the species' ecological requirements will help provide insight into strategies to recover the species.
Acknowledgements We would like to thank Trees for Life for their assistance with this study.
Funding This study was partially funded by a Flinders University Honours scholarship.
Open Access funding enabled and organized by CAUL and its Member Institutions
Data Availability Data will be made available in an open access repository once our paper is published. https://doi.org/10.25451/ flinders.20380362
Conflict of interest
The authors do not have any conflicts of interest to declare.
Author declaration We declare that the content of this article is being submitted only to Conservation Genetics and has not been previously published elsewhere. Furthermore, this manuscript does not contain material which has been published by or plagiarised from other sources, using references to accredit known sources for where the information was obtained.
Permit to conduct research Permission to conduct research was given by the Government of South Australia, Department for Environment and Water, permit number E26945-1.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
| 2022-09-01T15:26:43.402Z | 2022-08-30T00:00:00.000 | {
"year": 2022,
"sha1": "28898a87cc6be0e6fe7b6da438a4b24d200730a8",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10592-022-01468-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "4d3d6b9ead4657e1d237254290c080e6aaf55018",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
258334425 | pes2o/s2orc | v3-fos-license | Coping with COVID: Performance of China’s hierarchical medical system during the COVID-19 pandemic
Objective The COVID-19 pandemic has challenged the health system worldwide. This study aimed to assess how China’s hierarchical medical system (HMS) coped with COVID-19 in the short- and medium-term. We mainly measured the number and distribution of hospital visits and healthcare expenditure between primary and high-level hospitals during Beijing’s 2020–2021 pandemic relative to the 2017–2019 pre-COVID-19 benchmark period. Methods Hospital operational data were extracted from the Beijing Municipal Health Statistics Information Platform. The COVID-19 period in Beijing was divided into five phases, corresponding to different characteristics, from January 2020 to October 2021. The main outcome measures in this study include the percentage changes in inpatient, outpatient, and emergency visits and surgeries, and the changing distribution of patients between different hospital levels across Beijing’s HMS. In addition, the corresponding health expenditure in each of the 5 phases of COVID-19 was also included. Results In the outbreak phase of the pandemic, the total visits of Beijing hospitals declined dramatically: outpatient visits fell 44.6%, inpatient visits fell 47.9%, emergency visits fell 35.6%, and surgery inpatients fell 44.5%. Correspondingly, health expenditures declined 30.5% for outpatients and 43.0% for inpatients. The primary hospitals absorbed a 9.51% higher proportion of outpatients than the pre-COVID-19 level in phase 1. In phase 4, the number of patients, including non-local outpatients, reached pre-pandemic 2017–2019 benchmark levels. The proportion of outpatients in primary hospitals was only 1.74% above pre-COVID-19 levels in phases 4 and 5. Health expenditure for both outpatients and inpatients reached the baseline level in phase 3 and increased nearly 10% above pre-COVID-19 levels in phases 4 and 5. Conclusion The HMS in Beijing coped with the COVID-19 pandemic in a relatively short time; the early stage of the pandemic reflected an enhanced role for primary hospitals in the HMS, but did not permanently change patient preferences for high-level hospitals. Relative to the pre-COVID-19 benchmark, the elevated hospital expenditure in phase 4 and phase 5 pointed to hospital over-treatment or excess patient demand for treatment. We suggest improving the service capacity of primary hospitals and changing the preferences of patients through health education in the post-COVID-19 world.
Introduction
Beginning with the first reported cases of coronavirus disease 2019 (COVID-19) in Wuhan, the pandemic has challenged health systems worldwide (1,2). China's zero COVID-19 tolerance strategy involves local and regional lockdowns, large-scale compulsory testing, reduced travel, social distancing measures, and changed hospital use. Hospitals played a crucial role in saving lives and curbing the spread of the virus. But hospitals were also hazardous places where non-COVID-19 patients might catch COVID-19, a major reason patients avoided hospitals, frequently going without necessary preventative and active treatments (3). Besides a fall in demand for medical services, the supply of services also declined as hospitals reduced many normal medical services while prioritizing COVID-related and emergency services. How did China's hierarchical medical system (HMS) cope with the COVID-19 pandemic? We assess the performance of Beijing's hierarchical medical system during the COVID-19 pandemic between January 2020 and October 2021. We measure the number and distribution of primary and high-level hospital visits, emergency visits, surgeries, and medical expenditures during the pandemic relative to the pre-COVID-19 period. Second, we measure the speed, or time, that the HMS took to return to pre-COVID-19 levels of hospital visits and expenditures. Our analysis informs researchers and policymakers both in China and other countries about a mega-city's medical system responses to short- and medium-term challenges during the COVID-19 public health emergency.
A unique and complicating feature of China's HMS is patients' first preference for high-level hospitals, rather than lower-level primary hospitals, for medical treatment (4). This first preference for high-level hospitals means they are over-used, even when non-emergency and common medical conditions can be adequately treated at primary health facilities. Primary hospitals are not used as gatekeepers, which helps explain much of the low efficiency, poor preventative care, and inadequate treatment regimes in Chinese hospitals (5). To address the over-use of high-level hospitals, improve efficiency and reduce waste, two decades of reform have seen the Chinese government implement a hierarchical medical system (HMS). Primary care hospitals, including village clinics, township hospitals, and community health centers, provide preventive and primary care services. High-level hospitals, including tertiary hospitals and provincial hospitals, provide comprehensive treatments, complex care, and medical research and training (5). China's HMS involves four key parts: primary treatment at the community level; disparate treatment for emergency and chronic diseases; two-way referral; and cooperation between different-level medical facilities (6).
But two decades of reform have not seen a re-distribution of patient use from high-level to primary hospitals. From 2010 to 2019, the proportion of outpatient care provided by primary hospitals fell from 63.9 to 54.1% of all hospital treatments, and inpatient primary care dropped from 29.3 to 16.9% (7). High-level hospitals comprised just 3.5 percent of medical institutions, but accounted for 45 percent of all outpatient visits (8). One study reported that nearly half of the patients in Shanghai's high-level hospitals suffered from common or frequently-occurring diseases that could be adequately treated at primary hospitals (9). Despite the government's increased health system expenditures and 15 years of health system reforms (10,11), efforts to shift patient preferences towards primary hospital care have been largely unsuccessful.
Against this background of hospital use, how to improve the performance of the HMS has been a research hotspot (4,12,13). COVID-19 challenged China's city hospitals, as well as hospitals and health systems around the world. The evaluation and comparison of hospitals' and healthcare systems' performance before and during COVID-19 have garnered the attention of numerous scholars and policymakers (14). For example, a study evaluated the performance of Portuguese public hospitals using a network data envelopment analysis model and found a consistent decrease in efficiency during the pandemic, followed by a recovery to levels exceeding those prior to the pandemic (15). Another study estimated 55 nations' efficiency in the fight against the pandemic (16). Zhou et al. calculated the health service efficiency of primary healthcare institutions among 28 provinces in China before COVID-19 and compared the urban-rural differences (17). Banafsheh Sadeghi and colleagues evaluated COVID-19 pandemic preparedness and performance in 180 countries using COVID-19 fatality as the key outcome measure. A systematic review investigated the impact of the COVID-19 pandemic on the utilization of healthcare services and reported that healthcare utilization decreased by about a third during the pandemic (18). However, no study was found concerning the performance of the HMS in China during the pandemic.
With a well-established HMS, Beijing city is considered one of the medical centers in China and can serve as a representative example. Using monitoring data on the operation of 361 Beijing primary and high-level hospitals, we examined how Beijing's HMS coped in the short and medium term with five phases of the COVID-19 pandemic: phase 1 outbreak, phase 2 epidemic, phase 3 sporadic COVID-19, phase 4 vaccinations, and phase 5 post-epidemic. Each COVID-19 phase impacted people's healthcare-seeking behavior and the supply of hospital services (19)(20)(21). By measuring the distribution and changes in patient visits (including non-local patients), surgery cases, and medical expenditures across Beijing's primary and high-level hospitals, we assessed how the HMS coped during COVID-19 in comparison with the pre-COVID-19 period and the speed of returning to pre-COVID-19 treatment levels.
Study design
All data were analyzed at an aggregate level and no individual participants were included. We extracted monthly data on the number of outpatients and inpatients, emergency and surgery patients, and expenditure data from the Beijing Municipal Health Statistics Information Platform database. Our data from January 2017 to October 2021 covered 361 public hospitals, comprising 206 primary health care facilities and 155 high-level hospitals. The COVID-19 period was 22 months long, spanning January 2020 to October 2021, and the baseline data were 36 months comprising 2017, 2018, and 2019. A total of 20,938 hospital-months of data were included in the analysis. Confirmed cases of COVID-19 were collected from daily reports on the Beijing Municipal Health Commission website and summed to calculate monthly data.
Performance measures
Compared with the baseline pre-COVID-19 level (2017-2019), we expected a large absolute decline in the percentage change indicators, corresponding to the largest number of COVID-19 cases that occurred in phase 1 and phase 2 (24). We expected the decline in high-level hospital visits, especially in phases 1 and 2, to be significantly greater than that at primary hospitals. There were two forces at work: first, primary hospitals were viewed as posing a lower risk of catching COVID-19 than high-level hospitals and, second, some non-essential departments in high-level hospitals shut down at the beginning of the pandemic, forcing patients to primary hospitals (4,25,26). We anticipated that non-local patients, or patients without Beijing household registration, followed the same pattern as local patients but suffered greater falls. As measures of complex medical problems, we expected that surgeries and emergencies also fell in high-level hospitals. We assessed how well Beijing's HMS coped during the 2020-2021 pandemic by its return to 2017-2019 benchmark levels; the speed, or timing, of that return; and whether patient preferences for treatments shifted from over-used high-level to under-used primary hospitals.
Statistical analysis
Three-year 2017-2019 mean values were calculated to provide pre-COVID-19 baseline comparative data. Considering the aim of the current study, to assess the HMS by measuring the number and distribution of hospital visits and healthcare expenditure between primary and high-level hospitals before and during COVID-19, descriptive analysis was mainly employed. The method and indicators are widely used in the literature (27-29). First, the monthly total number of outpatients and inpatients was calculated in the different COVID-19 phases and compared with the 2017-2019 baseline. This allowed us to assess how quickly treatment volumes during the COVID-19 phases returned to pre-COVID-19 levels (a short computational sketch of this baseline comparison is given at the end of this section). Second, we evaluated the percentage changes of emergency outpatients and surgery inpatients in high-level hospitals to identify the influence of COVID-19 on urgent medical needs during different COVID-19 phases. Third, the proportion of surgery outpatients in primary hospitals and the proportion of surgery outpatients in high-level hospitals were calculated to reflect the changes in patients' healthcare-seeking preferences. Fourth, the proportion of outpatients in primary hospitals and the percentage changes of outpatients in primary and high-level hospitals were calculated to reflect changing patient preferences between primary hospitals and high-level hospitals during the COVID-19 pandemic.
Similarly, the percentage change of non-local patients was also assessed in this phase. Finally, we compared the outpatient and the inpatient expenditure in different COVID-19 phases with the baseline level, to identify whether and how much health expenditure under the HMS was affected due to changes in patients' healthcare-seeking preferences.

Figure 1 displays the monthly confirmed COVID-19 cases from January 2020 to May 2021 in Beijing. We did not include the non-Beijing, or imported, cases, as these patients were treated in designated hospitals. As shown in Figure 1, January and February 2020 were the severest COVID-19 months, with the largest number of confirmed cases widely spread across Beijing's districts. All confirmed cases were discharged from hospitals in April 2020, ending the outbreak phase 1. In June 2020, COVID-19 emerged in the Xinfadi vegetable market district and spread to several surrounding districts, marking the epidemic stage, with this outbreak controlled in July 2020. The sporadic COVID-19 phase 3 in Figure 1 encompassed the subsequent months.
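For concreteness, the baseline comparison described above can be sketched as follows. This is only an illustrative Python/pandas sketch, not the authors' analysis code; the column names (`year`, `month`, `hospital_level`, `outpatients`) are hypothetical placeholders, since the exact layout of the platform extract is not described.

```python
import pandas as pd

def pct_change_vs_baseline(df: pd.DataFrame) -> pd.DataFrame:
    """Monthly percentage change against the 2017-2019 mean, per hospital level."""
    # Monthly totals per hospital level
    monthly = (df.groupby(["year", "month", "hospital_level"], as_index=False)["outpatients"]
                 .sum())

    # Baseline: mean of the 2017-2019 totals for each calendar month and level
    baseline = (monthly[monthly["year"].between(2017, 2019)]
                .groupby(["month", "hospital_level"], as_index=False)["outpatients"]
                .mean()
                .rename(columns={"outpatients": "baseline"}))

    # Pandemic months compared with the matching baseline month
    covid = monthly[monthly["year"] >= 2020].merge(baseline, on=["month", "hospital_level"])
    covid["pct_change"] = 100.0 * (covid["outpatients"] - covid["baseline"]) / covid["baseline"]
    return covid
```

The same pattern applies to inpatients, emergency visits, surgeries, and expenditures by swapping the measured column.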
Change in the total number of outpatients and inpatients
Compared with the baseline data, Figure 2 shows that the total number of patient visits dramatically declined in phase 1 (outpatients decreased by 44.6% and inpatients decreased by 47.9%). The number of patients avoiding hospitals was significantly greater than any increase in COVID-19 patients. The plots of both outpatient and inpatient visits in Figure 2 display an upward trend in phase 2 with patients returning to hospitals, but the number of patients remained below the pre-COVID-19 baseline until phase 4. There was a rapid return toward benchmark levels in phase 1 and phase 2. But the phase 3 gap to 2017-2019 levels was 13.1% for outpatients and 12% for inpatients, and remained 1.8% below pre-pandemic levels for outpatients in phase 4, although the number of phase 4 inpatients was 3.7% higher than the mean 2017-2019 benchmark. While the return towards pre-COVID-19 levels was rapid in phase 1 and phase 2, pre-COVID-19 levels were not attained before phase 4 for inpatients and phase 5 for outpatients (Supplementary Table 1).
Change in the number of emergency outpatients and surgery inpatients in high-level hospitals
The overall inpatient and outpatient visits in Figure 2 disguise the significant differences in hospital visits between high-level and primary hospitals.

FIGURE 3 Change in number of high-level hospital inpatient surgeries and emergency outpatient visits. Each shadow area denotes COVID-19 phase 1 to phase 5.
FIGURE 5 Percentage change of outpatients in primary and high-level hospitals. Each shadow area denotes COVID-19 phase 1 to phase 5.

Surgery outpatients in primary hospitals

Table 2 reveals the proportion of surgery outpatients in primary hospitals both in the COVID-19 and in the baseline period. Primary hospitals had a large overall increase in the proportion of surgery outpatients relative to the baseline during the pandemic period, which peaked in phase 2. Primary hospitals coped with the pandemic by providing more surgery services for patients in the severest phase 1 and phase 2 of the pandemic. Although the percentage of primary hospital surgery outpatients declined in phase 3, it fluctuated around 1% above pre-COVID-19 levels, rising to 6.5% in phase 4. Though lower than in phase 4, the proportion of outpatient surgeries in primary hospitals in phase 5 was 1.6% higher than baseline on average.
Performance of high-level and primary hospital outpatient treatment

Figure 5 displays the percentage change of outpatients in primary and high-level hospitals compared to the pre-COVID-19 baseline. In phase 1, outpatients in high-level hospitals displayed a sharper drop than that in primary hospitals. The percentage change of outpatients in primary hospitals remained below the baseline, but the margin closed rapidly in phase 1 and phase 2, and was mostly closed in phase 3, but not fully closed until mid-phase 4. In high-level hospitals, Figure 5 shows that the outpatient visit gap with pre-pandemic levels was not closed until phase 5, with phase 1 and phase 2 showing a speedy narrowing of the gap, which stalled in period 3 and period 4. Figure 6 and Supplementary Table 3 show the proportion of outpatients in primary hospitals rose relative to outpatients in all the hospitals, reflecting a patient preference for primary hospitals. The percentage of primary hospital outpatients relative to all hospital outpatients peaked in February 2020 (36.7%), then fell until phase 5. Phase 1 had the largest increase in the proportion of outpatients in primary hospitals, 11.3% (30.7% vs. 19.5%). While in phase 5, the primary hospitals had only a 1% higher proportion of outpatients than the baseline level.

FIGURE 4 Proportion of surgery outpatients in primary hospitals. Each shadow area denotes COVID-19 phase 1 to phase 5.
FIGURE 6 The proportion of outpatients in primary hospitals. Each shadow area denotes COVID-19 phase 1 to phase 5.
Percentage change in non-local patient visits in high-level hospitals. Each shadow area denotes COVID-19 phase 1 to phase 5.
Table 3 illustrates the percentage changes in non-local patient clinic visits, including emergency visits and outpatient visits, and non-local inpatients in high-level hospitals in Beijing. Both non-local patient clinic visits and non-local inpatient visits dramatically declined in phase 1, especially in February and March 2020, where clinic visits decreased by 76.7% in February and inpatient visits decreased by 80.5% in March. Both clinic visits and inpatients from other provinces remained below the pre-COVID-19 baseline visits in phase 2, phase 3 and phase 4. In phase 3, the number of non-local inpatients was 5.90% less than the same stage of baseline and was 12.74% less than in phases 4 and 5. In phase 5, the percentage change in the number of non-local clinic visits was close to or exceeded the baseline from May 2021, with a percentage change about 25% greater than the baseline in October 2021.
Discussion
We assessed how Beijing's HMS coped during the COVID-19 epidemic compared with the average 2017-2019 baseline level and also charted the distribution of patient treatments and expenditures between primary and high-level hospitals relative to each other. In response to the pandemic, our data show that hospital visits fell, not attaining their pre-pandemic 2017-2019 benchmark until phase 4, although the pre-COVID-19 benchmark gap was narrowed in phase 1 and phase 2. Second, emergency services at high-level hospitals remained below the 2017-2019 benchmark until phase 2. The number of non-local patients at high-level hospitals fell through the entire pandemic period. In terms of patients with basic medical needs, Beijing's HMS coped well during the early stage of the pandemic when patients accessed primary hospitals. However, the shift in the distribution of patients from over-used high-level to under-used primary hospitals during phases 1-3 was not sustained, with the pre-pandemic over-use of high-level hospitals restored by phase 4.
COVID-19 impact on healthcare-seeking
The dramatic decline in outpatient and inpatient rates in the early phases reflected both supply and demand factors. On the demand side, patients avoided or delayed health treatments because of fear of catching COVID-19 and pandemic-induced restrictions, including a reduction of high-level hospital services (26). In a survey analyzing the factors that lead to delayed medical treatment for chronic disease patients during COVID-19 in China, the fear of catching COVID-19 in the hospital was ranked first (3). On the supply side, health service suspensions meant reduced visits, with the Beijing Municipal Health Commission in January 2020 canceling some non-emergency services, such as most stomatology services. Typical of high-level hospitals, the Beijing Hospital of Integrated Traditional Chinese and Western Medicine suspended services across 14 departments in February 2020, including sub-health management and ophthalmology. Current studies have highlighted that the cessation of certain health services may lead to a decline in patients' quality of life. Additionally, the healthcare system may incur higher costs in the future to regain the loss of benefits from previous therapies due to their discontinuation (30,31). Thus, to reduce non-essential visits to hospitals and ensure basic health services, the government implemented a series of policies including promoting pharmacy delivery services and telemedicine. Furthermore, doctors were allowed to reasonably prescribe up to 12 weeks of drugs for patients with chronic illnesses, such as hypertension, diabetes, and chronic obstructive pulmonary disease (32).

Comparison of the health expenditure pre and post COVID-19. Each shadow area denotes COVID-19 phase 1 to phase 5; (A, B) The green and red shading correspond to the part of higher expenditure for baseline than pandemic era; the yellow shading corresponds to the part of higher expenditure for the pandemic era than baseline.
Other studies have also revealed significant reductions in hospital visits during the early stage of the COVID-19 pandemic. US studies reported decreased emergency department (ED) visits ranging from 41.5 to 63.5% during the early pandemic period (33). In the 10 weeks following the US COVID-19 outbreak, ED visits declined by 23% for myocardial infarction, 20% for stroke, and 10% for hyperglycemic crisis, compared with the preceding 10-week period (20, 34-36). In a tertiary referral center in Boston, hospitalizations decreased by 35.1% in the first 6 weeks compared with the same period in 2019 (37). Importantly, COVID-19 has had a significant influence on patients' healthcare-seeking behavior worldwide. Studies conducted in India (38), Lithuania (39), and Australia (40) identified delayed healthcare seeking by patients, similar to our findings. Another study conducted in China also reported that the COVID-19 epidemic has greatly affected the behavior of tuberculosis patients seeking medical care, with some of them delaying or giving up healthcare seeking (41). Similar to our results, patients were also observed returning to hospitals gradually, fluctuating with the change in the COVID-19 situation (42,43). The recovery of hospital visits was rapid in phase 1-phase 3, but did not close the gap with pre-COVID-19 levels. Unlike many other countries, Beijing's zero-tolerance strict prevention and control measures saw the COVID-19 outbreak brought rapidly under control. As the emergency response level was lowered, hospitals re-opened and suspended departments gradually provided services. As residents were vaccinated for COVID-19 during the phase 3 vaccination period, both outpatient and hospitalization rates in primary and high-level hospitals reached, or exceeded, their pre-COVID-19 baseline levels in phase 4 and phase 5.
Rise and fall of visits to primary hospitals
During the early phases of COVID-19, the HMS in Beijing realigned patients to suitable medical resources. The proportion of surgery outpatient visits and common outpatient visits rose in primary hospitals relative to high-level hospitals. A 9.5% increase in outpatients in primary hospitals and a 59.4% decrease in non-local outpatients in high-level hospitals reversed a decade of increased use of high-level hospitals. This realignment of patients reflected a correction to the over-use of high-level hospitals, reflecting one chief aim of the HMS (6). In this respect, the pandemic brought a change in patient preferences for hospital treatment.
The realignment of patients towards primary hospitals was managed through hospital appointments and a community referral system (44). Older patients and those suffering from chronic illnesses were strongly encouraged to receive primary diagnosis and treatment in primary healthcare facilities. Not only were patients with common or minor medical needs advised to visit primary hospitals, but patients were turned away from high-level to primary hospitals (45). Second, because high-level hospitals were equipped with fever departments, patients turned towards primary hospitals due to the increased risk of catching COVID-19 in high-level hospitals. Third, telemedicine and pharmacies quickly developed as alternatives to hospital visits. Supported by the government, telemedicine became an important access point for health care during COVID-19 worldwide, mainly using telephone, video calls, and web servers to "visit" a doctor (19,46,47), which especially replaced high-level hospital visits.
The realignment of patient preferences for primary hospitals was not permanent. The shift in patient preferences towards primary hospitals was reversed by phase 3, and during phase 4 and phase 5 of the pandemic the over-use (under-use) of high-level (primary) hospitals was re-established as a key feature of Beijing's HMS. Our data on how Beijing's HMS coped with the pandemic suggest that the task of reforming the use of China's HMS is daunting. The greatest health crisis in 100 years failed to change patient hospital preferences in China. The task of changing patient preferences will require significant new long-term resources and targeted measures. Leveling up primary versus high-level hospital quality will be a long-term financial task requiring many years. Specifically, the quality of primary health care should be improved, with better trained and qualified general practitioners and technology innovations, including telemedicine and the effective management mechanisms needed by primary hospitals (48). These resource reallocations should reduce competition between primary and high-level hospitals and reduce the attraction of high-level Beijing hospitals to patients from other cities and regions (31). A more radical approach would involve discouraging patients from visiting high-level hospitals before using primary hospitals as gatekeepers. This could be incentivized through charging substantial differential fees for visiting high-level versus primary hospitals. Alternatively, high-level hospitals could require patients to receive a referral from a primary hospital. Only patient experiences with quality primary hospital healthcare will change patient preferences for primary hospitals. Other supporting measures will be required. High-level hospital appointment policy needs to discourage treatment for minor health issues more appropriately managed at primary hospitals. Targeted information campaigns are required to educate patients on primary hospital gatekeeping functions.
Rise of health expenditures
Health expenditure was below the baseline in the first two COVID-19 phases, but reached pre-COVID-19 levels by phase 3. By mid-phase 4 (March 2021), both outpatient and inpatient health expenditure were higher than the 2017-2019 baseline, but without any growth in patient numbers. Compared to 2017-2019, the surging health expenditures suggest more expensive treatments for a stable number of patients. This might reflect over-servicing, patient over-demand for services, or some combination of both (49). The government should strengthen supervision of over-treatment, especially unnecessary surgeries and over-diagnosis, to avoid both hospital waste and elevated out-of-pocket expenses for patients (50). While the phase 1 and phase 2 expenditure data reflect the HMS coping with the pandemic, the rising expenditures surpassing the 2017-2019 baseline point to the HMS's difficulty in managing healthcare costs, especially in the post-pandemic world.
Strengths and limitations of the study
This is the first study of patient use and healthcare expenses during different phases (outbreak, epidemic, vaccination, sporadic COVID-19, and post-epidemic) of China's pandemic. However, several limitations should be noted. First, our data are hospital-level monitoring data, which means patients' individual healthcare-seeking behavior could not be assessed. Future studies should assess patients' healthcare-seeking behavior from the perspective of different disease groups using individual data. More finely differentiated patient data, such as critical versus common illnesses and treatment of children aged 0-4, would provide new insights. Second, expenditure data on private hospitals and pharmacies were not included, which means our study may not reflect the whole health system's health expenditure.
Since private hospitals and pharmacies were not included, health expenditure above baseline in phase 4 and phase 5 suggests our data under-estimated total healthcare expenditures. Third, phase 5 ended in October 2021 to avoid the confounding effects of the Beijing Olympic Winter Games, which meant no new pandemic wave was identified. Fourth, while the Beijing results are likely to be representative of a large-city HMS, they may not reflect smaller cities.
Conclusion
Beijing's hospital system faced a large fall in hospital visits, emergency treatments and surgeries during the first phases of the COVID-19 pandemic. Beijing's HMS only reached pre-pandemic 2017-2019 levels of treatment in the last phases (phase 4 and phase 5) of the pandemic. In the early pandemic phases, primary hospitals played an important role in guaranteeing healthcare needs as patients substituted primary hospital treatment for high-level hospital treatment. This redistribution of patients reflected a better allocation of patient healthcare use, as patients moved away from over-used high-level hospitals towards primary care. By the sporadic COVID-19 phase 3, inpatient and outpatient visits, emergency visits and surgeries approached the 2017-2019 benchmark level, but only reached it in phase 4 and phase 5. While the gap with benchmark levels was quickly narrowed, only in phase 4 or phase 5 was the gap closed.
We identified two important further findings. First, medical expenditures in phase 4 and phase 5 may point to over-servicing by hospitals and excess demand for healthcare by patients. Second, the pandemic did not permanently change patients' preferences for high-level over primary hospitals. The failure to permanently change patient healthcare preferences means that the HMS faces the same challenges post-pandemic as in the pre-COVID-19 period. To strengthen the HMS in the post-pandemic world, we suggest that policy-makers focus on the following aspects: (1) improving the service capacity of primary hospitals in terms of the talent pool, medical technology, and medical equipment; (2) enhancing health education for patients and guiding patients with chronic and minor illnesses to seek medical care at primary hospitals and, more importantly, changing patients' preference for high-level hospitals; and (3) strengthening the supervision of hospital and physician behavior to avoid excessive medical treatment after the epidemic.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author. | 2023-04-27T13:21:53.468Z | 2023-04-27T00:00:00.000 | {
"year": 2023,
"sha1": "d5b53baeda3b390e42dbc150ed08ae8068117c67",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "d5b53baeda3b390e42dbc150ed08ae8068117c67",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": []
} |
15893155 | pes2o/s2orc | v3-fos-license | Reproductive health and pregnancy outcomes among French gulf war veterans
Background Since 1993, many studies on the health of Persian Gulf War veterans (PGWVs) have been undertaken. Some authors have concluded that an association exists between Gulf War service and reported infertility or miscarriage, but that effects on PGWVs' children were limited. The present study's objective was to describe the reproductive outcome and health of offspring of French Gulf War veterans. Methods The French Study on the Persian Gulf War (PGW) and its Health Consequences is an exhaustive cross-sectional study on all French PGWVs conducted from 2002 to 2004. Data were collected by postal self-administered questionnaire. A case-control study nested in this cohort was conducted to evaluate the link between PGW-related exposures and fathering a child with a birth defect. Results In the present study, 0.9% of the 5,666 Gulf veterans who participated reported fertility disorders, and 12% of male veterans reported at least one miscarriage among their partners after the PGW. Overall, 4.2% of fathers reported at least one child with a birth defect conceived after the mission. No PGW-related exposure was associated with any birth defect in children fathered after the PGW mission. Concerning the reported health of children born after the PGW, 1.0% of children presented a pre-term delivery and 2.7% a birth defect. The main birth defects reported were musculoskeletal malformations (0.5%) and urinary system malformations (0.3%). Birth defect incidence in PGWV children conceived after the mission was similar to birth defect incidence described by the Paris Registry of Congenital Malformations, except for Down syndrome (PGWV children incidence was lower than Registry incidence). Conclusion This study did not highlight a high frequency of fertility disorders or miscarriage among French PGW veterans. We found no evidence for a link between paternal exposure during the Gulf War and increased risk of birth defects among French PGWV children.
One study did not find any difference between the reproductive hormones measured in Gulf War veterans and in controls [24]. However, compared with non-deployed controls, Gulf War veterans had a significantly higher risk of reported infertility (OR: 1.4 to 1.5 according to the fertility type) [28] and a higher risk of self-reported sexual problems (OR: 3.5, p < 0.001) [24].
Most of the studies on the children of Gulf War veterans found no evidence of an increase in the risk of birth defects [18,21,22,24,29,31] or congenital diseases [22,24]. Some authors reported a higher prevalence of reported birth defects in babies of Gulf War veterans conceived after the Gulf War [22,23,25]. When looking at specific defects, only Araneta et al [19] observed a higher prevalence of renal agenesis and hypoplasia among children conceived postwar to GWV men, adjusted for prenatal alcohol exposure and intrauterine growth retardation. The other birth defects described (cardiac valve disorder among children conceived postwar to GWV men and hypospadias among children conceived postwar to GWV women) were no longer significant after adjustment for maternal parameters, branch of military service, and military rank [19].
For Doyle et al [32], there is no strong or consistent evidence in the literature of an effect of paternal service in the first Gulf War on the risk of major birth defects or stillbirth in offspring conceived after deployment, even if effects on specific rare defects cannot be excluded. There is some evidence of small increased risks of miscarriage or infertility associated with service, but the role of bias cannot be ruled out [32]. Finally, with regards to female veterans, firm conclusions cannot be drawn due to lack of sufficient information [32].
The present study reports findings relating to the reproductive outcome and health of offspring of French Gulf War veterans. The French Study on the Persian Gulf War and its Health Consequences is an exhaustive investigation into all French PGWVs conducted from 2002 to 2004. The aim of this descriptive study was, mainly, to examine self-reported symptom data among Gulf War veterans and to describe the main exposures reported in the theater, the symptoms and diseases that appeared during and after the Persian Gulf mission, and children's health.
Population study
Detailed information about this study is given elsewhere [33]. In brief, the French Study on the Persian Gulf War and its Health Consequences is a cross-sectional study with an exhaustive aim which included all civilians and military personnel serving in the Persian Gulf from August 1990 to July 1991. The addresses of 10,478 French troops who served in the Gulf during the period from August 1990 through July 1991 were identified, based on data transmitted by military staff, and 5,666 participated in the study after receiving two reminders. The participation rate was 54% and varied by branch of service (56% in the Army, 55% in the Air Force, and 43% in the Navy).
Questionnaire
Data collection was based on a 12-page self-administered postal questionnaire, accompanied by an explanatory letter stating the objectives of our study and a consent form. The questionnaire was developed on the basis of information published in the French authorities' reports and with reference to questionnaires used in PGWV morbidity studies. The questionnaire, tested on a sample of the target population, requested details of disorders requiring medical consultation, miscarriage or stillbirth, and the number of children born before and after the conflict. If a child presented a disease, his or her year of birth, gender, and detailed illnesses were requested. The notion of infertility was inferred from the following specific disorders: infertility or sperm abnormalities. The questionnaire also explored (i) socio-demographic characteristics (gender, age), (ii) military history (service branch, rank, military status on completion of the questionnaire), (iii) living conditions and self-reported exposures of the PGWVs during the Persian Gulf mission (sandstorms, smoke from oil well fires, chemical or bacteriological alerts, vaccinations, medication, and pesticides), and (iv) diseases and symptoms before, during, and after the mission. Hospitalization after the mission for one of the 49 symptoms of the Hopkins Symptom Checklist [34] was also reported.
On reception, the questionnaires were made anonymous, coded by an epidemiologist (CV) according to ICD-10 [35], entered into a database, and analyzed. Birth defects presented by PGWV offspring were grouped for analysis based on the classification system used in the European Registry of Congenital Anomalies (EUROCAT) [36].
Statistical analysis
We used Stata™ statistical software for all analyses. All p values are two sided, and we took values less than 0.05 to indicate statistical significance.
First, a descriptive analysis was conducted on PGWV: i) living conditions and exposures during the mission; ii) self-reported infertility disorders and miscarriage that appeared after the PGW mission.
Secondly, as data on healthy children were not available (except for the number of children), we decided to perform a case-control study nested within the cohort to determine the effect of PGW exposures on the risk of a father conceiving a child with a birth defect after the war. The date of birth had to be later than nine months after the end of the PGW mission for a child to be considered as having been conceived after the war. A case was defined as a man having had at least one child conceived after the PGW mission and presenting at least one birth defect. Two controls were selected for each case. A control was defined as a man of the same age as the case (± 1 year) who had never had a child with a birth defect but who had had at least one child after the PGW mission. Fathers having had at least one child with a birth defect before the PGW were excluded from this analysis. To minimize recall bias, a control should have reported one hospitalization for at least one symptom of the Hopkins Symptom Checklist [34] after the mission. PGW exposure odds-ratios were estimated by conditional logistic regression and then adjusted for service branch, rank, and military status.
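A matched analysis of this kind could be sketched as below. The original analysis was run in Stata, so this is only an illustrative Python sketch assuming statsmodels' ConditionalLogit (available in recent statsmodels releases); the file name and all variable names are hypothetical.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit  # statsmodels >= 0.10

# Hypothetical input: one row per subject of the nested case-control study.
# 'case' is 1 for a father of a child with a birth defect, 0 for a matched control;
# 'matched_set' links each case to its two age-matched controls.
df = pd.read_csv("nested_case_control.csv")

exposures = ["oil_fire_smoke", "sandstorms", "chemical_alerts", "pesticides"]
adjusters = ["branch_air_force", "branch_navy", "officer", "still_in_service"]

# Conditional logistic regression stratified on the matched sets
fit = ConditionalLogit(df["case"], df[exposures + adjusters],
                       groups=df["matched_set"]).fit()

print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```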
Finally, the study provides a description of children born to PGWVs after the Gulf War. Major anomalies were described as Maconochie et al. reported [27]. Minor anomalies were coded and specified. As no national registry on birth defects has been developed in France, PGWV children's birth defect rates were compared to the 10-year incidence rate of birth defects among children conceived from 1991 to 2000, as described by the Paris Registry of Congenital Malformations [37]. The Paris Registry rate was considered the reference rate among the population. The confidence interval of the PGWV children's birth defect rate was estimated according to a Poisson distribution. The incidence ratio was calculated by dividing the PGWV children's birth defect rate by the reference rate.
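A minimal sketch of this rate, exact Poisson interval, and incidence ratio calculation is shown below. The birth defect count and number of live births are the figures reported in the Results section; the reference rate used at the end is a placeholder, not a value taken from the Paris Registry.

```python
from scipy.stats import chi2

def poisson_rate_ci(count, denominator, per=10_000, alpha=0.05):
    """Exact (chi-square based) Poisson confidence interval for an incidence rate."""
    lower = 0.5 * chi2.ppf(alpha / 2, 2 * count) if count > 0 else 0.0
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (count + 1))
    scale = per / denominator
    return count * scale, lower * scale, upper * scale

# 140 children with at least one birth defect among 5,183 live births
# conceived after the PGW mission (figures from the Results section)
rate, lo, hi = poisson_rate_ci(140, 5183)
print(f"{rate:.1f} per 10,000 live births [95% CI: {lo:.1f}-{hi:.1f}]")

# Incidence ratio against a reference rate (placeholder value, per 10,000)
reference_rate = 320.0
print("incidence ratio:", round(rate / reference_rate, 2))
```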
The National Commission of Data Processing and Civil Liberty approved this investigation, in conformity with article 15, paragraph 3 of the Law of January 6, 1978, concerning data processing, files, and civil liberty.
Study population
Most of the 5,666 subjects who completed the questionnaire were male (99.5%), with an average age of 41 years (SD: 6 years) at the time of completion of the survey, and 71% of respondents were still in service. Considering the sample size of female veterans, the results are presented below by gender.
Reproductive health of French female Persian Gulf War veterans
The mean age of the 28 women who completed the questionnaire was 44 years (37 to 57 years). They served mainly in military health services (n = 14) and in the Air Force (n = 12). One woman served in the Navy and one in the Army. Six women reported at least one miscarriage after the mission. Nine women reported having at least one child after the mission (total of 25 babies), ranging from 1 to 4 children. No birth defect was reported in children born after the mission.
Reproductive health of French male Persian Gulf War veterans
The main characteristics of the 5,638 male veterans are presented in Table 1. Respondents served mainly in the Army or the Air Force and were servicemen. The respondents' age varied according to service: 63% of Army veterans, 52% of Navy veterans, and 28% of Air Force veterans were less than 40 years old (p < 0.001).
Infertility and other reproductive outcomes
Infertility and reproductive outcomes are described in 5,638 male veterans. Infertility problems (such as spermogram anomalies) were reported by 48 respondents, mainly Army veterans (Table 1).
After deployment, 3,121 veterans fathered at least one child (corresponding to 5,158 babies). The number of pregnancies reported per father varied from 1 to 10 pregnancies.
After deployment, 682 male veterans (12%) reported at least one miscarriage by their partner, 33 of whom reported a miscarriage before and after the mission. Overall, 135 men (4.3% of fathers) reported at least one child born with at least one birth defect (140 children), of whom 4 reported having one child presenting a birth defect both before and after the mission (8 children). After the PGW mission, 131 fathers (4.2%) reported at least one child born with a birth defect, without having conceived a child with a birth defect before the PGW mission.
Case-control study of PGW-related exposures on any birth defects
The case-control study of PGW-related exposures on any birth defects included 131 cases and 262 controls. The 262 controls were randomly selected from among the 845 veterans who had reported one hospitalization for at least one symptom (14.9% of participants), and matched for age (± one year). Characteristics of subjects in terms of branch of service, rank and military status, and crude and adjusted odds ratios for any birth defects are presented in Table 2.
Cases did not differ from controls according to service branch, rank, or military status.
Description of PGW-related exposures and odds ratios for any birth defects adjusted for branch of service, rank, and military status are presented in Table 3.
Service period and locations for the PGW mission (Iraq and/or Kuwait) were not different for cases and controls. Concerning PGW-related exposures, cases and controls reported similar exposure to the smoke of oil well fires, sandstorms, chemical alarms, and pesticides. Controls more often reported exposure to the sounding of chemical alarms (74.8% vs 64.9%), a difference which did not persist after adjustment for service branch, rank, and military status.
Health of live born children
The health of live born children and birth defects are described in 5,183 children conceived after the PGW mission. Incidence of pre-term delivery reported in children conceived after the PGW mission was 104.2/10,000 live births [95% CI: 78.3-135.9]. Only 140 babies conceived after the PGW mission presented at least one birth defect (270.1/10,000 live births [95% CI: 227.2-318.7]). Eight babies presented two malformations. These associations were not specific.
The different major birth defects of babies conceived after the Gulf War mission are presented in Table 4. The main birth defects reported in babies conceived after the mission were anomalies of the musculoskeletal system (rate: 46.3/10,000) and urinary system anomalies (34.7/10,000). However, no specific birth defect was found. Only 21 children conceived in the first two years following the conflict presented birth defects, mainly malformations of the digestive system.
Comparison with the Paris Registry of Congenital Malformations
Discussion
In this study, 1% of the Gulf veterans reported fertility disorders and 12% of male veterans reported at least one miscarriage among their partners after the PGW mission. Overall, 4.2% of fathers reported at least one child born with a birth defect after the mission. No PGW-related exposure was associated with any birth defect in children fathered after the PGW mission. Concerning the reported health of children born after the PGW mission, 1.0% of children were born with pre-term delivery and 2.7% presented a birth defect. The main birth defects reported were musculoskeletal malformations (0.5%) and urinary system malformations (0.3%). Birth defect incidence in PGWV children conceived after the mission was similar to birth defect incidence described by the Paris Registry; except for Down syndrome, where the incidence in PGWV children was lower than the Paris Registry incidence.
Results are reported with several reservations. Although we sent out two reminders, our study did not have a high response rate, which suggests selective participation by respondents according to health outcome. The French Study on the Persian Gulf War and its Health Consequences aimed to be exhaustive, offering a free medical examination, provided by the French Government, to all veterans. The media in foreign countries gave massive coverage to complaints lodged due to the health consequences of the Gulf War, so respondents with health disorders, or at least worried about them, were more willing to participate in our study. However, this bias could be partly compensated for by the fact that 71% of respondents were still in service.
Data were gathered retrospectively and were based on veterans' self-reported data and were not validated against medical records for healthy veterans or their children. However, it is probable that symptoms or serious diseases requiring specific care or of unusual frequency or intensity were reported more often than events considered as slight or benign.
Recall bias can be a serious problem in case-control studies; when pre-recorded written exposure information is unavailable, controls with identical stimuli should be selected (e. g. a control series of children with malformations other than the one under study) [38]. We conducted a case-control study including controls matched for age and who reported one hospitalization for at least one symptom in order to minimize the recall bias. A direct relationship of birth defects to characteristics of the PGW mission could not be shown because i) data were collected retrospectively on a 10-year period, ii) information was based on the declarations of the PGW veterans (most were men, with no information on the children's mother), and iii) this study can only highlight associations between exposures and diseases and not evidence. Moreover, many children were conceived long after the war itself, personal risk factors (i.e., family history or history of another child with a birth defect) were not examined, and there is a potential for misreporting.
Rates of congenital malformations in PGWVs' children and in the Paris Registry of Congenital Malformations
No information was available on the use of medication.
Results must be interpreted with caution due to the small number of case children for specific birth defects. Despite these major limitations, frequencies shown in French PGWVs were similar to frequencies in the French general population, except for the low frequency of Down syndrome. In our study, dates of birth of the unaffected children were not available and fathers' ages at birth were not known. However, 82% of fathers were 25 to 35 years old during the period covered by the study. Paternal age is related to maternal age and the risk of infecundity, miscarriage, and birth defects seems to increase with paternal age [39,40]. In the registry, 22% of mothers were above 35 years of age [37]. The analysis could not be controlled for mothers' age or parity.
The results of fertility studies on Gulf War veterans are controversial. The first studies published [24,28] did not find any effect of Gulf War service on markers of male fertility (hormone measurements, oligospermia, azoospermia, asthenospermia, teratospermia, sexual problems). However, some authors described an increased risk of infertility reaching 7% to 14% [25,26], mainly due to an increase of teratospermia and oligoasthenospermia [28]. The low estimation (0.9%) in our study could be explained by the self-reported infertility data collected in this study. Besides these sperm anomalies, pregnancies fathered by PGWVs seemed to take longer to conceive [27].
Authors reported an increased risk of miscarriage among partners of PGWVs, compared to a control group [22,23,[25][26][27], ranging from 12% to 60% among first pregnancies conceived after the PGW mission [25]. In our study, this miscarriage rate was 12% after the mission.
In our study, the frequency of birth defects was slightly lower (2.7%) than the frequencies described in other studies (ranging from 3.6 to 9.0%) [21][22][23]25,26]. The rate of birth defects among the French general population is estimated at 3.2% of all births for the period 1981-2000 [37]. No specific birth defect was highlighted in our study. Moreover, as one hypothesis was that wartime exposures adversely affected spermatogenesis, it was therefore reasonable to evaluate conceptions that occurred 70-90 days after leaving the war environment. The assessment of conceptions that occurred 3-10 years after the PGW mission would be more a reflection of non-war exposures and advanced paternal age than of distant wartime exposures. Since the number of live births per year was not available, in order to show a statistical difference between the birth defect rates of each of the two periods (1991-1992 vs 1993-2001), we estimated by simulation that fewer than 700 live births were needed in 1991-1992 (i.e. 13% of all children born after the PGW mission). However, this figure did not seem plausible in view of the context (a return home after an overseas mission). Specific birth defect frequencies were lower in our study than frequencies described by Doyle [22] in UK PGWVs: malformations of the musculoskeletal system (8.0‰ in Doyle's study vs 4.6‰ in our study), or malformations of the urinary system (4.9‰ in Doyle's study vs 3.5‰ in our study). The frequency of renal anomalies (1.4‰) was similar to that described by Araneta [19] in US PGWVs (1.1‰), and chromosomal anomalies were similar to those previously described (2.7‰ in our study, 0.2‰ to 2.6‰ in the literature) [19,25]. Since 1970, the increase in malformations of the urinary, central nervous, and cardiac systems could be explained by the widespread use of ultrasound, (particularly in antenatal diagnosis), which can identify malformations even without clinical symptomatology.
A combination of genetic and environmental factors may be responsible for 20 to 25% of congenital anomalies [41]. US Gulf veterans were exposed to many chemical, biological, and physical agents suspected of being reproductive toxins [42]. Constant infertility over time of UK veterans, described by Maconochie [28], argues in favor of either paternal germ cell mutation or other damage to spermatogenic stem cells necessary for supporting spermatogenesis. Combined exposure to pyridostigmine bromide, the insect repellent DEET, and the insecticide permethrin seemed to induce apoptosis in rat testicular germ cells, Sertoli cells, and Leydig cells [43]. However, Arfsten [44] showed that implantation of depleted uranium in adult rats does not have an adverse impact on male reproductive success, sperm concentration, or sperm velocity. Our case-control analysis did not highlight the role of PGW-related exposures in birth defects among PGWV children.
Conclusion
In conclusion, this study found the same frequencies of fertility disorders, miscarriage, and health disorders among PGWV children as those described among foreign PGWVs and the French general population. No PGW-related exposure was associated with any birth defect in children fathered after the PGW mission. Our findings are limited by the reliability of self-reported data concerning exposures and health and pregnancy outcomes. This study highlights the importance of prospective data collection for exposures during future foreign operations and epidemiological surveillance of servicemen and women.
However, if the fertility disorders and birth defects remain constant over time, a more detailed and focused survey would be required to examine fertility and other aspects of reproduction more thoroughly. | 2016-05-09T20:46:10.767Z | 2008-04-28T00:00:00.000 | {
"year": 2008,
"sha1": "6c4ad2c2c9c9635c031495bcfc55c4f8ee674238",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-8-141",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7fece209fe649074c3f38ffb3fcd1e137c95e3b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18311473 | pes2o/s2orc | v3-fos-license | Modification of the Casimir Effect due to a Minimal length scale
The existence of a minimal length scale, a fundamental lower limit on spacetime resolution is motivated by various theories of quantum gravity as well as string theory. Classical calculations involving both quantum theory and general relativity yield the same result. This minimal length scale is naturally of the order of the Planck length, but can be as high as ~TeV^-1 in models with large extra dimensions. We discuss the influence of a minimal scale on the Casimir effect on the basis of an effective model of quantum theory with minimal length.
1. The minimal length scale
Motivation
The idea of a minimal length has a long history and was already discussed by W. Heisenberg in the 1930s, who recognised its importance in regularising UV-divergences 1 . Today, theories beyond the standard model such as string theory or loop quantum gravity - as diverse as they may be - all suggest the existence of a fundamental limit to spacetime resolution of the order of the Planck length. Thus, the motivations for the existence of a minimal length scale are manifold:
• In perturbative string theory 2,3 , the feature of a fundamental minimal length scale arises from the fact that strings cannot probe distances smaller than the inverse string scale. If the energy of a string reaches this scale M_s = 1/√α′, excitations of the string can occur and increase its extension 4 . In particular, an examination of the spacetime picture of high-energy string scattering shows that the extension of the string is proportional to its energy 2 in every order of perturbation theory. Due to this, the uncertainty in position measurement can never become arbitrarily small.
• In loop quantum gravity, spacetime itself is quantised and thus measurements of area and volume at small scales must fall into the spectrum of the respective self-adjoint operators, which is discrete 5 .
• Including gravitational effects from general relativity into a classical analysis of the process of position measurement yields a minimal uncertainty 6 , i.e. a minimal length is implicitly contained in the standard model (SM) combined with general relativity.
Large extra dimensions
Arkani-Hamed, Dimopoulos and Dvali proposed a solution to the hierarchy problem (the hugeness of the Planck scale compared to the scale of electroweak symmetry breaking) by the introduction of d additional compactified spacelike dimensions in which only the gravitons can propagate [7,8]. The SM particles are bound to our 4-dimensional sub-manifold, often called our 3-brane. Due to its higher-dimensional character, the gravitational force at small distances is then much stronger in these models. This results in a lowering of the Planck scale to a new fundamental scale, $M_f$, which can be as low as the TeV range. Accordingly, in such models the minimal length scale increases to a new fundamental length scale $L_f$.
Quantum theory with minimal length
To include effects of the minimal length, we assume that at arbitrarily high momentum p of a particle, its wavelength is bounded by some minimal length $L_f$ or, equivalently, its wave-vector k is bounded by $M_f = 1/L_f$ [9]. Thus, the relation between the momentum p and the wave vector k is no longer linear, p = k, but a function k = k(p) (note that this is similar to introducing an energy dependence of Planck's constant), which has to fulfil the following properties [10,11]:
a) For energies much smaller than the new scale it yields the linear relation: for $p \ll M_f$ we have $p \approx k$.
b) It is an uneven function (because of parity) and $\vec{k} \parallel \vec{p}$.
c) The function asymptotically approaches the bound $M_f$.
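One simple function fulfilling (a)-(c), given here purely as an illustration and not necessarily the specific relation used in Refs. [10,11], is
$$k(p) = M_f \tanh\left(\frac{p}{M_f}\right),$$
which is linear for $p \ll M_f$ (since $\tanh x \approx x$ for small x), odd in p, and approaches the bound $M_f$ asymptotically.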
The quantisation in this scenario is straightforward and follows the usual procedure. Using the well-known commutation relations and inserting the functional relation between the wave vector and the momentum then yields the modified commutator for the momentum, Eq. (1), and results in the generalized uncertainty principle (GUP), Eq. (2), which reflects the fact that it is not possible to resolve space-time distances arbitrarily well. Because k(p) becomes asymptotically constant, its derivative ∂k/∂p eventually vanishes and the uncertainty (Eq. (2)) increases for high momenta. Thus, the introduction of the minimal length reproduces the limiting high energy behavior found in string theory [2]. In field theory, one imposes the commutation relations Eq. (1) and (2) on the field φ and its conjugate momentum Π. Its Fourier expansion leads to the annihilation and creation operators, which must obey correspondingly modified commutation relations.
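In models of this type, the commutator between position and momentum and the resulting GUP take the schematic form (a sketch; the exact expressions depend on the choice of k(p)):
$$[\hat{x}_i, \hat{p}_j] = i\,\frac{\partial p_i}{\partial k_j}\,, \qquad \Delta x\,\Delta p \;\geq\; \frac{1}{2}\left|\left\langle \frac{\partial p}{\partial k} \right\rangle\right| ,$$
so that the uncertainty grows once $\partial k/\partial p$ begins to vanish at high momenta.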
The Casimir energy
Zero-point fluctuations of any quantum field give rise to observable Casimir forces if boundaries are present [12]. Here, we consider the case of two conducting parallel plates at a distance a in the z direction. Using the framework developed above, in the presence of a minimal length the vacuum expectation value (VEV) for the field energy density is now given by an integral over the modified momentum space [13], where E is the energy of a mode with momentum p. Here, we have used the specific relation for k(p) from Ref. [11], in which $\hat{e}_\mu$ is the unit vector in the µ direction; it is easily verified that this relation fulfils the requirements (a)-(c).
To obtain the Casimir energy, the difference of the VEVs of the inside and the outside regions of the plates has to be taken. For Minkowski space in 3+1 dimensions without boundaries, the energy density in the present model with minimal length is finite, due to the squeezed momentum space at high momenta. The quantisation of the wavelengths between the plates in the z-direction yields the condition $k_l = \pi l / a$. Since the wavelengths can no longer get arbitrarily small, the smallest wavelength possible belongs to a finite number of nodes $l_{\max}$. As a result, momenta come in steps $p_l = p(k_l)$ which are no longer equidistant, $\Delta p_l = p_l - p_{l-1}$. The mode sum for the energy between the plates is then taken over these $p_l$, where $p^2 = p_x^2 + p_y^2$ and $E^2 = p^2 + p_l^2$. The result of our calculation is shown in Fig. 1. The slope of the curve changes whenever another mode fits between the plates. Although the slope (and thus the Casimir force) is singular at these points, the plot clearly shows that a finite energy is sufficient to surmount them and thus the result is physical. These singularities result from the assumption of two strictly localised plates and might be cured in a full theory by the minimal length uncertainty on the plate positions.
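The non-equidistant momentum steps can be illustrated numerically. The following Haskell sketch assumes the illustrative relation k(p) = M_f tanh(p/M_f) introduced above (not the exact relation of Ref. [11]) together with hypothetical parameter values; it lists the quantised momenta p_l and the steps Δp_l up to l_max.

import Text.Printf (printf)

mf, aSep :: Double
mf   = 1.0    -- minimal-length scale M_f (units with hbar = c = 1)
aSep = 20.0   -- plate separation a, in units of L_f = 1/M_f

-- Inverse of the illustrative relation k(p) = M_f * tanh(p / M_f).
pOfK :: Double -> Double
pOfK k = mf * atanh (k / mf)

-- Largest admissible node number: k_l = pi*l/a must stay below M_f.
lMax :: Int
lMax = floor (aSep * mf / pi - 1e-9)

main :: IO ()
main = do
  let ks  = [pi * fromIntegral l / aSep | l <- [1 .. lMax]]
      ps  = map pOfK ks
      dps = zipWith (-) ps (0 : ps)   -- steps Delta p_l, no longer equidistant
  printf "l_max = %d\n" lMax
  mapM_ (\(l, pl, dpl) -> printf "l=%2d  p_l=%8.5f  dp_l=%8.5f\n" l pl dpl)
        (zip3 [(1 :: Int) ..] ps dps)

The output shows the steps Δp_l growing with l, reflecting the squeezing of k-space near M_f.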
Conclusion
The existence of a minimal length scale is justified on various grounds. The minimal length is considerably increased in models with large extra dimensions. We presented an effective model that incorporates the minimal length into quantum theory. As an application, the Casimir energy for two parallel plates was studied. This example depicts nicely how the minimal length acts as a natural regulator for infinities in quantum field theories. | 2014-10-01T00:00:00.000Z | 2005-05-02T00:00:00.000 | {
"year": 2005,
"sha1": "9676fb304eda127b1db2a2a7e9a31a2b32d4931c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/0505010",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "89d105bea0accdd8c52e4c24d7f525ad983b5975",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
89936038 | pes2o/s2orc | v3-fos-license | Exploration of Ethnopharmacological Potential of Antimicrobial , Antioxidant , Anthelmintic and Phytochemical Analysis of Medicinally Important Plant Centella asiatica ( L . ) Urban in Mart . and Eichl
As there is huge pressure on cultivated medicinal plants, a large number of plants are being eradicated yearly. To reduce this pressure on cultivated plants, an effort is being made to use wild plants as good and cheaper medicinal agents. The present study was undertaken to determine the antimicrobial activity, antioxidant activity and pharmacological profile of Centella asiatica. It is a wild plant mostly found in damp places of plains and foothills. It was collected, dried and extracted by the maceration method in different polar and non-polar solvents, i.e. petroleum ether, chloroform, methanol and distilled water. These extracts were further used to determine the antimicrobial, antioxidant and anthelmintic activities. Centella asiatica showed remarkable values comparable with standard antimicrobial and antioxidant agents. Well-defined zones of inhibition were recorded, indicating that the plant was potent against pathogenic microbes, i.e. bacteria (Staphylococcus aureus, Staphylococcus saprophyticus, E. coli, Pseudomonas aeruginosa) and fungi (Aspergillus parasiticus and Rhizopus oryzae). The antioxidant activity of all the plant extracts was studied by DPPH assay, total antioxidant assay and total phenolic assay, and remarkable values comparable with standard antioxidants were recorded. For pharmacological analysis, different secondary metabolites were screened qualitatively.
Introduction
Plants are the oldest source of pharmacologically active compounds and have been very useful to humankind as a source of medically important compounds for centuries. Today it is estimated that more than two thirds of the world's population relies on plant-derived drugs, and 7000 medicinal compounds used in the Pharmacopoeia are derived from plants. Previously the plant C. asiatica was also known as Hydrocotyle asiatica and commonly as Brahmi Booti. It belongs to the family Apiaceae. It is mostly found in damp places of plains and foothills. The stems are slender, creeping stolons, green to reddish green in color, interconnecting one plant to another. It has long-stalked, green, reniform leaves with rounded apices, which have a smooth texture with palmately netted veins. The leaves are borne on pericladial petioles, around 2 cm long. The rootstock consists of rhizomes growing vertically down. They are creamish in color and covered with root hairs. The flowering period ranges from April to September [1]. In previous literature, a variety of chemical compounds from C. asiatica have been documented, such as alkaloids, hydrocotylin, pectic acid, essential oils and asiatic acid. Medicinally the plant is considered very important, being a tonic, diuretic and local stimulant for skin diseases, as well as a memory sharpener [2].
In Africa, chewing sticks are the most common means of maintaining oral hygiene, and roots, stems and twigs of numerous plants are employed for this purpose. Chewing sticks are recommended for oral hygiene by the World Health Organization, and some of them, or their extracts, are also used in the ethnomedical treatment of oral infections. Primary screens have demonstrated that extracts from many chewing sticks have antimicrobial activity against a broad spectrum of microorganisms, including those commonly implicated in orofacial infections. Some chewing stick extracts have additional biological activities. Preparation, extraction and antimicrobial screening methodologies are largely unstandardized, and bioactivity-guided fractionation has only been conducted on a few chewing stick extracts. It is therefore highly likely that many chewing sticks contain secondary metabolites with as yet unreported antimicrobial activity. Antimicrobial principles that have been identified include novel flavonoid compounds and alkaloids. Chewing sticks offer considerable and underexploited potential as sources of new antimicrobial backbones [3]. Jeewan et al. [4] observed the ethnopharmacological and antimicrobial properties of certain medicinal plants used by Adivasi tribes of the Eastern Ghats of Andhra Pradesh, India. They used 23 crude drug samples for various skin diseases and assayed them for antimicrobial activity against four bacterial and one fungal human pathogen.
Materials and Methods
The work was designed to qualitatively analyze the plant for its phytochemicals and to quantitatively analyze its pharmacological aspects, namely antimicrobial, antioxidant and anthelmintic activities. The plant was collected from the GCU Botanic Garden during the month of February. The plant specimen was authenticated and submitted to the GCU herbarium. The collected specimen was dried at room temperature and finely ground into powder. The extraction was then done by the maceration method in a series of non-polar and polar solvents, i.e. petroleum ether, chloroform, methanol and distilled water.
Phytochemical Analysis
Qualitative phytochemical analysis of the crude extracts of Centella asiatica was carried out using standard procedures to identify the constituents, as described by Edeoga et al. [5].
Antimicrobial Activity
Antimicrobial activity of the extracts was determined according to Ortega et al. [6] and Ferreira et al. (1996) by the agar well diffusion method. The fungi were cultured on potato dextrose agar medium, which was prepared according to Johansen [7].
The antimicrobial activity was tested against four bacterial (Staphylococcus aureus, Staphylococcus saprophyticus, E. coli and Pseudomonas aeruginosa) and two fungal (Aspergillus parasiticus and Rhizopus oryzae) strains. The following standard antimicrobials were used for comparison:
Ampicillin disc (10 µg) against Staphylococcus aureus
Ampicillin disc (10 µg) against Staphylococcus saprophyticus
Amikacin disc (30 µg) against Pseudomonas aeruginosa
Sulphomethoxazole disc (23.75 µg) against E. coli
Fluconazole in the form of a dilution of 250 mg/625 ml against Aspergillus parasiticus and Rhizopus oryzae
The whole process was carried out under aseptic conditions. The zones of inhibition became prominent after the incubation time, i.e. 24 hours for bacteria and 48 hours for fungi.
Antioxidant Activity
For antioxidant evaluation of the plant extracts, different assays were run: the DPPH (2,2-diphenyl-1-picrylhydrazyl) radical scavenging assay, the total antioxidant assay and the total phenolic assay.
The DPPH assay was done according to Erasto et al. [8]. The total antioxidant capacity of all the extracts was assayed according to the method of Prieto et al. [9], whereas the total phenolic assay was done following the methodology of Makkar et al. [10].
Anthelmintic Activity
Haemonchus contortus, an intestinal parasite of sheep, was used, following the standard procedure.
Results and Discussion
Different polar and non-polar solvents were used for the extraction, and the antimicrobial aspect was assessed, with different inhibitory values against the microbes recorded in mm. In most of the cases the fungal species were strongly affected. Moreover, the water extracts in the majority of cases showed the maximum inhibitory values against the microorganisms used.
The plant showed positive results for alkaloids, saponins, tannins, phlobatannins, cardiac glycosides and flavonoids, by producing characteristic precipitates, froth or ring formation, whereas it showed negative results for terpenoids, coumarins and anthraquinones (Figures 1-9).
The Centella asiatica petroleum ether leaf extract produced the maximum zone of inhibition, 68 ± 1.64^a, against S. aureus among all the bacteria, and the minimum inhibitory zones were observed for the rhizome methanol and chloroform extracts against S. saprophyticus, at 3 ± 1.84^b and 3 ± 1.69^b respectively.
For the fungal strains, the maximum inhibition value was given by the rhizome methanol extract (52 ± 0.76^a) against R. oryzae, and the minimum by the leaf petroleum ether extract (4 ± 0.76^a) against A. parasiticus.
The absorption values at 517 nm and the percentage DPPH inhibition values of all the extracts were recorded and compared with the standard antioxidants, BHT (butylated hydroxytoluene) and α-tocopherol. Among all the extracts of Centella asiatica, the leaf chloroform and rhizome chloroform extracts showed DPPH activity closest to that of BHT, with absorption values of 0.13 ± 0.007 and 0.13 ± 0.01 respectively.
For the total antioxidant assay, the absorption values were recorded at 695 nm, and the rhizome extracts of Centella asiatica were closest to α-tocopherol, with values of 0.54 ± 0.03, 0.57 ± 0.01 and 0.57 ± 0.01 respectively.
As far as the total phenolic assay is concerned, the results were obtained by comparison with a gallic acid standard and expressed as µg/g gallic acid equivalent; the higher the value, the greater the phenol content. The whole procedure was carried out on three sample replicates, and the means of these values were recorded together with the standard deviation among the replicates. The maximum value was shown by the rhizome water extract, 422 ± 8.62, while the rhizome petroleum ether extract showed the lowest value.
The results indicated that the plant is antimicrobial in nature. Some of the extracts showed very high antimicrobial potential against bacteria and fungi, while others were comparatively less antimicrobial. A variety of standard antimicrobial discs were run to compare the zones of inhibition against bacteria such as S. aureus, S. saprophyticus, E. coli and P. aeruginosa, and fungi such as A. parasiticus and R. oryzae. The Centella asiatica rhizome chloroform extract showed the maximum zone of inhibition, 68 ± 1.64^a, against S. aureus among all the bacteria; this very high antimicrobial value may be due to compounds with strong antibiotic potential present in the extract. The maximum antifungal value was observed in the rhizome methanol extract (52 ± 0.76^a) against R. oryzae. Generally, secondary metabolites such as alkaloids, terpenoids and tannins are readily extracted in solvents of higher polarity such as methanol, so the high antifungal value of the extracts may be due to the presence of such compounds, while lower antimicrobial values may be due to low concentrations or the absence of such compounds. Rios and Recio [11] carried out somewhat related work on antimicrobial activity and recorded results comparable with those obtained in the present work.
In the same way, Ibrahim et al. [12] examined the leaf extracts of two Nigerian edible vegetables by the agar well diffusion method on selected food-borne pathogens of medical importance for their antimicrobial activity. Both aqueous and ethanolic extracts of these plants were tested against E. coli, Staphylococcus aureus, Bacillus cereus, Shigella dysenteriae and Salmonella typhimurium, of which the last showed better and significant antibacterial activity among all tested samples.
As far as the antioxidant activity is concerned, the results revealed that the plant had significant free radical scavenging activity. This free radical scavenging activity might be one of the mechanisms by which the plant extracts exhibited high antioxidant activity. Hence the present study provides strong evidence for their use in the food industry and in medicine. The Centella asiatica leaf chloroform and rhizome chloroform extracts showed values closest to BHT, with absorption values of 0.13 ± 0.007 and 0.13 ± 0.01 respectively. As these values were very close to those of the standard samples, the plant can be considered highly antioxidant. Irina et al. (2001) recorded somewhat similar findings, using the same methodology for testing the antioxidant activity of different extracts. Just like the present work, Seneviratne and Kotuwegedara [13] compared the antioxidant activities of the phenolic extracts of seed oils and seed hulls of five plant species with those of butylated hydroxytoluene (BHT) solutions at comparable phenolic concentrations, in order to understand the phenolic dependence of the antioxidant activity and to evaluate the potential of these phenolic extracts as alternatives to synthetic antioxidants. Antioxidant activities of the phenolic extracts from different plants varied even at equal total phenol concentrations, and o-diphenol contents showed better correlations with the antioxidant activities than total phenol contents.

Regarding the anthelmintic activity of Centella asiatica, the petroleum ether leaf extract was the strongest, as the worms survived in this extract for only 2 hours, while the distilled water leaf extract was the mildest, as the worms survived for 4 hours. When compared with the standard medicine (levamisole), in which the worms survived for 4 hours, the water leaf extract was almost equal in strength to the standard medicine, whereas the petroleum ether leaf extract proved much stronger. Among the rhizome extracts, all showed an equal mortality rate (3 hours survival) and hence are all weaker than the standard medicine.

Figure 3. Zone of inhibition produced by standard discs against microbes.
Figure 4. Antioxidant activity of various extracts of Centella asiatica in different solvents through DPPH assay.
Figure 6. Antioxidant activity of various extracts of Centella asiatica in different solvents through total antioxidant assay.
Figure 7. Antioxidant activity of various extracts of C. asiatica through total phenolic assay.

Conclusion

In the present study an effort has been made to determine the different ethnopharmacological effects of Centella asiatica. The plant fractions showed very potent antimicrobial results, and it can thus be concluded that the plant can be used as a good source of agents against microbes. As far as the antioxidant activity is concerned, many of the extracts showed strong antioxidant activity together with a very high content of phenolic agents; thus they can be considered good anti-aging agents. The anthelmintic activity of the plant also showed good results against the worms. Considering all these aspects collectively, we can conclude that the plant is a very active medicinal agent and, being a natural resource and thus without side effects, can be used as an authentic source for reducing the pressure on the pharmaceutical industry. | 2019-01-11T02:08:38.834Z | 2017-01-19T00:00:00.000 | {
"year": 2017,
"sha1": "527aecd15c2ec600df3224c8e6edaf5f75f42265",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=73673",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "527aecd15c2ec600df3224c8e6edaf5f75f42265",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
236381196 | pes2o/s2orc | v3-fos-license | Overview of the Change in the Organization of the Education System in Gabon
The question of change in organizations is at the heart of the concerns of public policy managers. This article proposes to analyze the management of organizations in the education system in Gabon. We have examined what is said and what is done in the field, in an attempt to relate this to the work of theorists of change. As several authors who have investigated this issue point out, change is an integral part of the management of organizations. At the end of this literature review, we found that Gabon is not the only country confronted with the problem of change management in its education system. In conducting an education reform, one must take into account the balance of power within the organization. For the organization to achieve its goal, communication must become a permanent feature of human resources management. Thus, changing the education system is a titanic but achievable undertaking. The competence of managers, the steering mode and the management of human resources are determining factors in the process of change.
Introduction
The issue of change in organizations is at the heart of the concerns of public policy managers.
As some authors who have investigated this issue point out, change is a key part of the management of organizations.
At present, education is viewed from a different perspective from that which was at the origin of the various works of the early 1960s. The economics of education currently focuses on three main areas: the contribution of education to economic growth, individual demand for education and the links between education and the labor market, and the management of education systems.
This literature review falls within the field of the management of education systems and attempts to analyze the management of organizations in the education system in Gabon. To do so, we will first define the concepts of change and organization, then present how theorists of change have treated the question, and finally draw a conclusion on the basis of this meta-analysis.
Conceptual clarification of change and organization
In common usage, for example, change means leaving a thing or a state; it is thus the act of modifying something. Autissier and Moutot (2012) define it in terms of a break between an obsolete existing state and a future synonymous with progress. For this reason, change only exists through the dynamics of the individuals who implement it. It becomes a rupture in an organization's functioning when the following elements are transformed: practices, working conditions, tools, organization, profession, strategy and culture (the value system).
For Collerette, Delisle and Perron (2011), however, understanding change means trying to understand a complex set of phenomena and movements; it is in fact an attempt to explain a continuous process that lies at the center of organizations and that is difficult to stop in order to take a snapshot of. Knowing that the process of change is difficult to isolate as a social phenomenon, we must nevertheless act as if we had done so in order to highlight what characterizes it. From the point of view of these authors, any transition from one state to another that is observable in the environment and relatively durable can be understood as a change.
By extension, their definition of organizational change, which we consider to be relevant, is to state that it is any relatively lasting change in a subsystem of the organization, provided that this change is observable by its members or the people who are related to this system.
Here we can see that the term change refers to an observable modification that has occurred in the social system. This leads us to note that the expression process of change refers to the different phases experienced by the social system that must integrate the change. The process therefore takes place at the level of the personal experience of those who are experiencing change and who are not its promoters.
Thus, following what Collerette, Delisle and Perron (2011) argue, we can admit that the change process refers to the different steps that will be taken to undertake, promote and implement a change in a system. Such an approach includes the various activities that will be carried out by change people to ensure that the change materializes in the organization. In this sense, we agree with these authors when they argue that the organization refers to any production system, in a given environment, bringing together two or more actors who must interact, guided by a formal mission to be accomplished, and whose coordination is carried out by one or more of the actors who explicitly play this role.
In education, change could take place in the areas of infrastructure for working conditions, institutional organization based on the organization chart of services, classroom practices, curricula, teaching approaches, evaluation, and tools such as personnel, heritage and student management. Making a change in the education system therefore means putting an entire system into action. This requires decision making and learning new ways of relating.
According to Crozier and Friedberg (1977), for change to take place, there must be the acquisition of new capacities, that is, new knowledge and new modes of relationship and functioning between people, because changing the balance of power is not enough. The members of an organization are quite willing to change, very much so. However, the successful implementation of change requires a number of decisions throughout the project that structure the design and conduct of the reform.
About organizational change
To examine change in organizations, we have drawn on the work of a number of authors who have dealt with change management, including Autissier (2007, 2012), Dupuy (2004), Perrenoud (1999), Crozier (1977), Maganga (2005, 2011, 2012), Bareil (2004), Kourilsky (2014), and Mintzberg (1998, 2000). In their work, these authors set the scene for debates that involve a range of actors in driving the reforms. An examination of the relationships that these actors develop with respect to change could certainly shed light on what this entails. When we look at change in organizations, we see that it is debated because of its importance in the reforms. Indeed, the authors who have investigated this question maintain that change, seen as a rupture, contributes to the evolution of organizations through the responses that actors have given to provide solutions to the problems that block the functioning of these organizations. Moreover, taking up the clarification that Autissier and Moutot (2007) and Collerette, Delisle and Perron (2011) make about change, we can retain that it is a rupture between an obsolete existing state and a future synonymous with progress. For them, change only exists through the dynamics of the individuals who implement it, because it only becomes a rupture in an organization's functioning when practices (ways of doing things), working conditions (material environment), tools (IT and management), organization (zones of power and functional delimitations), business (the organization's know-how), strategy (the collective goals pursued and envisaged) and culture (the value system) are transformed. Now, we know that in an organization like the Ministry of National Education, adherence to change requires that the actors involved agree to abandon what already exists in order to believe in the future.
But if we consider that an individual is trained throughout his life and that change is part of this training, we can agree with these authors in emphasizing that the fear of the future, or of what we will find there, is no longer justified. Indeed, the existing state is, in a way, the routine piloting of our practice, and the future is the hope of our evolution, our promotion, our improvement. From the point of view of these authors, the link between these two visions is the risk that we take in abandoning routine in favor of an uncertain but potentially better future. It is with good reason that it is believed that the heads of schools, school districts and academy directors, pedagogical advisors and inspectors deserve special attention. They are, unlike ministerial cabinets and directorates general, very close to the daily lives of teachers and students, but remain executives whose position is both an asset and a handicap in innovation.
These executives are an asset because they can legitimately take initiatives, embody a community, negotiate with central or local government. Following the same authors, we believe that these actors do indeed constitute active relays for reforms of the education system at the local level. The knowledge of action and innovation of this particular category of actors is more than decisive, especially since they are now in all countries, in search of a clearer identity and training commensurate with their new roles.
According to our interpretation of Perrenoud's (1999) remarks, these actors also form a handicap because the relationship to change of the actors involved cannot be merged with that of those who are at the base and for whom the ministerial department is an ecosystem. Thus, it is not surprising that one can witness the development of resistance, sometimes fierce, in the implementation of reforms. It is at this level that we agree with what this author says, when he maintains that a reform is a political act notwithstanding that its motives are economic, pedagogical or demographic. Hence his mistrust of those who believe that all reforms must succeed and that resistance is reprehensible. Like all public policies, a school reform, from his point of view, is a complex human enterprise. That such an undertaking does not automatically succeed is not strange because of its limited rationality. It can never be the subject of a total consensus on the goals and strategies for implementation. Hence the need to place it within a system of collective action that no one fully masters insofar as some believe that evolution is the result of the clash of multiple value systems and multiple logics of action, none of which can be imposed without sharing.
As a systemic phenomenon, change can be seen, according to Crozier (1977) and Maganga (2005, 2011, 2012), as the transformation of a field of action in order to find a model of regulation that integrates all contradictions, favourable power relations, relational cognitive capacities and sufficient models of government. For these authors, it is a stage in the process of human development, of a social organizational model, because it is more rational, resulting from a struggle between different people.
Conducting a reform in the educational system suggests that attention should be paid to the organization itself, to the points of view of the actors involved in the change, and to the environment where the reform takes place. In this sense, one can agree with Autissier (2003), Perrenoud (1999), Crozier (1977) and Maganga (2005, 2011), who recognize that, even if a reform claims to serve the public good or general interest, it cannot be agreed upon unanimously. For it necessarily has overt or covert opponents who, at one level or another, actively oppose it or practice passive resistance. This is why they suggest, when steering reform in an organization such as the Ministry of National Education, taking into account the adhesion of actors at all levels in order to overcome resistance of all kinds, knowing that everyone must respect the decision of the majority.
It should also be noted that, in their majority, those who initiate the reforms are very impatient in their practices. They want to move forward and evaluate too quickly to brandish results that must be considered over the medium or long term. However, by hiding the reality of teachers' work, for example, there is a risk that management will be faced with actors who are insensitive to change. It is hardly surprising that teachers, students, pedagogical supervisors and school heads are indifferent to change in terms of content, teaching methods or evaluation, if the reforms are silent on these tensions or content themselves with professing an ideal.
In the case of Gabon, teachers and students arbitrated pragmatically by distinguishing between what they were told to do and what they could and could do effectively.
We can therefore understand that resistance to change could be linked, among other things, to the strength of the opponents of the reform, the involvement of all the actors, the fact of moving forward or evaluating too quickly, the underestimation of the power of relay, the failure to take into account the reality of teachers' work, and the level of qualification of professionals. What about the impact of communication in change management?
Organization and communication in education management
When a complex organization such as the Ministry of National Education commits to reforming its system, it is therefore essential to institute a communication plan that will allow change to be accepted and reduce resistance. A viable communication plan must take into account the elements of interpretation of each one and define its purpose beforehand in order to reach and integrate all the actors.
Communication (documentation, posters, leaflets and brochures, websites and forums, conferences and meetings) is an essential step in change management. And this must be done according to the more or less collective nature of the target and the desired interactivity. For each identified population, the different tools above will be mobilized in order to achieve the right message, at the right time.
For example, according to Autissier and Moutot (2003), for changes to be welcomed smoothly by all employees, communication must be made an integrated ally in the team in charge of the project. This communication must support the various departments of the organization. In other words, during a reform in the Education sector, communication plays an advisory role. The tools chosen should depend on the type of messages and targets to respond to internal problems (risk of social movements, persistent rumors). Thanks to its communication plan, the steering team will choose the words, media and working methods that will make pedagogical supervisors, general directors, directors of academies, school heads, teachers, students and parents adhere to the project and defuse internal tensions.
Education-communication could allow the Minister of National Education and his managers to meet all the actors (teachers, students, parents) to provide them with information about the change and work with them on the working methods impacted by this change.
Internal marketing enables communication to be deployed according to a logic based on the targets and groups previously identified. Whether it is through meetings, forums, written or oral communication, by participating in these spaces of expression, all the actors concerned by the change can both express their questions and obtain explanations.
A well-developed communication plan during the implementation of an education reform promotes a good appropriation of the change project in terms of objectives, means, action plans, monitoring methods and adherence to the messages. The role of a communication plan in the success of the change is clear if it is at the center of the reformers' concerns.
However, prior to change, can human resource management not be an obstacle to the implementation of a reform, particularly in education?
Human resource management in the administration of the education system

The Ministry of National Education has as its human resources teachers, non-permanent staff, pedagogical advisors and inspectors, study leaders, department heads, directors, general directors, general inspectors, general secretaries, the Minister's office and the Minister himself. In this sense, the objective of human resource management at the Ministry of National Education is to mobilize all staff for more effective public action.
On the one hand, there is human resources administration, which deals with payroll, statutory matters and the like; on the other hand, there is human resources development, which is responsible for career management, skills management, recruitment and training. In short, human resources management is characterized by the participation of staff in the management of the organization, the development of human capital by taking into account its social status, its participation in decision-making, its decisive role in productivity and quality of work, and the development of its skills. In this regard, all these characteristics of human resources management at the Ministry of National Education must be taken into account to ensure that change and decision-making proceed smoothly.
Following on from what has just been argued, Dupuy (2004), pursuing the questioning, shows that the change that leads the system to transform itself is a bit like the one that allows the rules of the human system to be questioned. According to him, change leads to a real transformation of organizations and human relations. This is why he remains cautious in stressing that change requires entering into relationships with others through dialogue, cooperation and sharing. Indeed, he continues, any change requires a reconstruction of the reality of the Ministry of National Education. Since reality is inseparable from the way we look at things, reframing could be the best technique for initiating conceptual or emotional change.
We understand, therefore, that change in an organization such as the educational system necessarily leads to changes in attitudes and behaviors that bring about profound transformation.
From this point of view, the management of human resources becomes a fundamental condition for initiating change. This is why Perrenoud (1999) suggests that when it comes to the question of change in organizations, the human dimension of the balance of power in the implementation of a reform should not be overlooked. The games around rationality and efficiency, which are constant in the functioning of any organization, intensify in periods of reform. Whether the changes are proposed spontaneously by the enlightened, modernist fraction of the organization or whether they respond to changes in the environment, demand, resources, and the law, there is always a problem of appropriation.
For Perrenoud (1999), neither endogenous modernization nor survival reactions to ecosystem evolution are shared evidence. It is necessary to justify them, therefore to propose a new construction of the reality of the organization, its efficiency, its functioning and its environment. This is what he called a construction likely to legitimize the change whose necessity never imposes itself. Change is rarely automatic in organizations. For, it generally appears to be decided according to the representations, analyses and anticipations of the actors, within the framework of their ordinary functioning within the organization. This is why it is necessary to integrate in the change a central element in human resources management, i.e. the organizing power of the actors in the exercise of it.
Conclusion
At the end of this overview of change in the organization of education, we have looked back on the process of its realization. In this regard, we noted that Gabon is not the only country confronted with the issue of change management in its education system. The literature review allowed us to observe what is being done elsewhere in order to shed light on our understanding of such a situation. To do so, we defined the key concepts, particularly those of change and organization.
After this review, we concluded that it is not simple to carry out a reform in education, because the implementation of change cannot be done without taking into account the balance of power within organizations. With regard to organization and communication, we noted that these two concepts are inseparable, i.e. they share common ground in the way they operate. For the organization to achieve its goals, communication must be permanently established in the management of human resources, i.e. of the staff as drivers of change. Following this review of the literature, we conclude that changing the education system is a titanic but achievable undertaking. The competence of managers, the steering mode and the management of human resources are determining factors in the conduct of change. | 2021-07-27T00:05:35.825Z | 2021-05-26T00:00:00.000 | {
"year": 2021,
"sha1": "8f669b06ce92a85d428aecf1cb06f9be49cd61d0",
"oa_license": "CCBY",
"oa_url": "https://osjournal.org/ojs/index.php/OSJ/article/download/2713/372",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "c27040dd1a1b1db891cec51badfd4f39e8dda314",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
119322294 | pes2o/s2orc | v3-fos-license | W-Types with Reductions and the Small Object Argument
We define a simple kind of higher inductive type generalising dependent $W$-types, which we refer to as $W$-types with reductions. Just as dependent $W$-types can be characterised as initial algebras of certain endofunctors (referred to as polynomial endofunctors), we will define our generalisation as initial algebras of certain pointed endofunctors, which we will refer to as pointed polynomial endofunctors. We will show that $W$-types with reductions exist in all $\Pi W$-pretoposes that satisfy a weak choice axiom, known as weakly initial set of covers (WISC). This includes all Grothendieck toposes and realizability toposes as long as WISC holds in the background universe. We will show that a large class of $W$-types with reductions in internal presheaf categories can be constructed without using WISC. We will show that $W$-types with reductions suffice to construct some interesting examples of algebraic weak factorisation systems (awfs's). Specifically, we will see how to construct awfs's that are cofibrantly generated with respect to a codomain fibration, as defined in a previous paper by the author.
Introduction
A key idea in type theory is that of inductively generated types. The essential idea is that one specifies a way to construct new elements of a type from old, and an inductively generated type is the "least" type matching this specification. The simplest example is the natural numbers, N. It is the type inductively generated by the requirements that 0 is an element of N and S(n) is an element of N whenever n is. Since N is the least such type, we can prove a formula ϕ holds for all natural numbers n, by first proving ϕ for 0, then showing ϕ holds for S(n) whenever it holds for n.
An important class of inductive types is that of W-types. These have elegant categorical semantics due to Moerdijk and Palmgren [17], later developed further to dependent W-types by Gambino and Hyland [9]. In these semantics, W-types are implemented as initial algebras of a certain class of endofunctors, known as polynomial endofunctors. Type theoretically the idea (for the simpler non-dependent case) is that we are given a type Y, whose elements we refer to as constructors, and a family of types X_y indexed by the elements y of Y, which we refer to as arities. We then construct a type W, which contains an element of the form sup(y, α) whenever y ∈ Y and α : X_y → W.
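As a concrete illustration, here is a small Haskell sketch (our own illustrative encoding, not code from [17] or [9]; the dependency of arities on constructors is approximated with a GADT). Trees are built by Sup from a constructor together with a map from its arity into W, and the natural numbers arise from a nullary and a unary constructor.

{-# LANGUAGE GADTs, RankNTypes #-}
import Data.Void (Void, absurd)

-- A signature 'f' assigns to each constructor the type 'pos' of its
-- positions (its arity); 'W f' is generated by sup(y, alpha).
data W f where
  Sup :: f pos -> (pos -> W f) -> W f

-- Initiality: structural recursion over the tree.
fold :: (forall pos. f pos -> (pos -> r) -> r) -> W f -> r
fold alg (Sup c k) = alg c (\x -> fold alg (k x))

-- Natural numbers: 'ZeroC' has empty arity, 'SuccC' a single position.
data NatSig pos where
  ZeroC :: NatSig Void
  SuccC :: NatSig ()

type Nat = W NatSig

zero :: Nat
zero = Sup ZeroC absurd

suc :: Nat -> Nat
suc n = Sup SuccC (const n)

-- Recursion as in the proof sketch for N: the algebra map to Int.
toInt :: Nat -> Int
toInt = fold alg
  where
    alg :: NatSig pos -> (pos -> Int) -> Int
    alg ZeroC _ = 0
    alg SuccC k = 1 + k ()

Here fold witnesses the initial algebra property: toInt is determined by giving its value at 0 and at S(n), exactly as in the inductive proof scheme described above.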
Higher inductive types are one of the main ideas in homotopy type theory [29], in which one defines a new type by specifying not only how to construct elements of a type, but also how to construct proofs of equality between elements (and also proofs of equality between proofs of equality, etc). A lot of the time the aim here is to construct types with nontrivial higher type structure that represent interesting topological spaces (such as n dimensional spheres) type theoretically. However, there are examples of higher inductive types that are non trivial even when working in an extensional setting, where UIP holds (any two proofs of equality are equal). Many years before the term "higher inductive type" was even coined, it was known that free algebras can be constructed for (infinitary) varieties, and as observed by Blass, this can even be carried out internally in a topos with a natural numbers object satisfying the internal axiom of choice [5,Section 8]. As observed by Lumsdaine and Shulman in the introduction to [15], this can now be viewed as a kind of higher inductive type. More recently, in [1] Altenkirch, Capriotti, Dijkstra and Forsberg developed a class of higher inductive types, which they call quotient inductive-inductive types which also have interesting structure even within extensional type theory. See also the earlier work on quotient inductive types by Altenkirch and Kaposi in [2].
We will develop an idea for a simple kind of higher inductive type that we will call W -type with reductions. Essentially, we identify sup(y, α) with some of the elements α(x) used to construct it.
Although W -types with reductions are relatively simple, we will see that they have an interesting application in homotopical algebra and the semantics of homotopy type theory. A well known construction in homotopical algebra is Garner's small object argument [11], in which a cofibrantly generated algebraic weak factorisation system (awfs) is constructed, making essential use of transfinite colimits. In an earlier paper [28] the author defined a new generalised definition of cofibrantly generated within a Grothendieck fibration, and showed that to construct a cofibrantly generated awfs in this new sense, it suffices to show that certain pointed endofunctors have initial algebras. We will show that when working over the codomain fibration for a locally cartesian closed category, these initial algebras can be seen as W -types with reductions. This will then be used to construct some interesting, previously unknown examples of awfs's.
W -types with reductions may turn out to be special cases of free algebras for varieties and/or QIITs, and just like with those they are non trivial even when working in extensional type theory. Indeed throughout this paper we will be working with locally cartesian closed categories which we think of as models for extensional type theory. However, the relative simplicity of W -types with reductions will have some important advantages. We will show how the semantics for dependent W -types can be generalised to also give us semantics for W -types with reductions. We will then show that W -types with reductions can be implemented in any ΠW -pretopos satisfying a weak choice axiom known as WISC (such categories are sometimes referred to as predicative toposes [30]). An interesting aspect of this is that currently approaches to the semantics of higher inductive types such as the work of Lumsdaine and Shulman in [15] use transfinite colimits for the construction of the underlying objects. On the other hand, there are interesting examples of predicative toposes based on realizability that do not have infinite colimits, that we will see in section 8. The key is that we will construct the types within the internal logic of the predicative topos using W -types.
The main focus of this paper is on semantics, in the same spirit as Gambino and Hyland in [9]. We will, however, give an intuitive explanation of what W-types with reductions look like in the internal logic of a ΠW-pretopos, which will suggest what a syntax for W-types with reductions might look like.
On Internal Languages for Locally Cartesian Closed Categories
Throughout this paper we will use type theoretic notation for objects in a locally cartesian closed category, and type theory style arguments for some of the proofs. Often, given a map f : X → Y we will think of it as a family of types indexed by Y , written as X y or X(y). This is justified by the well known paper by Seely [26], although strictly speaking, in order to really interpret extensional type theory one needs the later work by Hofmann in [12]. One can also add disjoint coproducts, propositional truncation and effective quotients to the type theory, as long as the locally cartesian closed category possesses the appropriate structure. See e.g. the work of Maietti in [16].
Furthermore, as shown by Moerdijk and Palmgren, W-types in type theory correspond closely to the categorical definition that we will use here. See [17] for more details.
In [17,Remark 5.9] Moerdijk and Palmgren point out a subtle issue to bear in mind when working with W -types. If we are constructing a map from a W -type, W to an object A, then it is very straightforward to convert an argument by recursion in type theory into a direct argument using the initial algebra property of W . However, sometimes in proofs we want to construct a predicate on W by induction. In this case there is not a straightforward way to interpret such arguments in an arbitrary locally cartesian closed category. However, as Moerdijk and Palmgren show in [18], such arguments can be interpreted in the richer structure of a stratified pseudotopos, and that many natural examples of ΠW -pretoposes possess this additional structure. In this paper we will sometimes see such arguments, since they are often the most natural and easy to understand proofs. However, our results do apply to arbitrary locally cartesian categories and we will also include brief explanations of how the proofs can be adapted to work in general.
Definition
We recall from [9] that Gambino and Hyland defined the following notions of polynomial, dependent polynomial endofunctor and dependent W-type, which we will generalise. Throughout we assume that we are given a locally cartesian closed and finitely cocomplete category C. Definition 2.1 (Gambino and Hyland). A polynomial is a diagram of the form
$$Z \xleftarrow{\;h\;} X \xrightarrow{\;f\;} Y \xrightarrow{\;g\;} Z .$$
The dependent polynomial endofunctor associated to such a diagram is the composite
$$\Sigma_g \, \Pi_f \, h^* \;:\; \mathcal{C}/Z \longrightarrow \mathcal{C}/Z ,$$
where g, f and h are as above. We denote this endofunctor as $P_{f,g,h}$.
A dependent W -type is an initial object in the category of P f,g,h -algebras for some dependent polynomial endofunctor P f,g,h .
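Spelling this out in the internal language (a standard reformulation, using the fibre notation $Y_z$ for g and $X_y$ for f that is also used below), the endofunctor acts on a family W over Z by
$$P_{f,g,h}(W)(z) \;=\; \sum_{y \in Y_z}\ \prod_{x \in X_y} W(h(x)),$$
so that an algebra structure chooses, for each constructor y over z and each assignment α of elements of W to the arity of y, an element of W(z).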
We now give the new more general definition of polynomial with reductions and pointed polynomial endofunctor with reductions.
Definition 2.2. Suppose we are given maps $f : X \to Y$, $g : Y \to Z$, $h : X \to Z$ and $k : R \to X$, as in the diagram
$$Z \xleftarrow{\;h\;} X \xrightarrow{\;f\;} Y \xrightarrow{\;g\;} Z, \qquad k : R \to X . \quad (1)$$
We say the diagram is coherent, or satisfies the coherence condition, if $g \circ f \circ k = h \circ k$. We say that a diagram as in (1) satisfying the coherence condition is a polynomial with reductions.
We refer to the subdiagram consisting of f , g and h as the underlying polynomial, and to R and k as the reductions. Proposition 2.3. Polynomials in the sense of definition 2.1 correspond precisely to polynomials with reductions where R is the initial object in C.
Proof. We draw attention to the fact that the coherence condition is vacuous when R is initial. Aside from this it is obvious. Definition 2.4. Suppose we are given a polynomial with reductions as in definition 2.2.
We construct a pointed endofunctor P f,g,h,k as follows.
Note that the coherence condition gives us the isomorphism (equality, in fact) $\Sigma_h \Sigma_k = \Sigma_g \Sigma_f \Sigma_k$. Note that we have an evaluation map $f^* \Pi_f \to \mathrm{Id}_{\mathcal{C}/X}$ (which is just the counit of the adjunction $f^* \dashv \Pi_f$). We also have a map $\Sigma_k k^* \to \mathrm{Id}_{\mathcal{C}/X}$ given by the counit of the adjunction $\Sigma_k \dashv k^*$ (which recall is just one of the projection maps in the pullback). We have a similar such map for h. We put these together in a composition $\Sigma_h \Sigma_k k^* f^* \Pi_f h^* \to \mathrm{Id}_{\mathcal{C}/Z}$. Again using the counits of the $\Sigma$ and pullback adjunctions we get a composition $\Sigma_h \Sigma_k k^* f^* \Pi_f h^* \to \Sigma_g \Pi_f h^*$. Finally, we combine these together to get two maps out of $\Sigma_h \Sigma_k k^* f^* \Pi_f h^*$ in $\mathcal{C}/Z$ and then take the pushout.
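Assembled into a square (a sketch; the two maps are the compositions just described), the defining pushout in the category of endofunctors on $\mathcal{C}/Z$ reads:
$$\begin{array}{ccc} \Sigma_h \Sigma_k k^* f^* \Pi_f h^* & \longrightarrow & \mathrm{Id}_{\mathcal{C}/Z} \\ \downarrow & & \downarrow \\ \Sigma_g \Pi_f h^* & \longrightarrow & P_{f,g,h,k} \end{array}$$
with the left map using the counits of $\Sigma_k \dashv k^*$ and $\Sigma_f \dashv f^*$ after rewriting $\Sigma_h \Sigma_k = \Sigma_g \Sigma_f \Sigma_k$, and the top map using the evaluation $f^* \Pi_f \to \mathrm{Id}$ together with the counits for k and h.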
This defines a pointed endofunctor on C/Z with the point given by the right hand inclusion of the pushout. We will refer to pointed endofunctors defined in this way as pointed polynomial endofunctors.
We first note that we get in this way a generalisation of Gambino and Hyland's notion of dependent polynomial endofunctor in the following proposition.
Proposition 2.5. If R is an initial object, then P f,g,h,k is just P f,g,h + 1, which is a pointed endofunctor with a category of algebras isomorphic to the algebras of the dependent polynomial endofunctor on the underlying polynomial.
Definition 2.6. Let f, g, h, k be a polynomial with reductions. We refer to the initial object of the category of P f,g,h,k -algebras (if it exists) as the W -type with reductions on f, g, h, k.
Proposition 2.7. If R is initial, then the W -type with reductions is just the dependent W -type on the underlying polynomial.
A Formulation in the Internal Language of a Category
We will often work in the internal logic of C. In this case it is useful to reformulate the definition in a more intuitive way as follows. We will view g : Y → Z as a family of types Y z indexed by z ∈ Z, and f : X → Y as a family of types X z,y indexed by z ∈ Z and y ∈ Y z . We view k as a family of types R z,y,x for x ∈ X z,y .
We refer to Y z as the constructors over z ∈ Z. For y ∈ Y z , we refer to X z,y as the arity of the constructor y. We will refer to the map h : X → Z as the reindexing map.
Suppose we are given a family (W z ) z∈Z over Z. Now we can reformulate the pointed polynomial endofunctor with reductions at W as the following pushout using type theoretic notation as below.
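A sketch of the square (our own rendering of it; the top map evaluates α at the reduced position, and the left map is the projection forgetting x and r):
$$\begin{array}{ccc} \displaystyle\sum_{z:Z}\sum_{y:Y(z)}\sum_{x:X(y)} R(x) \times \prod_{x':X(y)} W(h(x')) & \xrightarrow{\ (z,y,x,r,\alpha)\,\mapsto\,\alpha(x)\ } & W \\ \downarrow & & \downarrow \\ \displaystyle\sum_{z:Z}\sum_{y:Y(z)} \prod_{x:X(y)} W(h(x)) & \longrightarrow & P_{f,g,h,k}(W) \end{array}$$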
Then note that by the universal property of the pushout, $P_{f,g,h,k}$-algebra structures on W correspond precisely to maps $\sum_{z:Z}\sum_{y:Y(z)}\prod_{x:X(y)} W(h(x)) \to W$ over Z making the evident triangle commute, i.e. whose composite with the left-hand projection agrees with the evaluation map along the top.
We can rephrase this as the following.
1. For each z ∈ Z, each constructor y ∈ Y (z), and each element α of type Π x:X(y) W (h(x)), we are given a choice of element c(y, α) of type W (z).
2. For each y ∈ Y z and each x ∈ X(y), if there exists r ∈ R(x) then the equation c(y, α) = α(x) is true. We refer to such equations as reduction equations or just reductions.
Remark 2.8. Note that the coherence condition ensures that whenever y ∈ Y (z), x ∈ X(y) and there exists r ∈ R(x), we have h(x) = g(f (x)) and so α(x) lies in the fibre W (z), the same as c(y, α).
The first part is then the same as an algebra structure over the underlying polynomial endofunctor, and the second part is what we gain by adding reductions.
The W -type with reductions is then the object inductively generated by the first condition subject to the equations in the second condition. The way we combine an inductively defined type with equations in this way is an example of a higher inductive type. These play an important role in homotopy type theory (see [29]).
In the above we only talked about R(x) being inhabited, and didn't need to depend on any particular choice of element from R(x). We justify this with the following proposition.

Proposition 2.9. Every pointed polynomial endofunctor with reductions is isomorphic to one derived from a polynomial with reductions where k is monic. Moreover, given any polynomial with reductions, we obtain an isomorphic pointed endofunctor by replacing k with the inclusion of its image in X.
Proof. Recall that the image factorisation of k is defined as the (unique up to isomorphism) factorisation of k as a regular epimorphism followed by a monomorphism, as in the diagram below.
Coproducts of Pointed Polynomial Endofunctors with Reductions
In [9, Section 5], Gambino and Hyland observe that under suitable conditions, the class of dependent polynomial endofunctors over a fixed object Z is closed under coproduct. We will now show the analogous result when reductions are added. Note that since we are now working with pointed endofunctors, the appropriate notion of coproduct is the coproduct in the category of pointed endofunctors, which appears in the category of endofunctors as pushout along the units of the pointed endofunctors.
Proposition 2.10. Suppose that C is a finitely cocomplete locally cartesian closed category with disjoint coproducts (it is useful to note that every such category is extensive, as a corollary of [6, Proposition 2.14]). Then the class of pointed polynomial endofunctors over a fixed object Z is closed under coproduct.
Proof. Suppose we are given two diagrams as below.
Similarly to the case for dependent polynomial endofunctors, we combine the two diagrams using coproduct as below.
Again, by the same argument as for dependent polynomial endofunctors, we deduce that the pointed polynomial endofunctor generated by (3) is Id C/Z → S in the following pushout.
However, a quick diagram chase verifies that Id C/Z → S is the map produced by the following three pushouts.
We deduce that the dependent pointed polynomial endofunctor produced by (3) (given by Id C/Z → S) is the coproduct of the two diagrams given, as required.
Review of Small Cover Bases and WISC
The axiom WISC was independently noticed and studied by various authors. For example, it was considered by Van den Berg in [30] under the name AMC, as a weakening of the axiom of the same name considered by Moerdijk and Palmgren in [18]. We recall the definition below and make some basic observations that will be used later.
Suppose we are given a commutative square as in (4), consisting of maps f : B → A, p : C → A, g : D → C and q : D → B. We say the square is covering if both p and the canonical map D → B ×_A C are covers.
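For reference, the square (4) can be reconstructed from the internal reading below as

        q
    D ----> B
    |       |
  g |       | f
    v       v
    C ----> A
        p

with f ∘ q = p ∘ g.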
In the internal logic of the category we can think of a covering square as follows. We think of the map f : B → A as a family of types indexed by A, which we write (B a ) a∈A . We think of the map p : C → A as a family of types indexed by A, (C a ) a∈A , where the requirement that p is a cover says that each C a is inhabited. We then think of the map g : D → C as a family of types (D a,c ) a∈A,c∈Ca . Finally, the requirement that the canonical map D → B × A C is a cover says that for every a ∈ A and c ∈ C a we have a surjection q a,c : D a,c ։ B a . Hence such a square is sometimes referred to as a set of covers.
Definition 3.4. We say that a square as in (4) is collection if the following holds in the internal logic. For all a ∈ A and for each cover e : E ։ B_a there is c ∈ C_a and a map t : D_{a,c} → E such that q_{a,c} = e ∘ t.
Squares that are both covering and collection are sometimes referred to as weakly initial sets of covers or cover bases.

Definition 3.5. Let C be a regular category. We say that a map f : B → A admits a cover base if f fits into the right hand side of a square as in (4) that is both covering and collection.
The axiom weakly initial set of covers (WISC) states that any map admits a cover base.

Lemma 3.6. Suppose that we are given a covering collection square as in (4). Then the following holds in the internal language.
For all a ∈ A, we have the following. Suppose we are given a family of types (X b ) b∈Ba such that X b is inhabited for all b ∈ B a . Then there exists c ∈ C a and an element of the product type Π d∈Da,c X qa,c(d) .
Proof. We apply collection to the cover Σ b∈Ba X b ։ B a given by projection (which is a cover since each X b is inhabited).
The following lemmas, which will be used later, are easy to check, so we omit the proofs here.

Lemma 3.7. Suppose that f : B → A admits a cover base and that h is any map into A. Then the pullback of f along h also admits a cover base. Moreover, the pullback of the covering and collection square along h is also covering and collection.
Lemma 3.8. Suppose that C has disjoint coproducts. Suppose that f_1 : B_1 → A_1 and f_2 : B_2 → A_2 both admit cover bases. Then the same is true for f_1 + f_2 : B_1 + B_2 → A_1 + A_2. Moreover, the coproduct of the two covering and collection squares is itself covering and collection.
Construction of the Initial Algebras
In this section we work towards the construction of initial algebras for dependent pointed polynomial endofunctors with reductions over ΠW -pretoposes. Although there are a number of possible approaches to doing this that already appear in the literature, none seems to be quite adequate for our purposes (this will be discussed further in section 9.2). The main obstacle is that we wish for the construction to hold in categories that do not have infinite colimits, such as realizability toposes. We therefore give a direct construction for ΠW -pretoposes rather than applying an existing result.
Outline of the Construction
We start with a rough illustration of the overall idea, with the motivation for each part of the proof.
For the proof to apply for realizability toposes, the proof should be carried out in the internal logic of the ΠW -pretopos. We can see that some kind of transfinite construction is likely to be necessary, and the only such construction available to us internally is to use W -types (and in section 7 we will see that W -types really are necessary for the theorem to hold). By the results of Gambino and Hyland in [9] we may use dependent W -types. Some form of the axiom of choice may be necessary. WISC is acceptable, since it holds in many examples of ΠW -pretoposes including realizability toposes, but we will try to avoid anything stronger.
The most naïve approach using W -types is as follows. We know from the description of P f,g,h,k algebras before that an algebra structure on W consists of the structure of an algebra over the polynomial endofunctor P f,g,h whose operators satisfy the reduction equations. We might therefore take W to be an initial algebra for P f,g,h and then simply quotient out by the equivalence relation generated by the reduction equations. Note however, that this won't work. We need in particular an algebra structure on W/∼. For the time being we will consider the non dependent case for simplicity. Suppose that we want to define sup(α) for α : X y → W/∼ (the solid horizontal line below). We want to use the algebra structure on W to define sup(α), but to do this, we need a map X y → W (the dotted line below).
In order for any such map to exist, we need the axiom of choice, and then once we've found such a map we need to ensure that the particular choice of map doesn't matter in order to produce a well defined algebra structure.
Note however, that if (A i , q i ) i∈I is a cover base for (X y ) y∈Yz,z∈Z , then there does exist a dotted line in the diagram below for some i ∈ I.
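Here is the diagram in question, reconstructed from the surrounding discussion (the dotted arrow is the asserted lift):

    A_i - - - -> W
     |           |
 q_i |           |
     v           v
    X_y ------> W/∼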
We therefore modify the naïve argument as follows. We first form a dependent W -type, using as arities, not (X_y)_{y∈Y_z} directly, but instead (A_i)_{i∈I} where (A_i, q_i)_{i∈I} is a cover base for (X_y)_{y∈Y_z}.
We then define an equivalence relation ∼ on W as (the image of) another dependent W -type. We need to ensure all of the following:

1. The reduction equations are satisfied.

2. The particular choice of map through the cover base does not matter: elements cons(y, i, α) and cons(y, i′, α′) built from compatible choices are identified (this is the role of extn below).
3. ∼ is an equivalence relation, in particular symmetric and transitive.
Using a cover base like this has solved one problem but introduced another. In order to show that the algebra structure is initial, we will need that any α : A i → W/∼ extends to X y as below, but this is not always the case.
In fact the dotted line exists if and only if α(q_i(a)) ∼ α(q_{i′}(a′)) whenever q_i(a) = q_{i′}(a′). To deal with this point we define ∼ not to be an equivalence relation, but instead a partial equivalence relation. We then ensure that whenever sup(α) ∼ sup(α) the condition above is satisfied (we will refer to such elements as well defined). Then we can restrict to w ∈ W such that w ∼ w in our construction.

A final point is that we know the dotted map in (5) exists, but now we also have to show it is well defined. We will define ∼ as the image of a certain W -type, and well definedness will amount to the existence of a function which provides, for each a ∈ A_i and a′ ∈ A_{i′} such that q_i(a) = q_{i′}(a′), a witness of α(a) ∼ α(a′). We have effective quotients and have ensured that ∼ is an equivalence relation, but this only tells us that such a witness exists for each a, not how to find one. To deal with this, we use another cover base, this time for A_i ×_{X_y} A_{i′} over all y ∈ Y_z. We can then use the same trick again with this second cover base.

We now provide a more careful, detailed version of the above argument.
2-Cover Bases
At the end of the outline we indicated that we would need two levels of cover base. We formalise this using the following notion.
Definition 3.9. Let u : U → I be a morphism in C. A 2-cover base for u consists of two squares of the following form that are both covering and collection.
Note in particular that if WISC holds in the pretopos, then any map has a 2-cover base, by applying WISC twice. Also, if X is the surjective image of a projective object, then g ∘ f has a 2-cover base; in particular this includes all finite colimits of representables in presheaf categories.
We also prove below that maps that admit 2-cover bases are closed under pullback and coproduct.

Lemma 3.10. Suppose that u : U → I admits a 2-cover base and that h is any map into I. Then the pullback of u along h also admits a 2-cover base.

Proof. By applying lemma 3.7 twice.

Lemma 3.11. Suppose that C has disjoint coproducts. Suppose further that f_1 : X_1 → Y_1 and f_2 : X_2 → Y_2 admit 2-cover bases. Then the same is true for f_1 + f_2 : X_1 + X_2 → Y_1 + Y_2.

Proof. By applying lemma 3.8 twice.
The Underlying Object of the Initial Algebra
We assume we are given a polynomial with reductions as in (1), which as in section 2.2, we view as families of types Y z , X z,y and R z,y,x (which we'll sometimes abbreviate to X y and R x ).
We will assume that f has a 2-cover base and view it as families of types as follows. We assume we have a type I z,y for each z ∈ Z and y ∈ Y z together with a type A z,y,i (which we will usually write just as A i ) and surjections q i : A i ։ X y such that (A i , q i ) i∈I form a cover base for X y .
For the second part of the 2-cover base, we have for each z ∈ Z, y ∈ Y_z and i, i′ ∈ I_{z,y} a type J_{i,i′}, together with a family of types B_{i,i′,j} for j ∈ J_{i,i′} and surjections t_j : B_{i,i′,j} ։ A_i ×_{X_y} A_{i′}.

We will now construct the initial algebra. We first define a family of types W_z for z ∈ Z as the dependent W -type generated by the following rule: whenever y ∈ Y_z, i ∈ I_{z,y} and α ∈ Π_{a∈A_i} W_{h(q_i(a))}, W_z contains an element of the form cons(y, i, α).

We now form a second dependent W -type, Q, which will be indexed over W ×_Z W. First note that by the definition of W and the basic properties of dependent W -types, for every w ∈ W_z there is a unique y ∈ Y_z, i ∈ I_{z,y} and α ∈ Π_{a∈A_i} W_{h(q_i(a))} such that w = cons(y, i, α). We will sometimes write Q_{w_0,w_1} as Q(w_0, w_1) to ease readability.
In particular, transitivity is built into the constructors of Q: given q_1 ∈ Q(w_0, w_1) and q_2 ∈ Q(w_1, w_2), the type Q(w_0, w_2) has an element of the form trans(q_1, q_2).
We now define Q z := Σ w0∈Wz Σ w1∈Wz Q w0,w1 and define l, r : Q z → W z to be the two projections.
Note that we have defined Q_z so that its image in W_z × W_z, which we write as ∼, is a partial equivalence relation. For transitivity we use trans. We prove symmetry in the following lemma.

Lemma 3.12. The relation ∼ is symmetric.

Proof. We show by induction on the construction of Q that given any element q of Q_{w_0,w_1} we can prove there exists an element of Q_{w_1,w_0}. Formally, we need to be a little careful to make this argument work in general ΠW -pretoposes. Write τ : W ×_Z W → W ×_Z W for the map swapping the two components. Then we need to define a map from Q to τ*(∼), regarded as objects in C/(W ×_Z W). We do this by defining an algebra structure on τ*(∼) and then using the initial map.
The proof below is presented as an argument by induction on the structure of Q w0,w1 because it's more intuitive, but it's easy to adapt to the form above.
Note that the definitions of reduceleft and reduceright were chosen so that they can just be swapped round, and trans is easy to deal with by induction.
This only leaves us with the case of extn, which is a little non trivial. Suppose we are given an element of Q_{w_0,w_1} of the form extn(α_0, α_1, γ). Suppose further that we are given some (a′, a) ∈ A_{i_1} ×_{X_y} A_{i_0}. By induction, we may assume that Q(α_1(π_1(t_j(b))), α_0(π_0(t_j(b)))) contains some element q′ for each b. Then using the fact that (B_{i_1,i_0,j})_{j∈J_{i_1,i_0}} is a cover base, we deduce that there exists j ∈ J_{i_1,i_0} together with γ′ : B_{i_1,i_0,j} → Q choosing witnesses of this. We then form the element extn(α_1, α_0, γ′) of Q_{w_1,w_0} and note that it is as required.
We say that w ∈ W is well defined if w ∼ w. We write W′ for the subobject of well defined elements of W. Note that ∼ restricts to an equivalence relation on W′ (as is always the case for partial equivalence relations). Note that we can use extn to produce well defined elements as follows.
Lemma 3.13. Suppose that y ∈ Y_z, i, i′ ∈ I_{z,y}, and that α ∈ Π_{a∈A_i} W_{h(q_i(a))} and α′ ∈ Π_{a′∈A_{i′}} W_{h(q_{i′}(a′))} are such that α(a) ∼ α′(a′) whenever q_i(a) = q_{i′}(a′). Then cons(y, i, α) ∼ cons(y, i′, α′).

Proof. By lemma 3.6 applied to the cover base (B_{i,i′,j})_{j∈J_{i,i′}}, there exists j ∈ J_{i,i′} together with γ : B_{i,i′,j} → Q choosing, for each b, a witness of α(π_0(t_j(b))) ∼ α′(π_1(t_j(b))). Then extn(α, α′, γ) is the required element.

Lemma 3.14. Suppose that α ∈ Π_{a∈A_i} W_{h(q_i(a))} is such that α(a) ∼ α(a′) whenever q_i(a) = q_i(a′). Then cons(y, i, α) is well defined.

Proof. This is a special case of the previous lemma where α = α′ and i = i′.
The Algebra Structure of the Initial Algebra
We now give W ′ /∼ an algebra structure over the pointed endofunctor. We first show the following lemma.
Lemma 3.16. W′/∼ carries the structure of an algebra over the pointed polynomial endofunctor P_{f,g,h,k}.

Proof. By the characterisation of algebra structures in section 2.2, it suffices to construct sup(α_0) for every α_0 ∈ Π_{x∈X_y} W′_{h(x)}/∼ and show that it respects the reduction equations. Given such an α_0, by lemma 3.6 there exist i ∈ I_{z,y} and α ∈ Π_{a∈A_i} W′_{h(q_i(a))} such that [α(a)] = α_0(q_i(a)) for every a ∈ A_i. This determines a unique element of W_z/∼ by lemma 3.13.
We now need to show that, for all x ∈ X_y, if R_{z,y,x} is inhabited, then sup(α_0) = α_0(x). To do this, we will show there exists an appropriate element of Q using reduceleft. Firstly, let i and α be as above. Let a ∈ A_i be such that q_i(a) = x. Next, note that following the proof of lemma 3.13 we can show there exists j ∈ J_{i,i} and γ : B_{i,i,j} → Q choosing witnesses of α(π_0(t_j(b))) ∼ α(π_1(t_j(b))). Then reduceleft(a, j, α, γ) gives the required element of Q, and hence sup(α_0) = α_0(x).
Proof of Initiality
We now show that the algebra structure we defined is initial. Suppose that we are given an object T together with an algebra structure on T . We will use the presentation from section 2.2, where we view an algebra structure as an algebra structure for the underlying polynomial, c : Σ g Π f h * (T ) → T such that c respects the reduction equations.
We first need to construct an algebra map from W′/∼ to T, and then show that it is unique.
For this, we will follow the basic outline below.
1. Define a relation S ⊆ W ×_Z T by induction on the construction of W.
2. Show by induction on the construction of Q that for every q ∈ Q_{w_0,w_1} there exists a unique t ∈ T such that ⟨w_0, t⟩ ∈ S and the same t is unique such that ⟨w_1, t⟩ ∈ S (which in particular tells us that when w ∼ w there exists a unique t ∈ T such that ⟨w, t⟩ ∈ S).
3. Deduce (using effectiveness of quotients) that the corresponding relation on W ′ /∼ × Z T is functional, and so gives a morphism W ′ /∼ → T over Z.
We define S ⊆ W ×_Z T inductively as follows. Given w = cons(y, i, α) ∈ W_z and t ∈ T_z, we have ⟨w, t⟩ ∈ S whenever there is α′ ∈ Π_{x∈X_y} T_{h(x)} such that for each a ∈ A_i, α′(q_i(a)) is the unique t′ such that ⟨α(a), t′⟩ ∈ S, and t is the result of applying the algebra structure of T to α′.
Formally, we can construct S in an arbitrary ΠW -pretopos as a dependent W -type as follows. We work over the context W × Z T .
Let ⟨w, t⟩ ∈ W ×_Z T. We construct S(w, t) as follows: as constructors over ⟨w, t⟩ we take the same data as in the inductive clause above (where c is the algebra structure for T).
One can check that the composition S → W ×_Z T → W is monic; it follows that this definition of S matches the other definition.
We can now state and prove the main lemma.
Lemma 3.17. Let T and S be as above. Then for any e ∈ Q w0,w1 , there exists a unique t such that w 0 , t ∈ S and the same t is unique such that w 1 , t ∈ S.
Proof. We prove this by induction on the construction of e ∈ Q_{w_0,w_1}. The case trans is easy to deal with by induction.

We next consider extn. Suppose that w_0 = cons(y, i_0, α_0), w_1 = cons(y, i_1, α_1) and e is of the form extn(α_0, α_1, γ). Note that we may assume by induction that for every b ∈ B_j, γ(b) satisfies the statement of the lemma. We define an element ᾱ ∈ Π_{x∈X_y} T_{h(x)} as follows. Given x ∈ X_y, choose a ∈ A_{i_0} with q_{i_0}(a) = x; by the inductive hypothesis there is a unique t such that ⟨α_0(a), t⟩ ∈ S. We will take ᾱ(x) to be such a t, but we still need to complete the proof that t is uniquely determined by x. It only remains to check that t does not depend on the choice of a; so suppose that a′ ∈ A_{i_1} also satisfies q_{i_1}(a′) = x, and let t′′ be the unique element with ⟨α_1(a′), t′′⟩ ∈ S. Using the inductive hypothesis, we have then a unique t′ such that ⟨α_1(a′), t′⟩ ∈ S and t = t′ and t′′ = t′, which implies t = t′′, as required. Finally, note that the ᾱ we have now defined is unique such that for all a ∈ A_{i_0}, ⟨α_0(a), ᾱ(q_{i_0}(a))⟩ ∈ S. By the same argument as above, ᾱ is also unique such that for all a′ ∈ A_{i_1}, ⟨α_1(a′), ᾱ(q_{i_1}(a′))⟩ ∈ S. Therefore, applying the algebra structure of T to ᾱ gives us a unique t such that ⟨w_0, t⟩ ∈ S and the same t is unique such that ⟨w_1, t⟩ ∈ S, as required.
The last two cases to consider are reduceleft and reduceright. We will just consider when e is of the form reduceleft(a 0 , j, α, γ), the other case being similar.
First note that by induction we may assume that for every b ∈ B_j, γ(b) satisfies the statement of the lemma. Hence, we may apply the same argument as before to construct a unique α_0 ∈ Π_{x∈X_y} T_{h(x)} such that for all a ∈ A_i, ⟨α(a), α_0(q_i(a))⟩ ∈ S. We now have, as before, that applying the algebra structure of T to α_0 gives us a unique t such that ⟨cons(y, i, α), t⟩ ∈ S.
Also, note that there exists b ∈ B_j such that t_j(b) = (a_0, a_0), and so again by induction, there is a unique t′ such that ⟨α(a_0), t′⟩ ∈ S.
Finally, since the algebra structure on T has to respect the reduction equations, we have t = t ′ , as required.
Finally, since W′ includes only the well defined elements of W, we deduce that for every w ∈ W′, there is a unique t ∈ T such that ⟨w, t⟩ ∈ S, and if w ∼ w′ and t′ is unique such that ⟨w′, t′⟩ ∈ S then t = t′. We deduce that this gives us a well defined function W′/∼ → T. Note, moreover, that by the definition of S and the algebra structure on W′/∼, we can easily see that the function is the unique algebra structure preserving map, which gives us the lemma below.
Lemma 3.18. W ′ /∼ with the algebra structure given in lemma 3.16 is initial.
We can now deduce the main theorem of this section.
Theorem 3.19. Let C be a ΠW -pretopos.

1. Suppose we are given a polynomial with reductions in C together with a 2-cover base for it. Then we can construct an initial algebra for the corresponding pointed polynomial endofunctor.
2. Suppose that WISC holds in C, making it a predicative topos. Then every pointed polynomial endofunctor admits an initial algebra. In other words, C has all W -types with reductions.
A Simplification in Categories of Presheaves
In section 3 we gave a very general construction that works for any polynomial with reductions in any predicative topos. However, the result is in some ways unsatisfactory. Since we relied on effective quotients, the result does not apply to presheaf assemblies, which are one of the main intended applications of this work. The reliance on cover bases and WISC may turn out to be less serious in practice, but is still not ideal. It could, for example, lead to subtle coherence issues when applying the results to the semantics of type theory.
In this section we therefore give another version of the main result, which will appear as theorem 4.16. We no longer assume effective quotients or WISC, so the result is applicable to a wider range of categories, and we obtain more concrete descriptions of the initial algebras. The class of polynomials with reductions that we consider is, however, much more restricted, but will still include many interesting examples.
Recall, e.g. from [13,Chapter 7] that in any finitely complete category we can define the notion of internal category, and thereby a notion of category of internal diagrams (which we will refer to here as internal presheaves).
Let C be a finitely cocomplete locally cartesian closed category with disjoint coproducts and W -types (e.g. a category of assemblies). Note that for any internal category C in C, the category of internal presheaves is also finitely cocomplete locally cartesian closed, and has disjoint coproducts. We will construct initial algebras for a certain class of polynomials with reductions in such internal presheaf categories.
Dependent W -Types in Internal Presheaves
We first give an explicit description of dependent W -types in presheaves. We will consider polynomial endofunctors over the following polynomial in internal presheaves. Note that by forgetting the action, we can also view this as a polynomial in C/C_0.

Suppose we are given a morphism of presheaves A → Z. Then, using (the internal version of) the Yoneda lemma and the adjunctions f* ⊣ Π_f and Σ_h ⊣ h*, we can show that for c ∈ C_0 elements of Π_f h*(A)(c) consist of y ∈ Y(c) (which we view as a map y : y(c) → Y) together with a map f*(y(c)) → A making the following square commute.
Expanding the definition of y(c), we see that this consists of a (dependent) function assigning, for each d ∈ C_0, each σ : c → d in C_1, and each x ∈ f_d^{-1}(Y(σ)(y)), an element α(σ, x) of A(d, h_d(Y(σ)(y))), satisfying the naturality condition that for all τ : d → d′ we have α(τ ∘ σ, X(τ)(x)) = A(τ)(α(σ, x)).

Note that if we drop the naturality condition, then we get a dependent polynomial functor in C. We denote the corresponding dependent W -type as W_0. We define the action of morphisms making W_0 into a presheaf over Z as follows. For c ∈ C_0 and z ∈ Z(c), everything in W_0(c, z) is of the form sup(y, α) where y and α are as above. Given τ : c → c′, we define W_0(τ)(sup(y, α)) to be sup(Y(τ)(y), α′) where α′(σ, x) is defined to be α(σ ∘ τ, x).

Following Moerdijk and Palmgren in [17, Paragraph 5.4] we note that if we can form the subobject of W_0 consisting of the hereditarily natural elements, then this gives the W -type in presheaves. We can construct this subobject in an arbitrary locally cartesian closed category with W -types by a similar technique to the construction of dependent W -types from ordinary W -types, which we do in the following lemma.

Lemma 4.1. The subobject W of W_0 consisting of the hereditarily natural elements can be constructed in any locally cartesian closed category with W -types.

Proof. We first modify the definition of W_0 to get a dependent W -type, V, defined as follows. We take the context and the constructors to be the same as for W_0. For W_0, the arity at y ∈ Y(c, z) consisted of pairs (σ, x) where σ : c → d and x ∈ X(d, Y(σ)(y)). For V, we instead define an element of the arity over y to consist of two morphisms σ : c → d and τ : d → e in C, together with x ∈ X(d, Y(σ)(y)). We define the reindexing map at (σ, τ, x) to be Z(τ ∘ σ)(z). In other words we add an element to V(c, z) of the form sup(y, α) whenever y ∈ Y(c, z) and α is a dependent function such that for σ : c → d, τ : d → e and x ∈ X(d, Y(σ)(y)), α(σ, τ, x) is an element of V(e, Z(τ ∘ σ)(z)).
We define two maps r, s : W_0 → V recursively: r(sup(y, α)) := sup(y, α_r) where α_r(σ, τ, x) := r(α(τ ∘ σ, X(τ)(x))), and s(sup(y, α)) := sup(y, α_s) where α_s(σ, τ, x) := s(W_0(τ)(α(σ, x))). We define W to be the equaliser of r and s; unwinding the definitions, an element of W_0 belongs to W precisely when it is hereditarily natural. Note that r and s have a common retract t : V → W_0 defined recursively as follows. Given an element of V of the form sup(y, α), we define t(sup(y, α)) to be sup(y, α′) where α′ is defined as follows. Given σ : c → d and x ∈ X(d, Y(σ)(y)), we define α′(σ, x) := t(α(σ, 1_d, x)).
Decidable and Locally Decidable Polynomials with Reductions
We now define the class of polynomials with reductions that we will work over. The basic idea is that a polynomial is decidable when for each constructor there is either no reduction at all, or there is exactly one reduction. W -types with reductions over decidable polynomials can be viewed directly as dependent W -types. This makes them simple to construct but not so useful in practice when we already have W -types. Therefore, instead of decidable polynomials with reductions, we look at locally decidable polynomials with reductions. In this case we work in an internal presheaf category, and then the polynomial does not have to be decidable in the internal logic of the presheaf category. It turns out to be sufficient that it is decidable in the external category in order to construct the initial algebras.

Definition 4.4. We say that a polynomial with reductions is decidable if the following (equivalent) conditions hold.

1. The polynomial with reductions (1) is isomorphic to one of the following form.
2. f • k is isomorphic to one of the inclusion maps of a coproduct.
3. f • k is a monomorphism with decidable image.
4. In the internal logic, the following holds. For each constructor y ∈ Y_z, either there is no x ∈ X_y such that R_{z,y,x} is inhabited, or there exists exactly one x ∈ X_{z,y} such that R_{z,y,x} is inhabited, and in this case R_{z,y,x} also has exactly one element.

Definition 4.5. When we are working in the internal logic of the locally cartesian closed category, and y ∈ Y_z, we will say y does not reduce if R_{z,y,x} is empty for all x, and we will say y reduces at x if x is unique such that R_{z,y,x} is inhabited.
Definition 4.6. We say a polynomial with reductions in presheaves is locally decidable if its image in C/C_0 after forgetting the action is decidable.

Given a polynomial with reductions in a category of presheaves, it therefore makes sense to talk about it being locally decidable, and it also makes sense to talk about it being decidable internally in the category of presheaves. It's important to note the distinction between the two notions.
Every decidable polynomial with reductions is also locally decidable, but the converse does not hold in general. Given a morphism σ : c → d in the internal category C, local decidability says that any y ∈ Y(c) either lies in the image of f_c ∘ k_c or does not, and the same for y ∈ Y(d). In any case we know that if y ∈ Y(c) belongs to the image of f_c ∘ k_c then also Y(σ)(y) belongs to the image of f_d ∘ k_d. Decidability states that the converse also holds: if Y(σ)(y) lies in the image of f_d ∘ k_d, then y lies in the image of f_c ∘ k_c. In order to get a result applicable to the CCHM model of type theory, we need it to apply to locally decidable pointed polynomial endofunctors that aren't decidable. Explicitly, we need to allow for the case of y ∈ Y(c) that does not belong to the image of f_c ∘ k_c but where Y(σ)(y) does belong to the image of f_d ∘ k_d, or informally "sup(y, α) does not yet reduce at c, but will reduce at d."
Construction of the Initial Algebras
Assume we are given a polynomial with reductions of the form (1) that is locally decidable. We will construct an initial algebra for the corresponding pointed endofunctor, showing that W -types with reductions exist for all locally decidable polynomials with reductions (theorem 4.16).
Normal Forms
We first form a variant of the dependent W -type W_0 that we used in the construction of dependent W -types in presheaves. We call this N_0, and define it as follows. For c ∈ C_0 and z ∈ Z(c), we add an element sup(y, α) to N_0(c, z) whenever y ∈ Y(c, z) with y ∉ im(f ∘ k) and α ∈ Π_{d∈C_0} Π_{σ : c→d} Π_{x∈X(d, Y(σ)(y))} N_0(d, h_d(Y(σ)(y))). For the moment we don't add any naturality condition. Note that if W_0 is the corresponding W -type over all elements of Y (again, with the naturality condition dropped), then we have a canonical monomorphism i : N_0 → W_0 over Z. We refer to elements of N_0 as normal forms. In other words we only consider those terms that do not reduce because they have a constructor y ∈ Y whose fibre over f ∘ k is empty.

Like with W_0, we can define for each τ : c → c′ and each z ∈ Z(c), a map N_0(τ) : N_0(c, z) → N_0(c′, Z(τ)(z)). Any element of N_0(c, z) is of the form sup(y, α). Define α′ the same as for W_0. Note that sup(Y(τ)(y), α′) is not necessarily an element of N_0(c′, Z(τ)(z)), since Y(τ)(y) might reduce. However, by local decidability we can split into two cases: either Y(τ)(y) reduces or it does not. If it does not, we define N_0(τ)(sup(y, α)) to be sup(Y(τ)(y), α′), the same as for W_0. If Y(τ)(y) reduces, at x say, we define N_0(τ)(sup(y, α)) to be α(τ, x). Unlike with W_0, this does not make N_0 into a presheaf over Z. We will see why in the proof of lemma 4.8.
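The role of normal forms can be illustrated with a toy, non-dependent Haskell sketch (the setup and names are ours, and we ignore the presheaf indexing): assuming, as in the decidability condition of definition 4.4, that each constructor either does not reduce or reduces at exactly one known position, a smart constructor applies reductions eagerly, so every tree actually built is a normal form.

    -- A constructor either has no reduction, or reduces at exactly one
    -- of its argument positions.
    data Con = Con
      { conName   :: String
      , reducesAt :: Maybe Int  -- Nothing: no reduction; Just i: position i
      }

    -- Trees whose nodes never reduce: the analogue of N_0.
    data Tree = Node Con [Tree]

    -- Smart constructor: when a reduction applies, return the designated
    -- subtree instead of building a node (the equation sup(y, α) = α(x)).
    sup :: Con -> [Tree] -> Tree
    sup c args = case reducesAt c of
      Just i  -> args !! i
      Nothing -> Node c args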
The Presheaf of Natural Normal Forms
By analogy with W in section 4.1, we define a subobject N of N 0 . Given sup(y, α) ∈ N 0 (c, z), we say it is natural if for all σ : c → d and τ : d → e in C and all x ∈ X Y (σ)(y) , we have N 0 (τ )(α(σ, x)) = α(τ • σ, X(τ )(x)). We define the subobject N of N 0 of hereditarily natural elements to be those of the form sup(y, α) which are natural and such that for all σ : c → d and all x ∈ X Y (σ)(y) , α(σ, x) is hereditarily natural. Formally, we can define this object using the same technique as for lemma 4.1.
The Algebra Structure
It only remains to check that N really is an initial algebra. In this section we define the algebra structure s. We will use the presentation we saw in section 2.2 where an algebra structure is an algebra structure for the underlying dependent polynomial endofunctor that satisfies the reduction equations. We need to define s_{z,c}(y, α) whenever α : f*(y(c)) → h*(N). As explained in section 4.1, this is just an element of Π_{(σ,x)∈Σ_{σ : c→d} X(Y(σ)(y))} N(d, Z(σ)(z)) that satisfies the naturality condition. We split into cases depending on whether y reduces. If it does, then we define s(y, α) to be α(x) where y reduces at x. Otherwise, we take s(y, α) to be the element sup(y, α) in N_0, which in fact lies in N since it is clearly hereditarily natural by the fact that α maps into N and is natural. We also need to show that s is natural, which we do in the lemma below.
Lemma 4.9. The operation s z,c defined above is natural in the following sense. For any τ : c → c ′ in C and z ∈ Z(c), we have the following commutative diagram (where the dependent product is the one internal in the category of presheaves).
Proof. There are three cases to consider: either neither y nor Y(τ)(y) reduces, or Y(τ)(y) reduces but not y, or y reduces. The first case is essentially the same as for ordinary W -types in presheaves, and the other two cases are straightforward to check.
Finally, we also need to check the reduction equations. However, note that they hold internally if and only if they hold pointwise, and it is clear that they do by the definition of N and s.
We can now deduce the following lemma.
Lemma 4.10. The operation s c defined above gives N the structure of an algebra over the given pointed polynomial endofunctor.
Proof of Initiality
We now show that the algebra structure we have defined really is initial. Suppose we are given an internal presheaf A with the structure of an algebra over the pointed polynomial endofunctor. As before we use the presentation in section 2.2, where we view an algebra over the pointed endofunctor as an algebra structure over the dependent polynomial endofunctor Σ g Π f h * , which we'll write as r : Σ g Π f h * (A) → A, such that this algebra structure satisfies the reduction equations. We need to define a structure preserving map t : N → A, and show that it is the unique such map. The basic idea for the definition of t is fairly simple. Given sup(y, α) in N (c, z), we want to define t(sup(y, α)) to be r(y, t • α). This is however quite tricky to formalise, since r(y, t • α) is only well defined when we know that t • α is natural, but this only makes sense when we have already defined at least some of t. This issue already occurs for ordinary W -types in presheaves, but is especially relevant here, where the proof of naturality is more difficult. What we need to do is to simultaneously show that t is natural while we are defining it, since then we can deduce that t • α is also natural, and so r(y, t • α) is well defined.
To help us with this, we define another presheaf T, again using dependent W -types in C over Z, where we modify the definition of N by adding in also elements of A. We will in fact construct T in several stages, first using a dependent W -type, T_0, then taking a succession of inductively defined subobjects T_1, T_2 and finally T. In each case, we'll just give the inductive definition, but in fact they can all be constructed in arbitrary locally cartesian closed categories with W -types using similar techniques to those in the proof of lemma 4.1.
We first define the dependent W -type, T 0 by the following inductive definition.
Let c ∈ C and z ∈ Z(c). Suppose that we are given y ∈ Y (c, z) such that y does not reduce, a ∈ A(c, z) and α in Π Σ σ : c→d X(Y (σ)(y)) T 0 (d, Z(σ)(z)). Then T 0 (c, z) contains an element of the form sup(y, a, α).
Note that we have a projection π 0 : T 0 → N 0 over Z by simply "forgetting" the a's. We also have a projection π 1 : T 0 → A given by π 1 (sup(y, a, α)) := a.
We define T 0 (τ ) : T 0 (c, z) → T 0 (c ′ , Z(τ )(z)) the same as for N 0 (τ ). We now define T 1 to be the subobject of T 0 of hereditarily natural elements, which is defined exactly the same as in N . It follows that π 0 restricts to a function T 1 → N . We also have naturality in the following lemma.
Lemma 4.11. T_1 is a presheaf over Z, and π_0 restricts to a natural transformation T_1 → N.

Proof. Since we mimicked the construction of N from N_0, it's clear that we can use the same proof as in lemma 4.8 to show T_1 is a presheaf and that π_0 is natural.
We now define a subobject T_2 of T_1 by the following inductive definition. Given sup(y, a, α) ∈ T_1, we say sup(y, a, α) belongs to T_2 if the following hold: for all τ : c → d such that Y(τ)(y) reduces, at x say, we have A(τ)(a) = π_1(α(τ, x)); and for all σ : c → d and x ∈ X_{Y(σ)(y)}, α(σ, x) belongs to T_2.
We can now show the following lemma.
Lemma 4.12. The restriction of π 1 to T 2 is natural.
The key point is that naturality in the definition of T_1 ensures that we also have naturality for the composition of α with the projection to A, in the following sense.

Lemma 4.13. For any sup(y, a, α) in T_2, the composition π_1 ∘ α is natural.
Proof. This is straightforward from the definition of T 1 (together with the observation that the same then applies when restricting to the subobject T 2 ) and lemma 4.12.
We now know that the expression r(y, π_1 ∘ α) is well defined, which finally allows us to define T as the subobject of T_2 defined inductively as follows. An element sup(y, a, α) of T_2 belongs to T if both of the conditions below hold: firstly, a = r(y, π_1 ∘ α), and secondly, α(σ, x) belongs to T for all σ and x.
We can now show the main lemma.
Lemma 4.14. Let T be as above. Then π 0 : T → N is an isomorphism.
Proof. We show by induction on the construction of N that for all v ∈ N , the fibre π −1 0 ({v}) in T contains exactly one element. Suppose we are given an element of N (c, z) of the form sup(y, α). Clearly any element of π −1 0 (sup(y, α)) must be of the form sup(y, r(y, π 1 • π −1 0 • α), π −1 0 • α). We just need to check that this really is a well defined expression and that it belongs to T (as opposed to just T 0 , say).
In the above, we were just using π −1 0 as a convenient notation for a partial function, rather than a total inverse. Note however, that the induction hypothesis tells us that π −1 0 • α is a well defined function and the usual proof that the levelwise inverse of a natural transformation is natural still applies and, together with lemma 4.11 and the naturality of α, allows us to show that π −1 0 • α is natural.
We can now define t : N → A to be π 1 • π −1 0 . We now just need to check that it is a structure preserving map, and unique with this property.
Lemma 4.15. The map t : N → A defined by π 1 • π −1 0 is a natural transformation that is structure preserving and is the unique such map.
Proof. To show that t is structure preserving, we again need to split into two cases depending on whether there is a reduction. However, both cases are straightforward to show from the definition.
It's also clear from the definition that t is the unique structure preserving map, and in fact for uniqueness it's sufficient just to look at the case where there is no reduction.

Theorem 4.16. In any category of internal presheaves in a finitely cocomplete locally cartesian closed category with disjoint coproducts and W -types, every locally decidable pointed polynomial endofunctor has an initial algebra.
W -Types with Reductions in Classical Logic
We will see in this section how to construct all W -types with reductions in boolean toposes with natural number object. We have already seen the main idea in the previous section. Every topos is a category of internal presheaves over itself via the trivial category, and in this case locally decidable is the same as decidable. For a boolean topos, a polynomial with reductions is decidable just when the map f • k is monic. This only leaves the case where f • k is not monic. What this says is that the same constructor can reduce in more than one place. The key point is that when we know that this happens, things become trivial, in the following sense.
Lemma 5.1. Suppose we are given a polynomial with reductions of the form (1). Let (A_z)_{z∈Z} be a family of types over Z with algebra structure given by c (which we will view as an algebra on the underlying polynomial that satisfies the reduction equations). Suppose that for some z ∈ Z there is a constructor y ∈ Y_z that reduces in two distinct places x_1 ≠ x_2 ∈ X_y and there exists a dependent function α : Π_{x∈X_y} A_{h(x)}. Then A_z contains exactly one element.
Proof. First of all, note that A z contains at least one element using the algebra structure, which is c(y, α).
Next, suppose that a 1 and a 2 are both elements of A z . Then we define a new dependent function α ′ as follows.
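A reconstruction of the intended definition, consistent with how the reduction equations are applied below (the case split is where classical logic is used):

    α′(x) = a_1     if x = x_1,
    α′(x) = a_2     if x = x_2,
    α′(x) = α(x)    otherwise.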
Note that the coherence condition ensures that this is still a dependent function of type Π x:Xy A h(x) . Also note that we needed classical logic to show this is a well defined function.
Then the reduction equation at x_1 tells us c(y, α′) = a_1, and the reduction equation at x_2 tells us c(y, α′) = a_2. Hence a_1 = a_2. Therefore, A_z contains exactly one element.
We will now use this idea to construct any W -type with reductions. We aim towards the following theorem.
Theorem 5.2. Let C be a boolean topos with natural number object. Then C has all W -types with reductions.
We first define a useful construction. Suppose we are given a subobject C ⊆ Z. Then we construct a new polynomial as follows. We work over the same context Z. For z ∈ C, we define the set of constructors Y ′ z to consist of exactly one element * , with empty arity X ′ * := ∅.
Otherwise, for z ∉ C, we define Y′_z to be the subobject of Y_z consisting of those y with no reductions, that is, those where R_{y,x} = ∅ for all x ∈ X_y. We define the arity X′_y to be X_y. Write W^C for the resulting W -type on the polynomial. Observe that for z ∈ C, W^C_z has exactly one element, of the form sup(*, ∅), where * is the only constructor over z.

We say that C is closed if whenever z ∈ Z is such that there exists a constructor y ∈ Y_z that reduces in two distinct places x_1 ≠ x_2 and there exists some dependent function α : Π_{x:X_y} W^C_{h(x)}, then we have z ∈ C. We then define C_0 to be the intersection of all closed sets C.
Lemma 5.4. C_0 is closed.

Proof. Let z ∈ Z be such that there exists a constructor y ∈ Y_z that reduces in two distinct places x_1 ≠ x_2, and let α : Π_{x:X_y} W^{C_0}_{h(x)}. We need to show that z ∈ C for any closed set C, so let C be an arbitrary closed set.
We first construct a map i : W C0 → W C over Z recursively as follows. Suppose that z ′ ∈ Z, and we are given an element of W C0 z ′ of the form sup(y, α). First suppose that z ′ ∈ C. In this case we take i(sup(y, α)) to be the unique element of W C z ′ . Otherwise we know that z ′ / ∈ C. In that case, we define i(sup(y, α)) to be sup(y, i • α), which is a valid element of W C z ′ since z ′ / ∈ C, and also z ′ / ∈ C 0 (since C 0 ⊆ C).
We then use i to construct an element of Π_{x∈X_y} W^C_{h(x)}, namely i ∘ α. Since C is closed and y reduces in two distinct places, we can now deduce that z ∈ C.
Since we showed z ∈ C for any closed set, we have z ∈ C 0 , and so C 0 is closed, as required.
Lemma 5.5. For any closed set C, we give W C an algebra structure d for our given polynomial with reductions.
Proof. Suppose we are given y ∈ Y z for some z ∈ Z, and a dependent function α : Π x∈Xy W C h(x) . To define d(y, α) we split into cases. Firstly, if z ∈ C, we take d(y, α) to be the unique element of W C z . Now consider just the case when z / ∈ C. If y reduces in two different places, then we could show z ∈ C, since C is closed, deriving a contradiction. Hence we may assume that y either reduces exactly once, or not at all. We now proceed the same as in section 4.3.3. If y reduces at x, we define d(y, α) to be α(x). Otherwise y does not reduce at all, and so we can use the W -type structure and take d(y, α) to be sup(y, α).
This algebra structure clearly satisfies the reduction equations.
Lemma 5.6. W C0 with the algebra structure given in lemma 5.5 is initial.
Proof. Suppose we are given a family of types (A z ) z∈Z with algebra structure c. We need to show that there is a unique structure preserving map i : W C0 → A over Z.
We define C to consist of those z ∈ Z such that A z contains exactly one element. We now recursively define a map j : W C → A. Suppose we are given z ∈ Z, and sup(y, α) ∈ W C z . If y = * , then we must have z ∈ C. But then we can take j(sup( * , α)) to be the unique element of A z . Otherwise, y must be one of the original constructors in Y z , and α : Π x:Xy W C h(x) . We define j(sup(y, α)) to be c(y, j • α).
We can now deduce that C is closed, since if we are given a constructor y ∈ Y z that reduces in two distinct places and a dependent function α : Π x:Xy W C h(x) , then by considering j•α, we show by lemma 5.1 that A z has exactly one element, and so z ∈ C. But this implies that C 0 ⊆ C, and so we get a canonical map W C0 → W C , as in the proof of lemma 5.4. Composing with j gives us the map W C0 → A over Z.
However, it is now straightforward to check that this is the unique structure preserving map.
We can now use the above lemma to deduce the main theorem 5.2.
Review of Lifting Problems over Codomain Fibrations
We recall some definitions from [28, Section 7.5]. Since we focus only on the special case of codomain fibrations, we can simplify some of the definitions a little.
Definition 6.1. Let f be a map in C/I and let g be a map in C/J. A family of lifting problems from f to g over K ∈ C is a diagram of the following form, where the squares on the left are both pullbacks.
A solution to the family of lifting problems is a map σ * (V ) → X making the upper right square into two commutative triangles.
Definition 6.2. Let f be a map in C/I and let g be a map in C/J. The universal family of lifting problems from f to g is the family of lifting problems where we define K to be the type below, and the right maps in the family of lifting problems are given by evaluation.
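In type theoretic notation, writing the generating family as (m_i : U_i → V_i)_{i∈I}, a natural reconstruction of the omitted type is

    K := Σ_{i∈I} Σ_{β : V_i → Y} Π_{u∈U_i} X(β(m_i(u))),

so that an element of K consists of an index i together with a lifting problem of m_i against f, and σ : K → I is the first projection.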
Definition 6.3. Step 1 of the small object argument at Y is the pointed endofunctor R_1 : C/Y → C/Y defined as follows. Suppose that we are given f : X → Y in C/Y. We first form the universal lifting problem from m to f as in definition 6.2. We then define R_1 f to be the unique map out of the pushout, with unit given by the pushout inclusion λ_f, as below.
Proposition 6.4. The pointed endofunctors R_1 are preserved by pullback along all maps J′ → J. We say R_1 is a fibred lawfs.

Definition 6.5. We say R_1 is strongly fibred if it is preserved by pullback along all maps Y′ → Y.
Given any f : X → Y in C/Y, we have a pointed endofunctor, which we will denote I_X, defined by coproduct, sending X′ to X′ + X, with unit given by the coproduct inclusion. We clearly have the following proposition (by taking reductions and arities both to be initial).

Proposition 6.6. For any X, I_X is pointed polynomial.

Theorem 6.7. Suppose that for each map Y → J and every f : X → Y in C/Y we are given a choice of initial algebra for the pointed endofunctor I_X + R_1. Then the awfs cofibrantly generated by m exists, and is fibred.
Proof. See [28, Corollary 5.4.7].

Theorem 6.8. If R_1 is strongly fibred then so is the resulting cofibrantly generated rawfs, if it exists.
Step 1 as a Pointed Polynomial Endofunctor
Theorem 6.9. R 1 is pointed polynomial.
Proof. Unfolding the type theoretic definition of universal lifting problem, we get the following descriptions of σ * (U ) and σ * (V ).
In this form it is clear that the definition matches that of a pointed polynomial endofunctor.
It is easiest to understand the definition of the polynomial with reductions for R 1 when we phrase it in terms of constructors, arities, reindexing and reductions. We read these off from the description above.
The overall context we are working in is the object Y , which in type theoretic notation is Σ j:J Y (j) (since we are thinking of Y as a family of types indexed by J).
We can think of the corresponding W -type with reductions directly in terms of lifting problems as follows. Suppose we are given a constructor (i, v 0 , β) and a map α : Π Σ v:V (i) U(i,v) X(j, β(j, p 0 (z))). Then, firstly β and α together form a lifting problem of m i against f j . We think of sup(i, v 0 , β, α) as a diagonal filler of the lifting problem, evaluated at v 0 . The reduction equations then ensure that the upper triangle of the diagonal filler commutes. Therefore, we think of an initial algebra of R 1 as the result of freely adding a filler for every lifting problem, subject to ensuring that the upper triangles do always commute.
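Schematically (a standard picture rather than a display from the paper), β and α assemble into the outer square below, and sup(i, v_0, β, α) plays the role of the dotted diagonal filler, evaluated at v_0 ∈ V(i):

             α
        U_i ----> X_j
         |      .↗ |
     m_i |    .    | f_j
         v  .      v
        V_i ----> Y_j
             β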
An initial algebra for R 1 + I X is similar. Once again, we are freely adding a filler for every lifting problem. However in this case we start off with a copy of X before adding all the fillers.
Finally, we will later need the lemma below.
Lemma 6.10. For each Y → J, R 1 at Y is generated by the polynomial with reductions of the form below, where the map A → C is a pullback of the map U → I.
Proof. We can read off a description of the map A → C from the arguments above. In type theoretic notation, A and C are defined as below, with the map A → C given by projection.
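A reconstruction of the omitted displays, consistent with the conclusion drawn below (and suppressing the indexing over Y): each element of C contains in particular an index i ∈ I, and one may take

    C := Σ_{i∈I} Σ_{v_0∈V(i)} (V_i → Y),      A := Σ_{c∈C} U_{i(c)},

with A → C the evident projection, where i(c) denotes the I-component of c.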
However, in this form it is clear that the map A → C is just the pullback of the map U → I along the projection C → I.
We can now deduce the following.

Theorem 6.11. Suppose we are given a family of maps of the following form over the codomain functor on a ΠW -pretopos. Suppose furthermore that we are given a 2-cover base of the map U → I. Then m cofibrantly generates an awfs.
Proof. We have shown in theorem 6.9 that R 1 is pointed polynomial. Hence for each f : X → Y , the pointed endofunctor R 1 + I X from theorem 6.7 is also pointed polynomial.
I_X trivially has a 2-cover base. R_1 has a 2-cover base since, by lemma 6.10, the relevant map is a pullback of the map U → I, for which we are given a 2-cover base, and so we can apply lemma 3.10.
Hence we can construct a 2-cover base for each R_1 + I_X by lemma 3.11. But then by theorem 3.19 we can find initial algebras, so we can deduce by theorem 6.7 that the cofibrantly generated awfs on m exists.

Corollary 6.12. Suppose we are given a family of maps of the following form over the codomain functor on a ΠW -pretopos satisfying WISC. Then the awfs cofibrantly generated by the family of maps exists.
Proof. By WISC, the map U → I has a 2-cover base. Hence we can apply theorem 6.11.

Remark 6.13. One might expect that corollary 6.12 can be proved directly without going via theorem 6.11, by using WISC directly to find each 2-cover base. However, this doesn't work because we need to have a choice of 2-cover bases for every vertical map X → Y → J, and WISC only tells us at least one such 2-cover base exists. When we use theorem 6.11 this does not matter because we only have to apply WISC once (or rather, twice) to get a 2-cover base for the map U → I, and from that we can define all the other 2-cover bases that we need.
We can also apply the simplified construction from section 4 to get the following theorem.

Theorem 6.14. Let C be a finitely cocomplete locally cartesian closed category with disjoint coproducts. Let A be an internal category in C, and C^A the category of diagrams of shape A. Suppose we are given a family of maps of the following form over the codomain functor on C^A, and suppose that the generating map is a locally decidable monomorphism. Then the awfs cofibrantly generated by the family of maps exists.
Definition 6.18. Fix a square over an object I as in (9). We define a pointed endofunctor R_1 over cod, called step one of the small object argument, as follows. Given f : X → Y we define R_1 f to be the map below given by the universal property of the pushout, where we take σ : K → I to be as in the universal lifting problem from the square to f. The unit at f is given by the inclusion λ_f into the pushout.
Theorem 6.19. For any family of squares as in (9), step one of the small object argument is a pointed polynomial endofunctor.
Proof. By unfolding the type theoretic definition, similarly to theorem 6.9.
Theorem 6.20. Suppose that C is a locally cartesian closed category and we are given a family of squares as in (9). Suppose further that one of the following two conditions holds.
1. C is a ΠW -pretopos satisfying WISC.

2. C is a category of internal presheaves over a finitely cocomplete locally cartesian closed category with disjoint coproducts, and the map U_0 → V_0 is a locally decidable monomorphism.
Then the rawfs cofibrantly generated by (9) exists. Furthermore, if the map V 1 → I is an isomorphism then the resulting rawfs is strongly fibred.
For showing the rawfs is strongly fibred, we use [28, Corollary 7.5.5].

W -Types from Cofibrantly Generated Awfs's

In section 6 we saw that cofibrantly generated awfs's could be constructed using W -types and WISC. We will now show that the assumption of the existence of W -types is strictly necessary. We will show that in fact W -types can be recovered from the existence of cofibrantly generated awfs's. This shows that the results in section 6 don't hold for the category of sets in CZF, even if we add PAx, a choice axiom which implies WISC.
Theorem 7.1. Let C be a locally cartesian closed category with disjoint coproducts. Suppose that every monic decidable family of maps cofibrantly generates an awfs. Then C has all W -types.
We refer the reader to [33] by Van Oosten for a comprehensive introduction to all of these notions. We will use the same terminology and notation as Van Oosten. None of these categories admit colimits over arbitrary infinite sequences (not even countably infinite sequences).
Kan Fibrations in the Effective Topos
In [31], Van den Berg and Frumin considered two classes of maps in the effective topos Eff, referred to as trivial fibrations and fibrations. In [28, Section 7.5.2], the author showed that these classes are both cofibrantly generated with respect to the codomain fibration, by the following two families of maps.
In loc. cit., Van den Berg and Frumin showed that if one restricts to the full subcategory of Eff of fibrant objects (i.e. objects X where the unique map X → 1 is a fibration) then the fibrations are the right class of a wfs, and moreover this forms part of a model structure on the subcategory. However, their proof relies on restricting to fibrant objects, and doesn't apply to the entire category Eff.
We can now confirm that in fact we do get awfs's on all of Eff, without restricting to fibrant objects.

Theorem 8.1.

1. There is an awfs (C, F_t) on Eff whose right maps are precisely the trivial fibrations.

2. There is an awfs (C_t, F) on Eff whose right maps are precisely the fibrations.

3. The awfs (C, F_t) is strongly fibred (i.e. stable under pullback).
Proof. In [30], Van den Berg showed that Ef f is a ΠW -pretopos and satisfies WISC (there referred to as AMC). We can therefore construct (C, F t ) and (C t , F ) using corollary 6.12. We see that (C, F t ) is strongly fibred by [28,Corollary 7.5.5].
Remark 8.2. In fact we can define (C, F t ) in two different ways. We can either take the underlying lawfs to be (C 1 , F t 1 ) together with a multiplication that we can add using the fact that cofibrations can be composed. Alternatively, we can take (C, F t ) to be the awfs algebraically free on (C 1 , F t 1 ). As Gambino and Sattler point out in [10, Remark 9.5] these two definitions are not the same. However, both are strongly fibred and we end up with the same wfs in either case.
Computable Hurewicz Fibrations in the Kleene-Vesley Topos
Recall that the function realizability topos, RT(K 2 ) is the realizability topos on K 2 . Then RT(K 2 ) has as a subcategory, the Kleene-Vesley topos, KV, which is defined as the relative realizability topos RT(K rec 2 , K 2 ). See [33,Section 4.5] for more details.
We can embed subspaces of R^n into RT(K_2). A subspace of R^n is in particular a countably based T_0-space, and Bauer showed in [3] that such spaces embed into PER(K_2), which in turn embeds into RT(K_2). Note however, that for the special case of subspaces of R^n, we can more explicitly describe the embedding into Asm(K_2). Given a subspace X of R^n, we take the underlying set of the assembly to be X itself, and we define the existence predicate, E, by taking E(x) to be the set of (functions encoding) Cauchy sequences of rationals that converge to x, for each x ∈ X.
Hence the endpoint inclusion into the topological interval, δ_0 : 1 → [0, 1], can be viewed as a map in RT(K_2). Moreover, since the map is evidently computable, it in fact lies in the subcategory KV.

Definition 8.3. We say a map in KV is a computable Hurewicz fibration if it has the fibred right lifting property against the following (trivial) family of maps.
Note that since this is the fibred right lifting property, it is equivalent to having the right lifting property against the map δ 0 × X : X → X × [0, 1], for every object X of KV. This justifies the name computable Hurewicz fibration, by analogy with Hurewicz fibrations in topology.
Theorem 8.4. There is an awfs on KV where the maps that admit the structure of a right map are precisely the computable Hurewicz fibrations.
Proof. It suffices to show that KV is a ΠW -pretopos and satisfies WISC. Van den Berg showed in [30] that this is the case for internal realizability toposes, as long as it holds in the background. However, Birkedal and Van Oosten showed in [4] that relative realizability toposes can be viewed as internal realizability toposes in Set^2, so KV is indeed a ΠW -pretopos satisfying WISC. We can now apply corollary 6.12.
Cubical Assemblies
We will construct a category of internal presheaves in Asm(K 1 ) which we will call the category of cubical assemblies, which will be a realizability variant of the category of cubical sets defined by Cohen, Coquand, Huber and Mörtberg in [7]. The definitions of Kan trivial fibration and fibration are based on the presentation in [28,Section 7.5.4].
First, note that we can view the free de Morgan algebra on a countable set A as follows. We write dM_0(A) for the set of strings in the language of de Morgan algebras with constants from A. Then dM(A) is the quotient of dM_0(A) by the appropriate equalities corresponding to the de Morgan algebra axioms. We write φ ≡ ψ if φ and ψ are words that are identified in dM(A). Clearly there is a Gödel numbering of dM_0(A). Given φ ∈ dM_0(A), we write the corresponding Gödel number as ⌜φ⌝.
We define an internal category in assemblies as follows. We take the underlying small category to be the same as for CCHM cubical sets, that is, the full subcategory of the Kleisli category on dM with objects the finite subsets of A. We then need to define existence predicates E_0 and E_1 for the objects and morphisms. Given a finite subset A of A, we define E_0(A) to consist of codes of lists a_1, ..., a_n such that A = {a_1, ..., a_n}. Given a morphism θ : A → B, we define E_1(θ) to consist of triples ⟨d, c, e⟩, where d and c are codes for the domain and codomain, and e tracks the function A → dM(B) underlying θ; that is, given a ∈ A, e applied to (a code for) a is defined and equal to ⌜φ⌝ for some φ such that φ ≡ θ(a). We call this internal category the cube category.
We now define the category of cubical assemblies to be the category of diagrams for the cube category. Note that the forgetful functor Γ : Asm(K 1 ) → Set extends to a functor from cubical assemblies to cubical sets.
We define an interval object I as the following cubical assembly. The underlying cubical set is the same as the interval in CCHM cubical sets: namely, we take I(A) to be dM(A). We define the existence predicate on I(A) by taking E([φ]) to be the set consisting of ⌜ψ⌝ for ψ such that ψ ≡ φ.
We define the face lattice, F, to be the quotient of I by the following equivalence relation. We define [φ] ∼ [ψ] when φ ≡ 1 ⇔ ψ ≡ 1 holds in cubical assemblies; in Asm(K_1), this says that for every morphism θ of the cube category, θ(φ) ≡ 1 if and only if θ(ψ) ≡ 1.

As Coquand et al remark in [7, Section 3], free de Morgan algebras have decidable equality. In fact the equality in dM(A) is uniformly computably decidable over all finite subsets A of A, and so I has decidable equality in Asm(K_1).
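The decision procedure is not spelled out here, but one standard route (our sketch, not necessarily the one intended in [7]) uses the fact that the variety of de Morgan algebras is generated by the four-element de Morgan algebra (the diamond 0 < a, b < 1, with a and b incomparable and fixed by the involution), so that two terms are identified in dM(A) exactly when they agree under every assignment of their variables into that algebra. A Haskell sketch (all names ours):

    import Data.List (nub)

    -- The four-element de Morgan algebra: Bot < A, B < Top.
    data DM4 = Bot | A | B | Top deriving (Eq, Enum, Bounded, Show)

    meet, join :: DM4 -> DM4 -> DM4
    meet x y | x == y = x
    meet Bot _ = Bot
    meet _ Bot = Bot
    meet Top y = y
    meet x Top = x
    meet _ _   = Bot            -- A ∧ B = Bot

    join x y | x == y = x
    join Top _ = Top
    join _ Top = Top
    join Bot y = y
    join x Bot = x
    join _ _   = Top            -- A ∨ B = Top

    neg :: DM4 -> DM4
    neg Bot = Top
    neg Top = Bot
    neg x   = x                 -- A and B are fixed by the involution

    -- Terms of dM_0 over variables indexed by Int.
    data Term = Var Int | TTop | TBot | And Term Term | Or Term Term | Not Term

    vars :: Term -> [Int]
    vars (Var v)   = [v]
    vars (And s t) = nub (vars s ++ vars t)
    vars (Or  s t) = nub (vars s ++ vars t)
    vars (Not t)   = vars t
    vars _         = []

    eval :: (Int -> DM4) -> Term -> DM4
    eval env (Var v)   = env v
    eval _   TTop      = Top
    eval _   TBot      = Bot
    eval env (And s t) = meet (eval env s) (eval env t)
    eval env (Or  s t) = join (eval env s) (eval env t)
    eval env (Not t)   = neg  (eval env t)

    -- Decide the identity φ ≡ ψ by checking all 4^n assignments.
    equiv :: Term -> Term -> Bool
    equiv s t = all (\env -> eval env s == eval env t) (envsFor vs)
      where
        vs = nub (vars s ++ vars t)
        envsFor []     = [const Bot]
        envsFor (v:us) = [ \u -> if u == v then d else env u
                         | d <- [minBound .. maxBound], env <- envsFor us ]

For instance, equiv (Not (And (Var 0) (Var 1))) (Or (Not (Var 0)) (Not (Var 1))) evaluates to True, while equiv (Or (Var 0) (Not (Var 0))) TTop evaluates to False, since the law of excluded middle fails in de Morgan algebras.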
We will check that the map ⊤ : 1 → F has decidable image. Note that since Asm(K₁) does not have effective quotients in general, we need to be a little careful.
Suppose we are given an element of F(A) of the form [φ]. By decidability of ≡ we know that φ ≡ 1 or φ ≢ 1. In the former case we clearly have φ ∼ 1 and so [φ] = 1. We now show that in the latter case [φ] ≠ 1. Just using the fact that the quotient is a coequalizer, and again that I has decidable equality, we can define a map f : F → 2 sending [φ] to 1 precisely when φ ≡ 1, which separates [φ] from 1. In fact one can deduce that this particular quotient is effective, but we don't need that here.
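Continuing the illustrative sketch above, the decidable image of ⊤ : 1 → F amounts to exactly this kind of classifying map; in code, a stand-in for f : F → 2 is one line on top of `dm_equal` (again an illustration, not the paper's construction):

```python
# Stand-in for the classifying map f : F -> 2: it sends the class of a word
# phi to 1 precisely when phi is identified with the top element 1 in dM(A).
def f(phi):
    return 1 if dm_equal(phi, ('const', 1)) else 0
```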
Therefore, by theorem 6.14 there exists a (strongly fibred) awfs cofibrantly generated by the following family of maps, which we refer to as the awfs of Kan cofibrations and trivial fibrations.
Finally, note that the Leibniz product δ₀ ×̂ ⊤ is the subobject of F × I which at A consists of the pairs ([φ], [ψ]) in F(A) × I(A) such that φ ≡ 1 or ψ ≡ 0. It follows that δ₀ ×̂ ⊤ is also locally decidable. It follows again by theorem 6.14 that there is a (fibred) awfs cofibrantly generated by the family of maps below, which we refer to as the awfs of Kan trivial cofibrations and fibrations.
Comparison With Existing Constructions of Higher Inductive Types
As remarked in the introduction, W-types with reductions may be seen as special cases of free algebras over varieties (as defined by Blass in [5]), and of QIITs, as developed by Altenkirch, Capriotti, Dijkstra and Forsberg in [1]. We were able to show that initial algebras can be constructed in a wide variety of categories. For algebraic varieties, Blass observed that initial algebras can be constructed in any topos with natural number object satisfying the internal axiom of choice, which is a much smaller class than the one we considered. However, the construction in section 3 is fairly flexible, and may lead to a refinement of Blass' result, as in the conjecture below.
Conjecture 9.1. Free algebras for varieties exist in any ΠW-pretopos that satisfies WISC.
In fact this has already been conjectured in [30, Section 8], where the question is attributed to Alex Simpson.
The question of when QIITs can be constructed remains open, although in loc. cit., Altenkirch et al. do make some progress towards a solution. The technique used in section 3 might also be helpful here.
In [15], Lumsdaine and Shulman give a very general approach to the semantics of higher inductive types in homotopy type theory. Although the setup is quite different, the problem of constructing the higher inductive types turns out to be quite similar to the problems we saw in this paper. For this, Lumsdaine and Shulman use some general transfinite constructions due to Kelly [14]. Unfortunately this approach is not suitable for the examples we consider here, as we discuss further in the next section.
Other Approaches to the Construction of Initial Algebras
In section 3.2 we gave a relatively direct proof, in place of an application of existing results from the literature. The reader might wonder why this is the case. A commonly used approach to constructing initial algebras is to use a transfinite construction. Following Garner's small object argument [11], we might try to use one of the general theorems of Kelly from [14]. However, such constructions have the disadvantage that they make essential use of transfinite colimits of ordinal indexed sequences. This means they will not work for general elementary toposes, which need not be cocomplete. This is critical here, because our examples are based on realizability toposes, which are certainly not cocomplete.
It is also difficult to simply carry out a similar transfinite construction internally in the ΠW-pretopos, since it is unclear how to formulate ordinals in the internal language in a way that the set theoretic arguments can be easily transferred.
Another possible approach would be to use an internal version of the special adjoint functor theorem as developed by Day in [8] or Paré and Schumacher in [20]. In fact Paré and Schumacher indicate in [20, Section V.2] how their result can be used to construct free algebras of certain endofunctors. However, it is unclear how to show that the pointed endofunctors here satisfy the necessary conditions to apply the internal special adjoint functor theorem. Indeed, in the paragraph at the end of loc. cit., Paré and Schumacher remark that the addition of equations makes things more problematic and suggest using in this case the more powerful results of Rosebrugh in [24]. However, Rosebrugh's proofs apply only to internal toposes of sheaves inside toposes satisfying the axiom of choice. This again would eliminate our examples based on realizability. Blass proved in [5] that some form of the axiom of choice really is necessary for Rosebrugh's result to hold, although, as with our results, it may be possible to adapt Rosebrugh's proofs to use a weak form of choice such as WISC. There is also the issue that the techniques of Rosebrugh and of Paré and Schumacher make heavy use of impredicative notions such as the subobject classifier and the assumptions of well-poweredness and cowell-poweredness, and will thus not apply to ΠW-pretoposes without further work.
Is Choice Really Necessary?
In our construction of arbitrary W-types with reductions in a ΠW-pretopos we relied on the axiom WISC. It's natural to ask whether WISC was really necessary, or whether there's a way to construct W-types with reductions without using any choice.
We saw in section 5 that using classical logic we can derive all W-types with reductions from W-types without using any choice. It might be possible to generalise this result to all categories of internal presheaves in a boolean topos.
However, we conjecture that in general there are toposes where some form of choice is strictly necessary, even just for monic polynomials with reductions.
Conjecture 9.2.
1. There is a topos with natural number object with a monic polynomial with reductions that does not have an initial algebra.
2. It is consistent with IZF that there is a monic polynomial with reductions in the category of sets that does not have an initial algebra.
Note that by theorem 3.19 we know that WISC has to fail in the above conjecture. Also by theorem 4.16 we know that the topos cannot be a category of internal presheaves over a boolean topos (and in particular cannot be boolean itself).
Applications to the Semantics of Homotopy Type Theory
The main aim of this work is towards the semantics of homotopy type theory and in particular better understanding and generalising the cubical set model of type theory. We have already seen one aspect of this, which is that W-types with reductions can be used to construct awfs's where R-algebra structures correspond to Kan filling operators (which in turn are used in the interpretation of dependent types). We note that in fact we don't need all W-types with reductions in order to do this, but only those where the map f ∘ k in (1) is a cofibration (assuming cofibrations are closed under coproduct and pullback). We'll refer to such polynomials with reductions as cofibrant.
Cofibrant W-types with reductions may also have further applications to the semantics of type theory. In [7], Coquand et al. implement higher inductive types by freely adding an hcomp operator to a type. This can be seen as a kind of weak fibrant replacement that can be phrased as a cofibrantly generated rawfs, as we developed in section 6.3. An important point is that this construction is stable under pullback, which corresponds to our notion of strongly fibred rawfs. We again notice that we only need cofibrant W-types with reductions.
The author hopes to develop these ideas further in a future paper. The following conjecture illustrates the kind of result expected.

Conjecture 9.3. Let C be a topos with natural number object. Suppose further that C satisfies all of the axioms considered by Orton and Pitts in [19]. Suppose also that initial algebras exist for all cofibrant polynomials with reductions. Then pushouts, n-truncations, set-quotients, suspensions and n-spheres can be implemented in the resulting CwF.
Algebraic Model Structures on Realizability Toposes
In section 8 we saw three examples of awfs's based on realizability. It's natural to ask whether these in fact form part of algebraic model structures (as defined by Riehl in [22]). We conjecture that this is indeed possible.
Firstly, by generalising results by Sattler in [25] the author expects it will be possible to prove the following conjectures. The status of the example in KV is less clear, but by analogy with the well known model structure on topological spaces by Strøm [27], the following conjecture might also be true.
"year": 2018,
"sha1": "a83187389ed285440cc3427222253ff1f7f72f3c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a83187389ed285440cc3427222253ff1f7f72f3c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
A prospective cohort study of the safety of breast cancer surgery during COVID-19 pandemic in the West of Scotland
Introduction: In order to minimise the risk of COVID-19 infection related morbidity and mortality for breast cancer patients, prioritisation of care has been of utmost importance since the onset of the pandemic. However, COVID-19 related risk in patients undergoing breast cancer surgery has not been studied yet. We evaluated the safety of breast cancer surgery during the COVID-19 pandemic in the West of Scotland region.
Methods: A prospective cohort study of patients having breast cancer surgery was carried out in a geographical region during the first eight weeks of the hospital lockdown, and outcomes were compared to the regional cancer registry data of pre-COVID-19 patients of the same units (n = 1415).
Results: 188 operations were carried out in 179 patients. Tumour size was significantly larger in patients undergoing surgery during hospital lockdown than before (cT3-4: 16.8% vs. 7.4%; p < 0.001; pT2–pT4: 45.5% vs. 35.6%; p = 0.002). ER negative and HER-2 positive rates were significantly higher during lockdown (ER negative: 41.3% vs. 17%, p < 0.001; HER-2 positive: 23.4% vs. 14.8%; p = 0.004). While the breast conservation rate was lower during lockdown (58.6% vs. 65%; p < 0.001), level II oncoplastic conservation was used significantly more often in order to reduce the mastectomy rate (22.8% vs. 5.6%; p < 0.001). No immediate reconstruction was offered during lockdown. 51.2% had co-morbidity, and 7.8% developed postoperative complications in lockdown. There was no peri-operative COVID-19 infection related morbidity or mortality.
Conclusion: Breast cancer surgery can be safely provided during the COVID-19 pandemic in selected patients.
Introduction
Patients diagnosed with breast cancer have been facing unprecedented challenges during their treatment since the onset of the SARS-CoV-2 (COVID-19) pandemic. Breast cancer specialists have struggled to maintain optimal breast cancer treatment for their patients in the midst of potentially compromised medical resources for cancer therapy, while minimising exposure of their patients to COVID-19 infection related risks [1].
Numerous professional bodies issued valuable recommendations to aid prioritisation of breast cancer care based on tumour biology and cancer stage, including recommendations for the surgical treatment of breast cancer in the health care crisis [2–4]. In general, upfront surgery was recommended as a priority guided by the biology and potential prognosis; therefore, triple-negative and HER-2 positive disease were deemed a priority, while primary endocrine treatment was accepted to temporise surgery in luminal-A tumours [5].
COVID-19 infection related death has been implicated to be dependent on co-morbidities, age, and anti-cancer treatment including surgery, although the extent of the contribution of these factors remains unclear due to the limited evidence available [6–12]. Specifically, COVID-19 related risk in patients requiring surgery for breast cancer has been evaluated in three studies only [7,13,14]. Therefore, we evaluated the safety of breast cancer surgery during the COVID-19 pandemic in a prospective observational study in the West of Scotland region during the first eight weeks of the United Kingdom national lockdown, and compared outcomes to the regional cancer registry data of pre-COVID-19 patients.
Methods
A prospective registry of patients who had surgical treatment for invasive or non-invasive breast cancer in the West of Scotland was created when lockdown was introduced by the Scottish Government on 23 March 2020. Patients entered in the first 8 weeks of the lockdown, between 23 March 2020 and 15 May 2020, were included in the analysis. Three NHS Scotland Health Boards participated in the audit, which was approved by the relevant clinical directors of the health boards.
The following parameters were collected prospectively: age, dates of diagnosis and surgery, perioperative risk factors (BMI, comorbidities, smoking habit, ASA grade), clinical and pathological tumour size, nodal status, subtype, grade, ER and HER-2 expression, details of neoadjuvant treatment, types of breast and axillary surgery, length of hospital stay, treatment affected by COVID-19 pandemic, COVID-19 infection rates, details of postoperative complications, unplanned hospital readmission or return to operating theatre.
This prospective cohort was compared against a cohort of patients (n = 1415) from the same region, who were diagnosed with invasive or non-invasive breast cancer between 1 January 2015 and 31 December 2015. This cohort was identified from the prospectively maintained Managed Clinical Network (MCN) database, and Caldicott Guardian approval had been gained previously [15]. Clinicopathological factors and surgical treatments were compared between the pre-COVID-19 period and the hospital lockdown period due to the COVID-19 pandemic in the same units.
During lockdown all patients were screened for possible COVID-19 infection related symptoms. In cases where COVID-19 infection was clinically suspected, patients were asked to self-isolate and surgery was postponed by a minimum of two weeks, followed by a re-assessment of the patient. In one Health Board routine preoperative COVID-19 PCR testing was introduced four weeks after the hospital lockdown, which was performed within 72 h of the date of surgical treatment, followed by self-isolation until the time of surgery. The operating hospitals were non-receiving hospitals for patients with diagnosed COVID-19 infections, including an Ambulatory Care and Diagnostic Centre facility or an independent sector hospital procured for NHS cancer surgery. These hospitals do not have a High Dependency Unit, so patients requiring emergency surgery, or those deemed as having a high anaesthetic risk, were operated on in an acute receiving hospital where patients with diagnosed COVID-19 infection were being treated. Data collection and analysis was performed using Microsoft Excel 365 Software. Statistical significance (considered as p ≤ 0.05) was calculated using the Mann-Whitney U test, Chi-Square test and Z-test for two proportions, as appropriate. 189 surgeries were carried out in 180 patients. 5 patients had two oncological surgeries, and another 4 patients returned to theatre due to postoperative complications. One patient required emergency surgery to remove an infected implant inserted 10 months earlier; this patient was excluded from the analysis.
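As a concrete illustration of the two-proportion Z-test named above, the sketch below reproduces the mechanics in Python with statsmodels (this is not the authors' code, and the counts are back-calculated from the reported percentages, so they are approximate):

```python
# Symptomatic-service diagnoses: ~64.8% of 179 lockdown patients vs.
# ~52.9% of 1415 pre-lockdown patients (counts reconstructed, approximate).
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

count = np.array([round(0.648 * 179), round(0.529 * 1415)])  # successes
nobs = np.array([179, 1415])                                 # cohort sizes
stat, pval = proportions_ztest(count, nobs)
print(f"z = {stat:.2f}, p = {pval:.4f}")
```

Because the counts are rounded back from percentages, the printed p-value is only indicative of the significance levels reported in the text.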
Results
Median age of the patients was 54 years (27–81). Date of diagnosis ranged between 31 July 2019 and 7 May 2020. 42 of the 179 patients were diagnosed during lockdown due to the COVID-19 pandemic. Almost two-thirds of the patients were diagnosed in the symptomatic service (64.8%), which was significantly higher compared to patients diagnosed in the symptomatic service before lockdown in this region (52.9%; p < 0.001) (Table 1). Breast screening had been stopped in Scotland at the start of lockdown.
Median preoperative tumour size was 25 mm (5–110). The clinical tumour size was significantly larger in patients undergoing surgery during lockdown, with 28 patients (16.8%) having cT3-4 disease, compared to patients operated on before lockdown (154 of 1415 patients, 7.4%; p < 0.001) (Table 1). This trend is reflected in the pathological tumour size, with more patients having surgery for pT2–pT4 disease during the pandemic compared to patients treated before lockdown (45.5% vs. 35.6%; p = 0.002). However, the rates of clinically and pathologically node positive disease were similar in patients who underwent surgery during lockdown compared to pre-lockdown times (cN1-3: 24.9% vs. 19.1%, p = 0.099; (y)pN1-3: 30.8% vs. 31.8%, p = 0.791). Tumour subtypes and grade were comparable in the two groups, with somewhat fewer patients undergoing surgery for DCIS and more patients undergoing surgery with G3 disease during the COVID-19 pandemic (p = 0.057 and p = 0.107, respectively). However, a sharp difference in ER and HER-2 expression was found between the two groups, with significantly more patients having ER negative and HER-2 positive disease in the COVID-19 group compared to patients operated on before the pandemic (ER negative: 41.3% vs. 17%, p < 0.001; HER-2 positive: 23.4% vs. 14.8%, p = 0.004) (Table 1). 105 (58.6%) patients had breast conservation surgery (BCS) during lockdown, of which 24 (13.4%) patients underwent level II oncoplastic breast conservation surgery, comprising an oncoplastic rate of 22.8% of all BCSs (Table 2). While the BCS rate was higher in patients operated on before the COVID-19 pandemic (65%), only 5.6% of all patients treated with BCS underwent oncoplastic surgery (Table 2). There was no immediate reconstruction carried out during lockdown, and no significant difference was found in terms of axillary surgical procedures between the two groups of patients. Length of hospital stay during lockdown was less than 24 h in 166 cases (90.2%), and of these daycase surgery was carried out in 65 cases (35.3%). A significantly higher proportion of patients received neo-adjuvant chemotherapy in the COVID-19 group compared to the patients treated before the pandemic (30.1% vs. 10.4%; p < 0.001).
For perioperative risk factors, BMI, co-morbidities, recent smoking habit and COVID-19 infection were analysed (Table 3). The median BMI of the patients was 26.3 (15–48), with 128 patients (71.5%) being at least overweight, of which 57 (35%) suffered from various degrees of obesity (Table 3). 93 patients (51.2%) had a co-morbidity, of which 29 patients (16.2%) had at least two co-morbidities documented. 27 patients (15.7%) were current smokers. Similar data for co-morbidities are not available in the MCN database, hence a direct comparison could not be carried out.
Altogether 14 patients (7.8%) developed postoperative complications, of which 6 patients (3.3%) had a major complication requiring in-hospital treatment. 4 patients returned to theatre for complications, including evacuation of haematoma and washout of an infected seroma (Table 3). Two of these four procedures were carried out in an acute receiving hospital where patients with COVID-19 infection were being treated. A further two patients required transfer to the acute receiving hospital: one of them developed postoperative hypoxia, while the other patient had delirium. Of the elective cases, four patients with significant co-morbidities were operated on in an acute receiving site (one unilateral therapeutic mammoplasty and three mastectomies).
Patient management was affected by the COVID-19 pandemic in 78 patients (43.6%) overall (Table 4). 40 patients would have been suitable for immediate post-mastectomy breast reconstruction, comprising 62.5% of all patients treated with mastectomy during the COVID-19 lockdown. Of the six patients who had unilateral therapeutic mammoplasty through a "Wise" pattern incision, five would have had immediate contralateral symmetrisation outside the pandemic. 28 patients had their neo-adjuvant chemotherapy interrupted due to the pandemic, comprising 51.8% of all patients having surgery after neo-adjuvant chemotherapy during the pandemic. Conversely, 12 patients went straight to surgery who would have been offered neo-adjuvant chemotherapy outside COVID-19. In 14 patients (7.8%) both the surgical and adjuvant treatments were affected by the pandemic (Table 4).
COVID-19 infection was suspected in five patients altogether. In two patients the preoperative imaging raised suspicion of COVID-19 infection, and surgery was delayed by two weeks, but the patients were not tested. In a further three patients postoperative COVID-19 infection was suspected. These three patients all subsequently tested negative, although one of them required transfer to an acute receiving hospital due to hypoxia. There was one patient who tested positive on routine preoperative COVID-19 testing, whose surgery was also delayed. There was no mortality and no perioperative COVID-19 infection related morbidity detected in this cohort of patients.
Discussion
Our study of 179 patients undergoing breast cancer surgery in the West of Scotland region during the COVID-19 pandemic demonstrates that surgery for breast cancer can be safely delivered in selected patients. Initial data suggested that cancer patients receiving anti-cancer treatment have a higher mortality rate if they develop COVID-19 infection. A retrospective analysis by Zhan et al. of 28 patients from Wuhan, China showed a 28.6% mortality rate, and having the last anti-cancer treatment within 14 days of the infection significantly increased the risk of mortality from COVID-19 infection [12]. Similarly, a nationwide analysis by Liang et al. showed comparable results, based on the extraction of 18 cancer patients from 1590 patients with COVID-19 infection [16]. However, more recent data by Lee et al. from the UK Coronavirus Cancer Monitoring Project (UKCCMP), which involved 800 cancer patients, found no significant effect of recent anti-cancer treatment on mortality [10]. In fact, age (>70), male gender and severe comorbidities were independently associated with mortality from COVID-19 infection [8,10]. Early data on patients with COVID-19 infection undergoing elective general surgery suggested a significantly increased mortality rate of up to 20.5%, based on the analysis of 34 patients in Wuhan, China [9]. This preliminary finding was confirmed by a large-scale international cohort study (COVIDSurg Collaborative) including 294 patients with preoperatively confirmed COVID-19 infection from a cohort of 1128 undergoing surgery [6]. In adjusted analyses, 30-day mortality was associated with male gender, age (>70), ASA grade 3-5, malignancy, and emergency and/or major surgery [6]. The COVID-19 and Cancer Consortium (CCC19) database, including 928 patients with COVID-19 infection undergoing active anti-cancer treatment, revealed that 30-day all-cause mortality is independently associated with age, male gender, and the number of comorbidities among others, but not with the type of anti-cancer therapy or recent surgery [7].

Table 1. Comparison of clinicopathological characteristics of patients treated during the COVID-19 pandemic caused hospital lockdown and outside of the pandemic in the West of Scotland. Footnotes: (1) Data were not available for 12 patients in the COVID database, and 717 patients had either cT0 or a primary tumour that was not assessed in the MCN database. (2) Data were not available for 11 patients in the COVID database, and in 26 patients in the MCN database the lymph nodes were not assessed or recorded. (3) Final pathology is awaited in 7 patients in the COVID database, and the primary tumour subtype was not assessable or recorded in 19 patients in the MCN database. (4) Grade was not assessable or not applicable in 218 patients in the MCN database. Grade, ER status and HER-2 status were determined in invasive cancer only.
There is, however, hardly any evidence available on the safety of breast cancer surgery during the COVID-19 pandemic, as the numbers of patients who had breast cancer surgery were either in single figures (Wuhan study, COVIDSurg Collaborative) or the breast cancer specific anti-cancer treatment (CCC19 database: 191 breast cancers, UKCCMP study: 102 breast cancers) was not provided [6–9].
In terms of surgical techniques, more oncoplastic breast conservations were carried out in comparison to our pre-COVID-19 practice, due to immediate breast reconstruction not being offered after mastectomy (Table 2). Oncoplastic breast conservation surgery has been shown to be a safe alternative to mastectomy and immediate breast reconstruction in selected patients, based on the combined data of the iBRA-2 and TeaM studies of 2916 patients [17]. Further, the Scottish audits of oncoplastic breast conservations indicate that oncoplastic surgery can widen the indications for breast conservation, and provide good oncological outcomes with low complication rates in our hands; hence it can be a reasonable alternative to mastectomy with immediate reconstruction [18–21]. One unit in Italy did offer immediate breast reconstruction even during the peak of the COVID-19 pandemic, as indicated by Fragetti et al., who reported 15 nipple-sparing mastectomies with immediate reconstruction done in 13 patients, although the reconstructive techniques were not disclosed [14]. In our study the higher rate of oncoplastic breast conservation surgery was partly a consequence of declined immediate breast reconstruction due to COVID-19 risks, as opposed to an elective planned choice, although it also reflects practice changes over a period of five years. Nevertheless, a very careful approach, within a framework of close collaboration between breast and reconstructive surgeons, is required to carefully select patients and reconstructive techniques to allow re-starting of immediate breast reconstructions when appropriate [2,22].
In terms of COVID-19-related risk in patients undergoing treatment for breast cancer, we found six patients of the 179 who had suspected or proven COVID-19 infection perioperatively. Corsi et al. reported on 63 patients who underwent breast cancer surgery over a five-week period in one of the breast units in Pavia (Lombardy, Italy), with only one patient being diagnosed with COVID-19 infection [23]. Similarly, Fragetti et al. reported on 85 patients who had breast cancer surgery in a four-week time period, with three patients being diagnosed with COVID-19 infection preoperatively, and a further three patients requiring a two-week delay in surgery due to suspected infection [14]. These figures imply that we need to carefully select our patients and avoid operating, if possible, on those with a relatively high COVID-19 mortality risk. The above-mentioned three large prospective cohort studies (UKCCMP, CCC19, COVIDSurg) had similar outcomes in terms of risk factors for COVID-19 related death; hence surgery should be carried out with extreme caution in patients with multiple co-morbidities, in particular those who are elderly [6–8]. There are some weaknesses of this paper, which mainly relate to the control group of patients from the MCN database. Breast surgical practice has undoubtedly changed in the last 5 years, hence a more recent cohort would have been more ideal. Due to time pressure arising from the relative urgency of these results during lockdown, such a cohort was not available in the MCN database at the time the manuscript was written. Further, we did not have co-morbidity data in the MCN database, so we could not make a comparison which would have been an important point of the study. Nevertheless, this study provides the strongest evidence about the safety of breast cancer surgery in lockdown due to COVID-19 infection, and may provide reassurance in the future if lockdown happens again.

Table 2. Comparison of breast cancer surgeries during the COVID-19 pandemic caused hospital lockdown and outside of the pandemic in the West of Scotland. Footnotes: (1) In 7 patients contralateral symmetrising reduction was carried out simultaneously. (2) In one patient bilateral mastectomy was carried out. (3) In 7 patients the WLE was carried out before the hospital lockdown, while in another patient both the wide excision and the re-excision were done during lockdown. (4) In the breast 220 patients and in the axilla 344 patients did not receive/require surgery, refused treatment, or data were not recorded. Abbreviations: TM = therapeutic mammoplasty with breast reduction technique from "Wise" pattern incision; ANC = axillary node clearance; SLNB = sentinel node biopsy; Sym. red. = symmetrising reduction; Round bl. = round block technique; LICAP = lateral intercostal perforator flap; AICAP = anterior intercostal perforator flap. In 2 cases axillary surgery only was carried out. In 28 cases no axillary surgery was carried out.
In conclusion, we have demonstrated that, in a population in whom over 50% have co-morbidities, surgery for breast cancer can be safely provided during the COVID-19 pandemic in selected patients.
"year": 2020,
"sha1": "879f6c97e3ce353eb629a7f3a47e75b12e8a5754",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thebreastonline.com/article/S0960977620302216/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "879f6c97e3ce353eb629a7f3a47e75b12e8a5754",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
On how religions could accidentally incite lies and violence: folktales as a cultural transmitter
Folklore has a critical role as a cultural transmitter, all the while being a socially accepted medium for the expressions of culturally contradicting wishes and conducts. In this study of Vietnamese folktales, through the use of Bayesian multilevel modeling and the Markov chain Monte Carlo technique, we offer empirical evidence for how the interplay between religious teachings (Confucianism, Buddhism, and Taoism) and deviant behaviors (lying and violence) could affect a folktale’s outcome. The findings indicate that characters who lie and/or commit violent acts tend to have bad endings, as intuition would dictate, but when they are associated with any of the above Three Teachings, the final endings may vary. Positive outcomes are seen in cases where characters associated with Confucianism lie and characters associated with Buddhism act violently. The results supplement the worldwide literature on discrepancies between folklore and real-life conduct, as well as on the contradictory human behaviors vis-à-vis religious teachings. Overall, the study highlights the complexity of human decision-making, especially beyond the folklore realm.
Introduction
"Le vice se cache souvent sous le manteau de la vertu." -French proverb F olklore materials offer one of the most imaginative windows into the livelihood and psychology of people from different walks of life at a certain time. These colorful narratives bring to life the identities, practices, values, and norms of a culture from a bygone era that may provide insights on speech play and tongue-twisters (Nikolić and Bakarić, 2016), habitat quality of farmers (Møller et al., 2017), and contemporary attitudes and beliefs (Michalopoulos and Xue, 2019;Thenmozhi et al., 2018). While the stories tend to honor the value of hard work, honesty, benevolence, and many other desirable virtues, many of such messages are undercut by actions that seem outlandish, morally questionable, or brutally violent (Meehan, 1994;Victor, 1990;Haar, 2005;Chima and Helen, 2015;Alcantud-Diaz, 2010. For example, in a popular Vietnamese folktale known as "Story of a bird named bìm bịp (coucal)," a robber who repents on his killing and cuts open his chest to offer his heart to the Buddha gets a better ending than a Buddhist monk who has been religiously chaste for his whole life but fails to honor his promise to the robber-i.e., bringing the robber's heart to the Buddha. In his quest for the robber's missing heart, not only does the monk never reach enlightenment, but he also turns into a coucal, a bird in the cuckoo family. On the one hand, the gory details of this story likely serve to highlight the literal determination and commitment of the robber to repentance, which is in line with the Buddhist teaching of turning around regardless of whichever wrong directions one has taken. On the other hand, it is puzzling how oral storytelling and later handwriting traditions have kept alive the graphic details-the images of the robber killing himself in the name of Buddhism, a religion largely known for its nonviolence and compassion.
Aiming to make sense of these apparent contradictions, this study uses Bayesian hierarchical analysis to analyze the behavior of Vietnamese folk characters as influenced by long-standing cultural and religious factors. Given that there is a certain degree of interaction among the elements constitutive of the three religions of Confucianism, Buddhism, and Taoism, i.e., the cultural additivity phenomenon (Vuong et al., 2018), it is reasonable to hypothesize that there may be some relationship between these religiously-imbued teachings and the universally frowned-upon acts of lying and violence. The focus on the folkloristic realm facilitates the discovery of behavioral patterns that may otherwise escape our usual intuitions. Indeed, while scholars have pointed out the prevalence of elements related to violence and lies in folktales (Meehan, 1994; Victor, 1990; Haar, 2005; Chima and Helen, 2015; Alcantud-Diaz, 2010), few have offered a rigorous statistical method to understand the interactions between these elements and their religio-cultural contexts. Thus, the present study's research method adds to the wave of studies on computational folkloristics, which currently emphasize the digitization of resources, the classification of folklore, and the necessary algorithms for data structure development rather than the statistical analysis of behavioral patterns in folktales (Abello et al., 2012; Tangherlini, 2013; Nguyen et al., 2013; Dogra, 2018; Tehrani and d'Huy, 2017). Moreover, the scope of the present research differs from the largely Euro-centric research projects (Nguyen et al., 2013; Bortolini et al., 2017a, b; d'Huy et al., 2017; Nikolić and Bakarić, 2016), as it contributes to the wave of scholarship on non-WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies by shedding light on the little-known behavioral variability and contradictions in the folklore of a developing Asian country (Henrich et al., 2010). The research thus aims to resolve the following research question (RQ): What are the credible statistical patterns for the interaction between negative behaviors (lying and violence) and the values of three major Eastern religious teachings (Buddhism, Taoism and Confucianism) in determining the outcome of folktales? In other words, what kind of outcome could we expect from the interaction of the negative behaviors with the three religious teachings?
Literature review
In order to situate the study within the literature, it is necessary to see (i) how religious teachings and religions in general are linked to deviant acts such as lying and violence, (ii) how folklore around the world has portrayed lies and violence, and (iii) what the Three Teachings in Vietnam cover.
The relationship between religions and lying or violence. Research on the relationship between religions and lying or violence highlights two notable trends, namely the debatable role of religions in restraining negative behaviors and the skewed attention toward the three great Western religions (Judaism, Christianity, and Islam) and their influence on followers' behavior. First, it is clear that the acts of lying and violence represent deviances from the acceptable moral norms regardless of the cultural and religious settings. When examined through the binary religious-secular dimension, it is widely believed that religiosity, with its emphasis on being loving, compassionate, honest, humble, and forgiving, should create changes reflecting such virtues in the behavior of religious followers. This assumption is supported by the theory of cognitive dissonance: because religious people have an internal motivation to behave consistently with their beliefs, any behaviors that are not so would result in dissonance (Perrin, 2000; Festinger, 1962). Along this line of argument, research on the role of religion frequently draws on the work of Emile Durkheim, who recognizes religion as the prime source of social cohesion and moral enforcement (Durkheim, 1897).
Yet it is not just clergymen who have had doubts about the constraining effects of religious faith, but also scholars over the ages. To make sense of the relationship between religiosity and deviant behaviors, scholars from as far back as the 1960s have sought to measure how church membership or religious commitment could deter delinquent activities, though pieces of empirical evidence over the years remain inconclusive (Albrecht et al., 1977; Rohrbaugh and Jessor, 1975; Tittle and Welch, 1983; Hirschi and Stark, 1969). In their influential study, Hirschi and Stark (1969) ask if the Christian punishment of hellfire for sinners can deter delinquent acts among firm believers, and surprisingly find no connection between religiosity and juvenile delinquency. Subsequent studies tend to fall along two lines, either confirming the irrelevance of religion to deviance (Tittle and Welch, 1983; Welch et al., 2006; Cochran and Akers, 1989), or pointing out a certain inhibiting effect of religiosity depending on the type of religious context (Evans et al., 1995; Corcoran et al., 2012; Benda, 2002; Rohrbaugh and Jessor, 2017). Additional studies have looked at religious contexts beyond the WEIRD (Western, educated, industrialized, rich, democratic) countries, such as in South Korea and China, but also reached inconsistent results on the religiosity-deviance relationship (Wang and Jang, 2018; Yun and Lee, 2016).
Second, in the scholarship on the relationship between religious teachings/commitment and misconduct, the spotlight has largely been on Western monotheistic religions and their punitive supernatural systems. Although non-religious people's moral attitudes and behaviors can be drawn from their experiences and interactions with religious others (Sumerau and Cragun, 2016), the formulation of moral identity is a complex process involving conceptualization of the self over different developmental stages (Wainryb and Pasupathi, 2015). As such, one need not define religion merely as a "belief in spiritual beings" but should include the in-between spaces of spirit and non-spirit (E. B. Tylor, as cited in Day et al. (2016)). This definition gives room for studying the influence of semi-religious teachings or folk religion in countries where the word religion itself does not evoke the same sentiment or understanding. In this study, Confucianism, Buddhism, and Taoism can be regarded either as religions or loosely as religious teachings, for they do not posit an almighty supernatural being but instead focus more on the cultivation of certain sets of virtues. The details of the Three Teachings are explained below.
The portrayal of lies and violence in folklore. The acts of hurting or killing one another are common images in folklore and religious narratives around the world (Meehan, 1994; Victor, 1990; Haar, 2005; Chima and Helen, 2015; Houben and van Kooij, 1999). This is attributable to the role violence plays in human storytelling: as a story device, it gives voices to both the offenders and the victims (Sandberg et al., 2015), and serves interactional and recreational purposes (Coupland and Jaworski, 2003). How people tell their stories, lying or being honest, not only reflects but also allows us to grasp the intertwining nature of values, identities, and cultures (Sandberg, 2014). For example, Victor (1990) shows that rumors about the satanic cult, which are rooted in the mythologized ancient blood ritual and Satan's conflict with God, often arise during periods of intense social stress and cultural crisis. Similarly, the high amount of violent terms and actions in the Grimms' fairy tales has been shown to be tied up with power and social status in the construction of the self (Alcantud-Diaz, 2010). In a different study, Haar (2005) analyzes a number of motifs in Chinese witch-hunt stories, such as the consumption of adult human body parts, children, and fetuses, to illustrate the force of the anti-Christian movements and the interplay of folkloric fears and political history in China. This finding is supported by Tian (2014) when looking at the Tianjin Missionary Case of 1870.
In contrast to the wealth of studies on violence in folklore, there is scant research on the act of lying and its implications. This is surprising given how prevalent lying is across folk cultures. Lying tales make up a category of their own within the folktales of Thailand (MacDonald and Vathanaprida, 1994). Research studies that touch on this topic are understandably centered around the themes of honesty/dishonesty and moral development in storytelling (Kim et al., 2018; MacDonald, 2013). In a rare approach that examines the semantics of lies, one study compares the function of lies in folktales to a prosthesis in the domain of discourse, such that the use of lies transforms the story system from horizontal to vertical, and hence the action plan to meta-action (Towhidloo and Shairi, 2017).
This survey highlights a gap in the literature on the interactions of different religions or religious teachings with deviant behaviors such as lying and killing in folklore. Even beyond the folkloristic realm, findings also remain inconclusive on the relationships between lying/cheating and religion (Bruggeman and Hart, 1996; Rettinger and Jordan, 2005; Mensah and Azila-Gbettor, 2018), as well as between violence and religion (Purzycki and Gibson, 2011; Atran, 2016; Henrich et al., 2019; Blogowska et al., 2013; Graham et al., 2016). In other words, while all religions stress the need to cultivate virtues such as loyalty, reciprocity, honesty, and moderation, how these virtues are practiced in reality is not universal across cultures. What is equally noteworthy is how certain vices, e.g., lying and violence, are portrayed and tolerated in different parts of the world. An analysis of the cultural history of South Asia has revealed the development of arguments that seemingly rationalize violence, turning violence into non-violence over the course of millennia (Houben and van Kooij, 1999). One example the authors point out is the glorification of the gods and goddesses who have committed the most extreme forms of violence.
Thus, while extant research has not confirmed the relationship between religious teachings and lying and/or violence, the interplay of these two variables may be different, and better understood, when looking through folklore, a colorful window into folk psychology. This is where the current study fits in: through the case of Vietnam, it looks at the two universal acts of violence and lying in storytelling to shed light on the influence of traditional religions on folkloristic behaviors.
The Three Teachings in Vietnam. Before delving in further, it is important to note that the Vietnamese word tôn giáo is not equivalent to its English translation religion, which is derived from the Latin root religio meaning 'to bind together' or 'to reconnect' (Durkheim, 1897). The Vietnamese word has its origin in the Chinese word zongjiao (宗教), which was imported from Japan (shukyo 宗教) in the first decade of the twentieth century (Casadio, 2016, p. 45). The word, comprised of zong as in "divisional lineage" and jiao as in "teaching," encompasses the praxis and doctrine of religion (Casadio, 2016). The word religion in Vietnamese can be interpreted as a "way of life" (the Chinese dao 道) or "teaching" (教) (Tran, 2017). In practice, Vietnamese popular religion involves ancestor and deity worshiping, exorcism, spirit-possession, etc. (Cleary, 1991; Toan-Anh, 2005; Kendall, 2011; Tran, 2017). For this reason, the present study uses "the Three Teachings" to avoid the religious connotations and to instead refer to their influence on both lifestyle and traditional philosophies.
The fundamental contents of the Three Teachings are presented in Table 1. The details of these religions can be found in Vuong et al. (2018).
As summarized above, the Three Teachings share the cultivation of moral character but differ in the process and its end goal. For Confucianism, the process centers around building harmonious relationships with other members of society and sustaining the societal structures. For Taoism, the emphasis is instead on protecting one's relationship with nature, keeping the natural flow of life, to the point that one may detach oneself entirely from society. For Buddhism, the key to enlightenment (nirvana) is to understand the nature of reality: that life is suffering because one is ignorant of the impermanent nature of things. None of the Three Teachings explicitly forbids lying, though Buddhist teachings do hold "Do not kill" as the first precept.
Materials and method
Folktales encoding scheme. This paper analyzes the outcomes associated with the lying and violent behaviors of the main characters in selected Vietnamese folktales, as well as the association of the Three Teachings with said behaviors. First, we encode the details of 307 Vietnamese folk stories into 345 binary data points; all the variables are binomially distributed, and they are presented as blue nodes in Fig. 1. For further details of the coding system, see Vuong et al. (2018). For example, for simplicity's sake, if the main character in a story believes in the law of karma of Buddhism, and lies, and still succeeds in the end, then the coding will be "VB" equals 1, "Lie" equals 1, and "Out" equals 1. The details of the stories concerning Confucianism and Taoism are encoded similarly (Vuong et al., 2018; La and Vuong, 2019). We are also interested in whether external intervention from either humans ("Int1") or the supernatural ("Int2") might influence the story's outcome. After encoding the 307 folktales, to find out credible statistical patterns for the outcome when the main character lied or committed violent acts while, at the same time, behaving in line with certain core values of the Three Teachings (RQ), the following multi-level varying-intercept model is constructed. We then perform the Bayesian Markov chain Monte Carlo (MCMC) analysis based on the multilevel model in Fig. 1. Concretely, we set out to measure the probability of an outcome for a character, given any intervention thereof, and the religious values ("VB", "VC", "VT") and the actions ("Lie" and "Viol") to which he or she is committed (RQ). The measurement was conducted by applying Bayesian MCMC estimation with 5000 iterations, 2000 warm-ups, and four chains on the model. Then, to check for goodness-of-fit, we use the Pareto smoothed importance-sampling leave-one-out cross-validation (PSIS-LOO) approach (Vehtari et al., 2017).

Table 1. The fundamental contents of the Three Teachings.
Introduction to Vietnam: Confucianism (Nguyen, 1998); Taoism: 2nd century AD from China (Xu, 2002); Buddhism: 1st or 2nd century AD from India (Nguyen, 1985; Nguyen, 1993; Nguyen, 1998; Nguyen, 2008; Nguyen, 2014).
Peak development: Confucianism: Neo-Confucianism grew from the 15th century to its peak in the 19th century during the Nguyen dynasty (Nguyen, 1998, p. 93); Taoism: from the 11th to 15th centuries, during the Ly and Tran dynasties (1010-1400) (Xu, 2002); Buddhism: 11th century during the Ly dynasty (Nguyen, 2008, p. 19).
Core teachings: Confucianism: three moral bonds, three obediences, five cardinal virtues, four virtues (one set for women and another set in general); Taoism: letting the natural flow of life, searching for longevity and immortality, and spiritual healing, which gets mixed into Vietnamese popular religious beliefs (Tran, 2017, p. 13); Buddhism: the Four Noble Truths and the Eight-fold Path.
Core concepts: Confucianism: moral conduct (đức 德), benevolence (nhân 仁), loyalty (trung 忠), wisdom (trí 智), filial piety (hiếu 孝), chastity or purity (tiết 節), righteousness (nghĩa 義), propriety (lễ 禮), integrity/faithfulness (tín 信); Taoism: "effortless action" (vô vi or wuwei 无为), "spontaneity" (tự nhiên 自然); Buddhism: karma (nghiệp), the spiritual principle of cause and effect, which determines the cycle of reincarnation.

Fig. 1. The Primary Model: the primary model of evaluating the influence of the Three Teachings ("VB", "VC", "VT") on lying ("Lie") and violent behavior ("Viol") of main characters based on the outcome of the folktales.
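To make the model shape concrete, below is a minimal runnable sketch of such a varying-intercept Bayesian logistic model in Python with PyMC. This is an illustration rather than the authors' code; the column names follow the codings above, the priors are our own assumptions, and the data frame is a random stand-in to be replaced with the real story codings.

```python
import numpy as np
import pandas as pd
import pymc as pm

rng = np.random.default_rng(0)
cols = ["Lie", "Viol", "B_and_Lie", "C_and_Lie", "T_and_Lie",
        "B_and_Viol", "C_and_Viol", "T_and_Viol"]
# Toy stand-in data so the sketch runs end to end; replace with real codings.
df = pd.DataFrame(rng.integers(0, 2, size=(345, len(cols) + 2)),
                  columns=cols + ["Int1_or_Int2", "O"])

with pm.Model() as primary:
    # One intercept per intervention group: without / with intervention.
    a = pm.Normal("a_Int1_or_Int2", mu=0.0, sigma=5.0, shape=2)
    b = pm.Normal("b", mu=0.0, sigma=5.0, shape=len(cols))  # behaviour slopes
    logit_p = a[df["Int1_or_Int2"].values] + pm.math.dot(df[cols].values, b)
    pm.Bernoulli("O", logit_p=logit_p, observed=df["O"].values)
    idata = pm.sample(draws=5000, tune=2000, chains=4, random_seed=0)
```

The varying intercept a_Int1_or_Int2 gives each of the two intervention groups its own baseline log-odds of a good outcome, while the slopes capture how lying, violence, and their combinations with the Three Teachings shift that outcome.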
There are several advantages of Bayesian multilevel modeling. First, the method helps formalize the use of background knowledge to make more realistic inferences about a particular problem. With multilevel or hierarchical modeling, this idea is taken to another level, where simultaneous analyses of individual quantities are performed (Spiegelhalter, 2019). Second, it reflects the approach of "mathematics on top of common sense" (Scales and Snieder, 1997): Bayes' theorem makes no assumption about an infinite amount of posterior data; all observations are probabilistic, depending on prior distributions, and can be updated by conditioning on newly-observed data (Gill, 2002; Kruschke, 2015; McElreath, 2016). The approach is thus especially helpful to the social sciences, where there are various conflicting research philosophies. Third, past studies in the psychological and ecological sciences have demonstrated the effectiveness and flexibility of multilevel modeling. For example, Doré and Bolger found that data on the impacts of stressful life events on well-being are best fit with a varying curve model rather than a varying slope or a varying intercept model, which allows different people to show a wide range of trajectories in life satisfaction surrounding a negative life event (Doré and Bolger, 2018). A seminal study by Vallerand shows that a hierarchical model of extrinsic and intrinsic motivation not only generates a framework to organize the literature on the subject, but also new and testable hypotheses (Holman and Walker, 2018; Vallerand, 1997).
Model construction. This section deals with the details of the model construction process. First of all, Fig. 1 is a logic map of the causal relationships between the different (levels of) variables and the outcome ("Out"), and this is a multi-level varying-intercept model. To evaluate the influence of the Three Teachings ("VB", "VC", "VT") and negative behaviors ("Lie" and "Viol") on the outcome of the stories, we join the Three Teachings variables with the negative behaviors variables to create transformed data. To evaluate whether the outcome of a story is changed because of an intervention, whether from the supernatural ("Int1") or humans ("Int2"), we combine the two observation variables into one new transformed variable. In Fig. 1, the transformed data are represented as green nodes, and the relation for transformed data is represented using dashed-line arrows.
As overfitting is a common issue among models with more parameters (McElreath, 2016), we compare the performance of the Primary Model in Fig. 1 with the following less complex models: (i) a basic model consisting of "O", "Viol", and "Lie", whose formula is presented in Eq. 2; (ii) a model which investigates the interplay of violence and religious teachings only (the Violence Model, in short; Eq. 3); (iii) a model which investigates the interplay of lying and religious teachings only (the Lie Model, in short; Eq. 4). To execute the model comparison, we deploy the Pareto smoothed importance-sampling leave-one-out cross-validation (PSIS-LOO) approach (Vehtari et al., 2017), and compare the weights of the models by computing (1) the WAIC weights, (2) Pseudo-BMA weights without Bayesian bootstrap, (3) Pseudo-BMA+ weights with Bayesian bootstrap, and (4) Bayesian stacking weights (Yao et al., 2018; Vehtari and Gabry, 2019).
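A sketch of this comparison step with the ArviZ library is given below; it is an illustration, not the authors' code, and it assumes `idata_primary` and `idata_basic` are InferenceData objects for two fitted candidate models with pointwise log-likelihoods stored (for example via `pm.compute_log_likelihood` after sampling as in the earlier sketch).

```python
import arviz as az

# PSIS-LOO comparison with Bayesian stacking weights (Yao et al., 2018);
# method can also be "pseudo-BMA" or "BB-pseudo-BMA" for the other weights.
cmp = az.compare({"primary": idata_primary, "basic": idata_basic},
                 ic="loo", method="stacking")
print(cmp[["rank", "elpd_loo", "p_loo", "weight"]])

# Pareto k diagnostics: k < 0.5 counts as "good", 0.5 < k <= 0.7 as "ok".
loo = az.loo(idata_primary, pointwise=True)
print(float((loo.pareto_k < 0.5).mean()))  # share of good estimates
```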
The visualizations of the models can also be seen in the Supplementary file (Figs. S1-S2).
Analysis results
Technical validation

Convergence diagnostics. After running the MCMC analyses on all four models, the two standard diagnostics demonstrate good convergence: all Rhat values equal 1, and all values of n_eff (effective sample size) are above 1,000. The detailed summaries of the models are presented in the Appendix. Visualizations of the convergence of the Markov chains, the autocorrelation coefficients, and the Gelman shrink factor can be viewed in the Supplementary file (Figs. S3-S10).
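For reference, these checks correspond to a few lines of ArviZ (again illustrative, assuming `idata` is the InferenceData object from the sampling sketch in the Methods section):

```python
import arviz as az

# Rhat should be close to 1.0 and bulk effective sample size above 1,000.
summ = az.summary(idata)
print(summ["r_hat"].max(), summ["ess_bulk"].min())

az.plot_trace(idata)      # mixing of the Markov chains
az.plot_autocorr(idata)   # autocorrelation coefficients per chain
```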
Model comparison. Next, using the PSIS-LOO approach, we present the results of the model comparison process in Table 3. As can be seen in Table 3, the Basic Model performs the best, which is expected due to the straightforward nature of the model. The model fits the data well, with all Pareto k estimates being good (k < 0.5). When it comes to the more complex models, the PSIS-LOO results for the Lie Model and the Violence Model are comparable, both with 99.7% good Pareto k estimates.
Meanwhile, as expected from the complexity of the Primary Model, it performs the worst, with 99.1% of the Pareto k estimates being good (k < 0.5) and 0.3% being ok (0.5 < k ≤ 0.7). The visualizations of the Pareto k values for all the models are presented in the Supplementary file (Figs. S11-S14). The PSIS-LOO results clearly indicate that the goodness-of-fit of each model decreases as its complexity increases.
To further verify the probability of each model, we calculate four different types of weight for each model and rank the models accordingly (see Table 4).
It becomes clear from Table 4 that the Primary Model is the least probable, as it ranks fourth in all indicators, similar to the PSIS-LOO results. As the Primary Model includes many more parameters than the other three, it poses the risk of overfitting. The computed model weights demonstrate as much. However, it should be noted that in two indicators, Pseudo-BMA with Bayesian bootstrap and Bayesian stacking, there is still considerable weight for the Primary Model (0.128 and 0.122, respectively).
The Basic Model should be considered the most probable, given that it ranks the best in all categories except for Bayesian stacking. Here, it is notable that in this category, the weight of the Basic Model is almost similar to the weight of the Violence Model (0.217 and 0.216, respectively). The Violence Model consistently ranks third in all categories. Finally, the Lie Model ranks first in the Bayesian stacking and ranks second in all other indicators.
In sum, the Basic Model and the Lie Model should be considered highly probable, as they both have consistently high weights in all categories. The Violence Model is less likely than the previous two; however, it still has considerable weight in three out of four categories. The Primary Model is the least probable; yet it also has over 12% weight in Pseudo-BMA with Bayesian bootstrap and in Bayesian stacking, the two types recommended by Yao et al. (2018). Given such evidence, the Violence and the Primary Model should not be dismissed. Moreover, considering the current paper investigates how the interplay of negative behaviors (violence and lying) with religious teachings could influence the outcome of folktales, the two models are still useful in helping us make sense of such complex social phenomena. Thus, we choose to present and compare the results of all models, providing a caveat about the overfitting risk of the two least probable models.
Interpreting the results
Assessing the regression coefficients. First, let's take a look at the results of the most credible model, the Basic Model, which is presented in Fig. 2.
It is clear that violent and lying behaviors tend to bring about negative outcomes for the characters of the folktales, with the coefficients distributed almost exclusively in the negative range. This tendency is also found in all other models, as can be seen in Figs. 3 and 4.
Second, it can be noted that there is a clear similarity in the posterior distributions of the coefficients involving lying in the Primary Model and the Lie Model. Figure 3a, b both show that, generally, lying does not bring about good outcomes for the main character. The coefficient b_Lie_O in each model is distributed almost entirely in the negative range, similar to the results of the Basic Model in Fig. 2.
However, the trend is more complicated when we consider the interaction of lying with religious teachings. It can be seen in Fig. 3 that Confucianism seems to be the most tolerant of lying. The coefficient b_C_and_Lie_O in each model is distributed almost entirely in the positive range. When there is the influence of Confucianism, it seems more likely that the main character enjoys a good outcome, though he or she might lie. For the interaction with Buddhism and Taoism, the Primary Model and the Lie Model both show an ambivalent effect of lying.
Concerning the effect of violent acts on the outcome of stories, the results of both the Primary and Violence Model indicate that violence tends to produce bad endings for the main characters (Fig. 4). Similar to the result in the case of the Basic Model (Fig. 2), the coefficient b_Viol_O in Fig. 4 is mostly negatively distributed.
Again, when considering violence together with the Three Teachings variable, the effect is not straightforward. In both Fig. 4a, b, the distribution of coefficients b_C_and_Viol_O and b_T_and_Viol_O overlaps both the positive and negative range, suggesting the interplay of Confucian and Taoist values with violence has ambiguous effects on story outcomes. On the contrary, the distribution of b_B_and_Viol_O falls almost exclusively within the positive range. The result suggests that when Buddhist values are included, the main character can commit a violent act and still be likely to have a good outcome.
In Figs. 3 and 4, when the full range of coefficient values is taken into consideration, there are cases where the interaction of negative behaviors with all Three Teachings constitutes positive endings for the main characters of the folktales. The pairwise parameter comparison figures from the Primary Model below underline this pattern clearly (Fig. 5).
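Statements such as "distributed almost entirely in the positive range" can be made concrete by computing the share of posterior draws above zero for each coefficient. The sketch below assumes a brms/rstan-style fit object and the coefficient names shown in the figures; both are hypothetical.

```r
# Minimal sketch: posterior sign probabilities for selected coefficients
draws <- as.matrix(fit_primary)   # one column of MCMC draws per parameter (assumed)

prob_positive <- function(x) mean(x > 0)

coefs <- c("b_Lie_O", "b_Viol_O", "b_C_and_Lie_O",
           "b_B_and_Viol_O", "b_C_and_Viol_O", "b_T_and_Viol_O")
sapply(coefs, function(p) prob_positive(draws[, p]))
# Values near 1 (or 0) indicate mass almost entirely in the positive (or negative)
# range; values near 0.5 correspond to the ambiguous effects discussed above.
```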
Variable definitions:
B_and_Viol: The main character behaves according to the core values of Buddhism yet commits violent acts.
C_and_Viol: The main character behaves according to the core values of Confucianism yet commits violent acts.
T_and_Viol: The main character behaves according to the core values of Taoism yet commits violent acts.
Int1_or_Int2: There exists an intervention in a story of either the supernatural or the humans in the stories. This is the only varying intercept variable in the model as it helps evaluate the outcome of a folktale in two cases: with and without intervention.
Comparing situations with and without intervention. The respective distributions of coefficients a_Int1_or_Int2[1] and a_Int1_or_Int2[2] have the same curve pattern, as can be seen from Fig. 6. This suggests that when there are external interventions, improvements in main character behaviors compared to when there are no interventions are negligible. In other words, the ending of a story does not depend significantly on the nature of external interventions. The detailed results of the Interventional coefficients of all models can be seen in the Appendix.
Discussion
Limitations and technical implications. The present study is not exempt from limitations. First, given that the dataset only covers Vietnamese folktales, one should be cautious about generalizing the results. Data on folktales of different countries, regions, and cultural settings should be collected and analyzed to move beyond the country-specific nature of this study. Besides, the folktales could be subjected to different methods of coding; for example, the variable "violence" can be further broken down into "murder", "assault", etc. Limitations aside, the use of Bayesian multilevel modeling and the MCMC method of data simulation in this study has enabled us to efficiently analyze the complex relationships among different variables, such as the religious teachings/values and the acts of lying and violence, when the amount of data is limited. Studies on Vietnamese folklore, though insightful in their own right, have remained qualitative and touched on themes such as femininity-masculinity (Nguyen, 2002; Do and Brennan, 2015), folk medical practices (Du, 1980), or psychological mindedness (Nguyễn et al., 1991). Here, the computational method suggests there is fertile ground for novel and interdisciplinary quantitative research on culture, religion, ethics, sociology, and many other social science disciplines (Peels and Bouter, 2018; Pedersen, 2016).
Implications for the three teachings of Confucianism, Buddhism, and Taoism. This study asks what outcome, statistically, can be expected from the interaction of negative behaviors and the values of the Three Teachings in folktales. One of the most striking results of this study is how, in stories with Buddhist teachings, acts of violence are often linked to a positive outcome. This finding brings us back to the opening Buddhist story of the monk who turns into a coucal and the murderous robber who gets salvation. It also resonates with what Houben and van Kooij (1999) once pointed out, i.e., the prevalence of violence and the rationalization of violence in South Asian folklore, particularly in stories influenced by Buddhism and Hinduism. To make sense of the underlying mechanism behind this statistical pattern, we suggest examining which values in the Three Teachings are most observed by laypeople. The tolerance for violence can be explained by the emphasis on karma in Buddhism, a concept that might be loosely interpreted as "what goes around, comes around" or "an eye for an eye". Broadly speaking, the law of karma maintains a sort of cosmic justice according to which all crimes are punished and deeds are rewarded, suitably, in the long run (Gombrich, 1975), including from one lifetime to another. While one may argue that the karmic doctrine is much more intricate than this and varies according to the teachings of different branches of Buddhism, this simplistic view is more likely to be how the principle was perceived by the average person. Another way to interpret the findings is that violence can serve as a stark contrast with the purity and beauty of Buddhist-espoused virtues such as compassion, renunciation, and tranquility.
Another interesting finding is how, in stories with Confucian teachings, characters who lie tend to enjoy desirable outcomes, statistically speaking. The propensity for lying is attributable to the need in Confucianism to preserve social order and to be loyal and pious toward one's king and kin. In historical terms, Confucianism first started as a remedy to the chaotic, violent times of the Spring and Autumn and Warring States periods. As the Chinese teaching prioritizes social order, understandably it would be antithetical to the conspicuous display of violence and more accepting of lying as the price paid in practice. That being said, the acceptance of such behavior could still present a philosophical and practical dilemma, especially in the case of Confucianism, which boasts sets of concrete moral rules. An example of how the Confucian doctrine rationalized passive actions such as lying by omission is a story in which the loyalty of the son to his father and the duty of a man to his ruler are pitted against each other if the father transgresses the law (Bi and D'Agostino, 2004). From a practical point of view, the prevalence of Confucian principles in a culture could factor into social interactions, such as when the standard practice of obtaining informed patient consent by medical professionals was complicated by Confucian ethics (Fan, 2000).
Implications for folkloristic and behavioral studies. The findings add to the literature on discrepancies between folklore and real-life conduct that can be found throughout the world, in the stories of the Grimm brothers, the folktales of the Native American Zuni, or even as far back as the Greek mythologies, in which the gods were known to commit many crimes that mortals would be punished for.
Lies and violence are only two among many actions found in folktales, but both are considered prohibited or shocking in real life. In an analysis of the functions of folklore, Bascom (1954) relates such socially unacceptable acts to the role of day-dreams or wish-fulfillment in traditional oral narratives. Similarly, in the context of this study's findings, given that all the stories were passed down through the oral tradition, they presumably do not evoke the true ideals of the Three Teachings, but instead reflect the psychology and understanding of the people at the time. What is highlighted here is a glaring double standard in the interpretation and practice of the teachings: the very virtuous outcomes being preached, whether that be compassion and meditation in Buddhism or societal order in Confucianism, appear to accommodate two universal vices, namely violence in Buddhism and lying in Confucianism. Attempts to make sense of contradictory human behaviors have pointed out the role of cognition in belief maintenance and of motivated reasoning in discounting counterarguments (Kaplan et al., 2016; Bersoff, 1999). When it comes to religion, an individual's tolerance of contradictory religious teachings is not due to lower rationality standards but rather due to how such teachings fit the "inference machinery" in a plausible manner (Boyer, 2001). This study takes a step further by showing how "happy endings" can still be accepted as a plausible outcome for deviant behaviors such as lying or committing violence. Generations have passed down these folktales, orally and then in written forms, without ever questioning the incongruence between the upholding of Buddhist, Taoist, or Confucian values on one hand, and the absence of punishment on the other. Even if the findings were limited to empirical evidence exclusively within the folkloristic realm, one ought not to shy away from the troubling implication: the promotion of an ends-justify-the-means mentality when the acceptance of values counter to one's beliefs is correlated with positive outcomes.
Conclusion
The present study, through the Bayesian analysis of 307 Vietnamese folktales, has reached two notable conclusions. First, folktale characters who commit either lying or violence face negative outcomes in general (Figs. 2-4), but there is a mixed result when the religious values are taken into consideration. In particular, lying by characters associated with Confucianism tends to bring about a positive outcome (Fig. 3). Second, characters whose behaviors are associated with Buddhism tend to have a happy ending even when they commit violence (Fig. 4). When we consider the full range of all the coefficients, the statistical pattern indicates there can be cases where the interaction of the deviant acts with all Three Teachings would lead to positive consequences (Fig. 5).
These findings, first and foremost, speak to the psychology and understanding of the storytellers of the time. As Bascom has succinctly put it: "Here, indeed, is the basic paradox of folklore, that while it plays a vital role in transmitting and maintaining the institutions of culture and in forcing the individual to conform to them, at the same time it provides socially approved outlets for the repressions which these same institutions impose upon him." (Bascom, 1954, p. 349) Moreover, they could also raise questions about the double standard when people interpret and practice the teachings of Buddhism, Confucianism, and Taoism. Although the Three Teachings preach that one ought to cultivate moral character, it seems in folktales, as well as in life, followers of such teachings are no exception to the two universal vices: lying and violence. Moreover, in certain cases, these vices may bring about positive outcomes, as detected by the statistical technique deployed in this study. Such contradiction calls into question the complexity of human decision-making, especially beyond the folklore realm. | 2023-02-23T14:27:29.283Z | 2020-05-04T00:00:00.000 | {
"year": 2020,
"sha1": "1431bbbfdabdb14bbc306d134a3664d68c663d12",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41599-020-0442-3.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "1431bbbfdabdb14bbc306d134a3664d68c663d12",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
248867888 | pes2o/s2orc | v3-fos-license | How to ensure full vaccination? The association of institutional delivery and timely postnatal care with childhood vaccination in a cross-sectional study in rural Bihar, India
Incomplete and absent doses in routine childhood vaccinations are of major concern. Health systems in low- and middle-income countries (LMIC), in particular, often struggle to enable full vaccination of children, which affects their immunity against communicable diseases. Data on child vaccination cards from a cross-sectional primary survey with 1,967 households were used to assess the vaccination status. The association of timely postnatal care (PNC) and the place of delivery with any-dose (at least one dose of each vaccine) and full vaccination of children between 10-20 months in Bihar, India, was investigated. Bivariate and multivariable logistic regression models were used. The included vaccines targeted tuberculosis, hepatitis B, polio, diphtheria/pertussis/tetanus (DPT) and measles. Moreover, predictors for perinatal health care uptake were analysed by multivariable logistic regression. Of the 1,011 children with card verification, 47.9% were fully vaccinated. Timely PNC was positively associated with full vaccination (adjusted odds ratio (aOR) 1.48, 95% confidence interval (CI) 1.06-2.08) and with the administration of at least one dose (any-dose) of polio vaccine (aOR 3.37 95% CI 1.79-6.36), hepatitis B/pentavalent vaccine (aOR 2.11 95% CI 1.24-3.59), and DPT/pentavalent vaccine (aOR 2.29 95% CI 1.35-3.88). Additionally, delivery in a public health care facility was positively associated with at least one dose of hepatitis B/pentavalent vaccine administration (aOR 4.86 95% CI 2.97-7.95). Predictors for timely PNC were institutional delivery (public and private) (aOR 2.7 95% CI 1.96-3.72, aOR 2.38 95% CI 1.56-3.64), at least one ANC visit (aOR 1.59 95% CI 1.18-2.15), wealth quintile (Middle aOR 1.57 95% CI 1.02-2.41, Richer aOR 1.51 95% CI 1.01-2.25, Richest aOR 2.06 95% CI 1.28-3.31) and household size (aOR 0.95 95% CI 0.92-0.99). The findings indicate a correlation between childhood vaccination and timely postnatal care. Further, delivery in a public facility correlates with the administration of at least one dose of hepatitis B vaccine and thus protects against a zero-dose status. Increasing uptake of timely PNC, encouraging institutional delivery, and improving vaccination services before discharge from health facilities may lead to improved vaccination rates among children.
Introduction
Differences in childhood vaccination coverage notably contribute to the emergence of inequalities in child mortality and morbidity [1][2][3]. To achieve a proper immunization status, a minimum number of vaccination doses is necessary. Insufficient vaccination doses lead to a low immune response and pose uncertainty about specific immunities [4]. To assure herd immunity and disrupt the transmission of vaccine-preventable diseases (VPD), the population vaccination coverage needs to be high [5]. Under-vaccination and missing vaccinations remain a common public health problem. In 2019, 19.7 million children under the age of one received either no routine vaccinations or an incomplete number of doses. Most of those children reside in low- and middle-income countries (LMIC) [5,6]. In India, where this study is located, approximately 38% of children have not completed their vaccination schedule [7].
This study investigates perinatal services as potential determinants of children's vaccination status in the North Eastern Indian state of Bihar. It explores the association of children's vaccination with health care services (HCS) accessed at and after delivery, namely the place of delivery and timely postnatal care (PNC). These two indicators represent opportunities to encourage parents to complete their children's vaccination schedule. Because individual vaccines are likely to differ in supply, vaccination rates, and number of required doses, both full vaccination status and any vaccination uptake of individual vaccines are investigated. In addition, predicting factors for institutional delivery and timely PNC attendance are studied to better understand the usage patterns of healthcare services.
The WHO defines postnatal care as care given to newborns and mothers immediately after birth and during the first 42 days of life and recommends at least four PNC visits, one of them within 24 hours after birth [18,19]. Management of infections and timely vaccination are an essential part of proper PNC [20]. Further, delayed early vaccination was found to be associated with under-vaccination [21,22]. This suggests that the provision of early perinatal services like timely PNC is associated with vaccination coverage. In India, districts with low ANC and PNC rates as well as lower rates of skilled birth attendance were found to have lower rates of full vaccination [8,13,14].
The state of Bihar reports particularly low vaccination rates which lie below the national average. In 2016, less than two-thirds (62%) of all children between 12-23 months received full vaccination against all six major preventable diseases (tuberculosis, pertussis, diphtheria, tetanus, polio and measles), despite the availability of these vaccines free of charge for all children [7].
Moreover, only 64% of mothers who gave birth during the last five years had a PNC check within two days. PNC coverage is the highest for deliveries in private health facilities (81%), followed by deliveries in public health facilities (69%), and the lowest for home deliveries (37%) [23]. In Bihar, during the last five years about three quarters (76%) of births took place in a health care facility, with numbers rising steadily [23]. Given the crucial relevance of the birth setting and the care that follows, and the emerging trend toward the use of HCS, it is worth exploring possible HCS quality improvements and their association with subsequent vaccination outcomes.
Few studies investigated the association of perinatal services and vaccination outcomes [8, 11-14, 16, 17]. This study can add to the existing knowledge on the relationship of perinatal services with children's vaccination rates based on novel data from Bihar, India.
Ethics statement
The study was approved by the ethics committee of the University of Göttingen on October 26th 2016 and the Indian Institute of Technology Gandhinagar. Each participant signed a written informed consent before the start of the interview.
Study setting
The study uses primary data from a survey of recent mothers in Madhepura district, Bihar state, India. The data was collected between November and December 2016 for an endline survey of a randomised controlled trial investigating the impact of a participatory learning and action approach program on health, nutrition, and sanitation outcomes [24]. The sample size was deduced from power calculations in order to ensure enough statistical power for the rigorous evaluation of the trial.
Study design and participants
Out of Madhepura's thirteen sub-districts (blocks), six were chosen and 68 from a total of 95 gram panchayats were randomly sampled. A gram panchayat is a cluster of villages that constitute a local government body's jurisdiction. The 68 gram panchayats comprised 180 villages. 40 villages, in which lists of pregnant women were missing, were excluded. In the remaining 140 selected villages, 1,967 households which were listed in a pregnancy register in 2015 (in local mother-child-centers: Anganwadi centers) were surveyed. The number of households sampled, per village, ranged from 5 to 49, depending on the village size. In 2016, 1,612 households with a child born recently, between 10 and 20 months of age, (average of 16 months) were revisited. The child had to live in the surveyed household to be included for this study. Exclusion criteria were death of the child and uncompleted pregnancy (miscarriage, abort, still birth). 166 children had died by 2016. Most of the attrition was caused by families having migrated or not being at home at the time of the survey. Of the remaining, 1442 participants were able to provide any type of information about children's vaccination status and the questions of all included covariates. Of those, 1,011 households were able to present vaccination cards and were therefore eligible for the main analysis.
Inclusivity in global research
Additional information regarding the ethical, cultural, and scientific considerations specific to inclusivity in global research is included in the S2 File.
Study variables
Table 1 summarizes the national immunization schedule in India. The first doses of tuberculosis (BCG), hepatitis B, and polio vaccines are administered at birth and are followed by the first dose of DPT vaccine at the age of six weeks. BCG is a single-dose vaccine. Hepatitis B and polio require three vaccination doses with a spacing of four weeks. In January 2015, the pentavalent vaccine (haemophilus influenzae type b, DPT, hepatitis B) was introduced in Bihar. However, most children in the sample still received separate doses of hepatitis B and DPT vaccines. At nine to twelve months of age, the single-dose measles and Japanese encephalitis vaccines are administered. The outcomes of the analysis, children's full vaccination status and any-dose vaccination, were defined based on WHO recommendations for the minimum number of vaccination doses [25] and the Indian national vaccination schedule:
1. Fully vaccinated: takes the value "1" if the child received one dose of BCG vaccine, at least three doses of polio vaccine, at least three doses of hepatitis B vaccine, at least three doses of DPT vaccine, and at least one dose of measles vaccine [26]. If the child missed one or more vaccine doses, the outcome takes the value of "0". One or more pentavalent vaccine doses account for an equivalent number of hepatitis B and DPT vaccination doses.
2. Any dose: was defined individually for each vaccine against the following diseases: tuberculosis (BCG), polio, hepatitis B, DPT and measles. It takes the value of "1" if the child received at least one vaccination dose against the respective disease and takes the value of "0" if no dose was received.
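As an illustration of these definitions, the two outcomes could be derived from per-vaccine dose counts as in the R sketch below. The data frame and column names (vac, bcg_doses, penta_doses, etc.) are hypothetical, and the original analysis was carried out in Stata.

```r
# Minimal sketch: outcome coding from dose counts on the vaccination card
library(dplyr)

vac <- vac %>%
  mutate(
    # pentavalent doses count toward both hepatitis B and DPT
    hepb_total = hepb_doses + penta_doses,
    dpt_total  = dpt_doses  + penta_doses,
    fully_vaccinated = as.integer(
      bcg_doses >= 1 & polio_doses >= 3 &
        hepb_total >= 3 & dpt_total >= 3 & measles_doses >= 1
    ),
    # any-dose indicators, one per vaccine
    any_bcg     = as.integer(bcg_doses >= 1),
    any_polio   = as.integer(polio_doses >= 1),
    any_hepb    = as.integer(hepb_total >= 1),
    any_dpt     = as.integer(dpt_total >= 1),
    any_measles = as.integer(measles_doses >= 1)
  )
```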
The two explanatory variables of interest are place of delivery and timely PNC. Place of delivery was coded into three mutually exclusive dummy variables for home delivery, delivery in a private institution and delivery in a public institution based on self-reports of mothers. Timely PNC was defined as having received a postnatal check-up within 24 hours by a doctor, Auxiliary Nurse Midwife (ANM) or General Nurse Midwife (GNM).
All estimations controlled for characteristics of the mother, socio-cultural and economic household characteristics, and characteristics of children. Mothers' characteristics were maternal age (14-23 years, 24-35 years, > 35 years), education level (no schooling, primary school, secondary school or higher), a dummy variable assessing if the mother was involved in the decision making regarding her child's health care (maternal involvement), and membership in a self-help group. All determinants except the latter are known to be associated with childhood vaccination [8,[27][28][29] and to be correlated with the place of delivery and PNC [30][31][32][33][34][35]. Membership in a self-help group was included due to the survey design. Household variables included were household size, health insurance, the religion of the household (Hindu, Non-Hindu), and household wealth. Although caste is of reasonable relevance, it was not included in our model due to the lack of bivariate association with outcome variables in previous base data set analyses. Wealth quintiles were based on an index generated from principal component analysis of household assets and housing quality. Household amenities, durable goods, and assets were measured, scored, and subsequently classified into quintiles ranging from poorest to richest. Questions concerning electricity, toilet facility, household assets (chair, table, bed) and vehicles (bicycle, motorcycle, cart), livestock and land ownership, as well as ownership of electronic devices (radio, mobile phone, land line, refrigerator, watch, electric fan), type of fuel for cooking, roofing and flooring materials were included. Children's characteristics included were sex, age (12 months and younger, > 12 months), and number of older siblings.
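A minimal sketch of such an asset-based wealth index, assuming the asset and housing indicators are already coded numerically in a data frame called assets; the original construction was done in Stata and may differ in detail.

```r
# Minimal sketch: wealth score from the first principal component, cut into quintiles
pca <- prcomp(assets, center = TRUE, scale. = TRUE)
wealth_score <- pca$x[, 1]

wealth_quintile <- cut(wealth_score,
                       breaks = quantile(wealth_score, probs = seq(0, 1, 0.2)),
                       labels = c("Poorest", "Poorer", "Middle", "Richer", "Richest"),
                       include.lowest = TRUE)
table(wealth_quintile)
```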
Data source and measurement
All information was recorded by trained, local enumerators during structured interviews using electronic questionnaires with the mother of the child in each household. Information about specific vaccinations of the child for each recommended vaccine in the national immunization schedule was obtained from documented evidence by vaccination or health cards. The common Mother Child Protection card (MCP) established in Bihar allows the tracking of perinatal health and is available at all points in the health system [36]. If the vaccination or health card was not available, responses from maternal recall were recorded.
Bias
Sample selection bias may affect the external validity of the results. The survey only included registered pregnant women and the estimation sample was restricted to observations with vaccination records that were documented in a vaccination or health card. To test the extent of bias of the latter estimation sample restriction, a sensitivity analysis including all children, i.e. with documented vaccination evidence and vaccination evidence from maternal reports, was conducted (Tables E-G in S1 File).
Data analysis
A cross-sectional study design was used to determine the association of perinatal HCS with full and any-dose vaccination. A bivariate model was fitted for each covariate and outcome variable. All variables with a p-value of less than 0.1 were included in a multivariable logistic regression model. To avoid multicollinearity among the explanatory variables, a Pearson's R correlation statistic with a cutoff of r > 0.5 was computed for all pair combinations of covariates (Table A in S1 File). Adjusted odds ratios (aOR) were estimated using multivariable models to identify the association of timely PNC and the place of delivery with full and any-dose vaccination. A p-value < 0.05 was considered statistically significant. For each outcome, three models were estimated, introducing all variables in a stepwise fashion. In the first model, the primary explanatory variables were introduced (Table B in S1 File). In the second model, covariates were added (Table C in S1 File), and in the third model, fixed effects at the block (sub-district/administrative unit) level were included to control for regional characteristics in the six different blocks (Table D in S1 File). Standard errors are clustered at the panchayat level. In the sensitivity analysis, the estimation was conducted with the full sample and an indicator for the availability of a vaccination card was included as a covariate (Tables E and F in S1 File).
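The stepwise model building and panchayat-clustered standard errors could look as follows in R; the original analysis used Stata, and all variable names, as well as the presence of a panchayat identifier in the estimation data, are assumptions for illustration.

```r
# Minimal sketch: stepwise logistic models with cluster-robust standard errors
library(sandwich)  # vcovCL for clustered variance
library(lmtest)    # coeftest

# Model 1: primary explanatory variables only
m1 <- glm(fully_vaccinated ~ timely_pnc + delivery_place,
          data = dat, family = binomial)

# Model 2: add covariates retained from the bivariate screen (p < 0.1)
m2 <- update(m1, . ~ . + anc_any + child_sex + child_age + older_siblings +
               mother_age + mother_edu + maternal_involvement + shg_member +
               hh_size + insurance + religion)

# Model 3: add block fixed effects
m3 <- update(m2, . ~ . + factor(block))

# Adjusted odds ratios with standard errors clustered at the panchayat level
ct <- coeftest(m2, vcov = vcovCL(m2, cluster = ~ panchayat))
exp(ct[, "Estimate"])
```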
To better understand the drivers of timely PNC and institutional delivery, a multivariable logistic regression analysis was conducted using previous contact with the health system, socio-economic, maternal, and household characteristics as predictors. The same maternal and household-level variables were used as in the regression on the vaccination status, and a dummy for belonging to a Scheduled Caste, Scheduled Tribe, or so-called Other Backward Caste was added. This analysis includes all children, irrespective of their vaccination evidence.
All analyses were conducted using the statistical software Stata 13 (StataCorp LLC, College Station, TX, USA).
Sample characteristics
The descriptive characteristics of the study population are presented in Table 2. About 85% of children belong to Hindu households. Approximately 21% of children belong to the privileged General Caste category, 79% either to historically disadvantaged Scheduled Castes, Scheduled Tribes or to so-called Other Backward Castes. Households were divided into five wealth quintiles. These wealth quintiles showed differences regarding the vaccination evidence, with the poorest quintile being the most prominent among participants with a lack of vaccination evidence (24.0%). About one fifth (21.7%) of all households had health insurance. Roughly three-quarters (74.5%) of the children's mothers never went to school or did not complete primary school and 51% of mothers were between 24-35 years of age. Slightly more than half of the children surveyed were male (53%). Approximately two thirds (64%) of all mothers were involved in decision making concerning their children's health care. Table 2 further shows that 53% of all mothers received ANC at least once. 68% of the children were born either in private or public health facilities. 47% of children and mothers received a check-up within 24 hours after birth.
1,011 participants with vaccination cards were eligible for the main analysis. Several variables had some missing data (birth setting 0.49%, timely PNC 5.14%, ANC 1.48%, child's sex 0%, age of the child 0%, age of the mother 1.98%, education of mother 1.48%, involvement of mother in health care decision 0.20%, number of older siblings 5.34%, household size 0%, wealth quintile 5.43%, religion 0.99%, insurance 0.20%, self-help-group 1.19%). Observations with incomplete information in outcome and explanatory variables were removed prior to the analysis by listwise deletion in Stata. The final estimation sample comprised 809 observations.
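The complete-case restriction could be expressed as in the sketch below, assuming a hypothetical data frame card_sample holding the 1,011 card-verified observations; the original step was performed by listwise deletion in Stata.

```r
# Minimal sketch: keep only observations with complete outcome and covariate data
vars <- c("fully_vaccinated", "timely_pnc", "delivery_place", "anc_any",
          "child_sex", "child_age", "mother_age", "mother_edu",
          "maternal_involvement", "older_siblings", "hh_size",
          "wealth_quintile", "religion", "insurance", "shg_member")
dat <- card_sample[complete.cases(card_sample[, vars]), ]
nrow(dat)   # 809 observations in the paper's final estimation sample
```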
Descriptive statistics for full vaccination and any-dose vaccination
Vaccination status of children according to the source of information and for the overall study population is shown in Table 3 for all relevant vaccines under investigation. Comparing the vaccination coverage from the vaccination card and from maternal recall shows a substantially lower vaccination rate for children with vaccination card verification, than without. The difference is particularly salient for hepatitis B and polio vaccines. This divergence might be driven by the quantities of vaccines required, which challenge an accurate recall. In the following analysis, we therefore focus on the study sample with card verification.
The vaccination status breakdown by vaccine for the main analytical sample is depicted in Fig 1. The graph distinguishes between fully vaccinated, any-dose (but not all) and not vaccinated. Looking at vaccines separately, BCG showed the highest vaccination coverage with 97.6%. The lowest coverage for full vaccination was found for the hepatitis B vaccine with 54.6%. However, a comparably large proportion of children was partially vaccinated against hepatitis B (30%). 87% of all children received at least one measles vaccine dose. Half of all surveyed children were fully vaccinated. Less than 1% of the children did not receive any vaccination at all.
Associations of vaccination coverage with place of delivery and timely PNC
Bivariate statistics showed that the historic caste category and wealth quintile were not associated with full or any-dose vaccination status. All other control variables (ANC, sex and age of the child, number of older siblings, mother's age and education, maternal involvement, selfhelp-group membership, household size, health insurance and religion) showed a significant association with full or any-dose vaccination and were introduced to the multivariable logistic regression model. Wealth showed significant association with perinatal health care uptake ( Table 5) and was thus included in the multivariable regression model. Pearson's R correlation statistic did not show collinearity among the included variables (Table A in S1 File).
Results of the logistic regression model are displayed in Table 4. It shows results based on a model with all covariates clustering at the panchayat level. Timely PNC was statistically significantly and positively associated with children's full vaccination status: children who received timely PNC had higher odds (aOR 1.48, 95% CI 1.06-2.08) of being fully vaccinated. Further, children with timely PNC had higher odds of receiving any dose of vaccine plans with multiple doses, namely polio (aOR 3.37 95% CI 1.79-6.36), hepatitis B (aOR 2.11 95% CI 1.24-3.59) and DPT vaccine (aOR 2.29 95% CI 1.35-3.88). There was no statistically significant
association between timely PNC and BCG or measles vaccination. The estimation results showed no association between the place of delivery and full vaccination status. However, children born in a public facility had higher odds of being vaccinated at least once against hepatitis B (aOR 4.86, 95% CI 2.97-7.95). Unadjusted estimates, estimates including block dummies and results for the full sample of children with vaccination evidence from maternal recall or the vaccination card can be found in Tables B and G in S1 File. The sensitivity analysis confirmed the results of the main analyses.
Several covariates showed associations with the outcome variables. The associations of previous health care contacts, socio-economic, maternal and household characteristics with the place of delivery and timely PNC uptake are subsequently shown in Table 5. Contact with health care providers during pregnancy in the form of at least one ANC visit was significantly positively associated with timely PNC (aOR 1.59 95% CI 1.18-2.15) and institutional delivery in private (aOR 2.38 95% CI 1.56-3.64) and public facilities (aOR 2.70 95% CI 1.96-3.72). Institutional birth was significantly positively and strongly associated with timely PNC (aOR 2.38 95% CI 1.56-3.64 for public facilities, aOR 2.70 95% CI 1.96-3.72 for private facilities). Children from wealthier and smaller families had higher odds of receiving timely PNC. Delivery in a private facility was significantly positively associated with the two uppermost wealth quintiles (aOR 4.99 95% CI 1.80-13.82 and aOR 6.06 95% CI 2.25-16.30) and living in a Hindu household (aOR 2.21, 95% CI 1.14-4.28). Wealth, mother's education, and small family size were insignificantly positively associated with deliveries in public institutions. Maternal age, maternal involvement, self-help group membership, belonging to a less disadvantaged caste, and the number of older siblings were not associated with the place of delivery or timely PNC.
Discussion
Community-based data from 809 participants living in rural Bihar state in India was used to examine the associations of institutional birth and timely PNC with full and any-dose vaccination. The results showed that timely PNC was significantly positively associated with children's vaccination status. This contributes to the evidence on the association of PNC and children's vaccination status and to our knowledge, is the first study to specifically examine the association of timely PNC.
The vaccination coverage for BCG (96.4%) was very high, suggesting a broad level of HCS access. The low levels of literacy, as well as the distributions of age, religion, and caste were in line with district wide statistics [7,37]. The proportion of mothers who received timely PNC (46.8%) was comparable to the district statistics (urban 52.6%, rural 41.1%) [7].
Vaccination coverage
The full vaccination coverage in the study sample (52.5%) was lower than estimates for Bihar based on the National Family and Health Survey in 2015/2016 (61.7%). The vaccination rates of BCG and polio were slightly higher in the study sample than in the aforementioned survey and lower for hepatitis B and DPT (BCG 91.6%, polio 72.9%, hepatitis B 65.5%, DPT 80.1%) [7]. Because the pentavalent vaccine was introduced in January 2015, the supply chain may not have been set up completely at the time of the survey and supply shortages may have caused low rates of DPT and hepatitis B vaccination [38]. This might also be the reason for DPT and hepatitis B vaccine showing the largest discrepancies between card verification and maternal recall in coverage estimates.
Birth setting
Children born in a public or private facility were not more likely to have been fully vaccinated than children born at home. However, the odds of receiving at least one dose of hepatitis B were significantly increased when birth took place in a public facility in comparison to home deliveries. The distinct result for hepatitis B might be due to the novel introduction of hepatitis B to the Indian Universal Immunization Program (UIP). It was introduced in 2011 to the Indian UIP in Bihar and at the time of the survey lower rates compared to other vaccines were commonly observed [38,39]. The main reasons for the low hepatitis B coverage were found to be poor stock management, perceived high costs, fear of wastage of the vaccine, and insufficient knowledge about the vaccination schedule among health workers [40].
Previous evidence suggests that public facilities possibly perform better than private facilities in the administration of vaccines [8]. This is in line with the presented data for the hepatitis B vaccine, showing that, in contrast to delivery in public facilities, delivery in a private facility did not protect from zero-dose vaccination compared to home deliveries. Reasons for this might be the missing governmental mandate to ensure vaccination for the poor in private facilities; policies or financial incentives for adequate vaccination practices are not present. Supply in public facilities, i.e. of hepatitis B or pentavalent vaccine, might be better than in private facilities [41].
Other studies showed that institutional delivery is positively associated with full vaccination [12,15,42]. In this study a strong positive and significant association between institutional delivery and vaccination status was not observable. Yet, none of the mentioned studies controlled for timely PNC. This implies that timely PNC might be an essential pillar of adequate service delivery in health care facilities directly after birth.
The choice of place of delivery was seen to be highly correlated with initial contact to health workers through ANC and household wealth. These findings are in line with studies from Nepal, Bangladesh, Pakistan and other parts of India [43][44][45][46]. Hence, although there was no direct association between ANC and vaccination status, there potentially could be an indirect association through institutional delivery. It underscores the importance of early contact of health workers with pregnant women, especially those coming from a low-income background.
Postnatal care
Timely PNC showed a positive and significant association with full vaccination status and a significant positive association with vaccination against individual diseases which require multiple doses of vaccine: polio, hepatitis B and DPT. There was no association with any-dose of BCG or measles vaccination. The lack of association with BCG might be due to the overall high coverage rate of close to 100%.
The results add to the scarce literature on the association between vaccination coverage and PNC [8,11,13,14,16,17]. Vaccinations are an essential component of timely PNC and PNC visits offer an opportunity to parents to learn about vaccination. Research showed that most women in Bihar who had their children vaccinated were motivated and supported by health workers. Mothers are often seen to rely on information given to them during their commonly sparse interactions with health care facilitators [47]. During PNC visits, health workers share information regarding the benefits, relevance and the schedule of routine vaccinations. Greater awareness of the child's health issues among mothers who were provided with (ANC and) PNC was documented [20,42]. Moreover, PNC visits might offer an opportunity for counselling on uncertainties concerning vaccinations. Motivation and information provided in PNC meetings possibly strengthens follow-up in the vaccination schedule.
This would suggest that not only the mere health care facility availability but also quality of service delivery, like timely PNC, matter for uptake and long-term motivation regarding vaccination. Prior research in low-and high-income countries found that delayed early vaccination is associated with under-vaccination [21,22,48]. Our findings suggest that timely PNC could be a reasonable intervention, in the form of a systematic program, to target early vaccination in order to improve full protective vaccination coverage. Implementing structured standardised timely PNC visits might be an effective and efficient measure for HCS quality improvement. Considering that facilitated birth tripled during the last ten years in Bihar, leveraging it to improve routine care in health care facilities could substantially help increase vaccination coverage rates [23].
Further analyses are needed to investigate differences in vaccination service delivery by provider and how they contribute to the heterogeneity in vaccination rates as well as the quality of PNC in health care facilities and during home visits [49].
Given the crucial role that timely PNC plays for complete childhood vaccination, it is important to note that the odds of receiving timely PNC increase not only with previous contact with the health system in the form of ANC or institutional delivery, but also with household wealth, thus confirming previous findings from the study region [45,46,[50][51][52]. Children growing up in poor households are less likely to be protected against the morbidity and mortality caused by childhood communicable diseases, adding to their overall disadvantage in life.
Confounding variables
The results of the confounding variables showing association with the vaccination status are well in line with existing evidence. Beliefs shape vaccination behaviour of parents and religion is highly predictive for children's vaccination status. More specifically, children from Hindu households show higher odds for a full vaccination status [8,9]. Moreover inequity in childhood vaccinations by sex was found all over India, with most prominent imbalances in Bihar [9]. Those can be explained by strong preferences for sons over daughters in the study area [53]. Further past studies have shown that resource allocation to children is strongly linked to decision making power of women in the household [54]. Whereas the effect on the nutritional status of the child is a global phenomenon, the effect on the vaccination status was primarily found in South Asia [29]. Our study confirms this finding once more by showing a strong correlation of female empowerment and completed vaccination schedule.
Additionally, wealth is a known predictor of vaccination status: children with low socioeconomic status are less likely to be fully vaccinated [9]. Often, low-quality antenatal, delivery and postnatal care is primarily provided to people from less wealthy households [30,32,35,55]. In conjunction, this may lead to impaired provision of vaccination evidence.
Limitations
The following limitations need to be noted. First, only pregnant women registered at an Anganwadi center were included in the study, which potentially affected the representativeness of the population at risk and may have led to a sample selection bias. According to the National Family and Health Survey in 2015/2016, around 70% of Bihari women hold a mother-child health card, indicating registration at public health facilities [7]. Our sample therefore excludes families without access to the public health system, or without motivation or knowledge of the benefits of a registered pregnancy. Second, about 20% of the initial sample was lost in the follow-up survey. While temporary absence of key household members and migration is probably unrelated to health care behaviour, child survival is probably positively linked to PNC, institutional birth and vaccination of children. This survival bias might hence cause our results to be a lower-bound estimate of the true relationship between place of delivery, PNC and vaccination status. Third, the main analysis only covers children with vaccination evidence based on vaccination cards. Mothers' self-reporting bears the risk of socially desired reporting and difficulty of remembering specific vaccines. The validity of this method is uncertain [56]. For this reason, we only use recall vaccination information as a robustness check. This skews the sample slightly toward the more affluent, non-Hindu households, as Table 2 shows. A sensitivity analysis including children with mothers' self-reported vaccination evidence in the sample, though, confirmed the main results of the restricted sample. However, the lack of a 'gold standard' for the recording of the vaccine status is a recurring challenge affecting coverage estimates in LMIC. Last, given the nature of a cross-sectional study design, we are unable to draw conclusions about the causal relationship of the place of delivery and timely PNC with children's vaccination status.
The study shows the difficulty of capturing accurate vaccination information from households, when no administrative data or other official records are available. Even though card verification, which documents the date of each vaccine reliably is a cost efficient and easy tool to keep track of children's vaccination status, the cards are made of paper and tend to wear.
Producing them from more robust material might help the cards survive multiple years. The option to use digital records instead of the printed card could also help to keep vaccination information available. Multiple rounds of data collection to ensure more accurate recall, or collaboration with local health facilities or vaccination teams, could also improve data quality. Data imputation is a further method which could be considered when dealing with vaccination estimates [56].
Conclusion
This cross-sectional study provides evidence for a positive association of institutional birth and timely PNC with vaccination coverage of infants and children. These findings suggest that health care system quality improvements with respect to vaccination coverage should consider timely PNC at initial health care provider contacts. Timely PNC visits can be a relevant measure for sharing vaccination information and motivating uptake. Institutional health care providers should offer the opportunity for early postnatal services to women of all socio-economic groups and follow up on their attendance. Considering confounding factors, district health offices should stress the importance of vaccinating all infants at institutional health care providers independently of their sex or religious background. Differences in the quality of PNC services and their effect on parents' motivation, particularly in the case of non-vaccination, require further investigation. | 2022-05-19T15:23:42.137Z | 2022-05-17T00:00:00.000 | {
"year": 2022,
"sha1": "bbc18011b540921187286508576668fe8c53b52b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/globalpublichealth/article/file?id=10.1371/journal.pgph.0000411&type=printable",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f2e2bfd10ed00a239cad9b81ffe257b314a5b0cc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247361485 | pes2o/s2orc | v3-fos-license | Observational study of medical marijuana as a treatment for treatment‐resistant epilepsies
Abstract Objectives Medical cannabis formulations with cannabidiol (CBD) and delta-9-tetrahydrocannabinol (THC) are widely used to treat epilepsy. We studied the safety and efficacy of two formulations. Methods We prospectively observed 29 subjects (12 to 46 years old) with treatment-resistant epilepsies (11 Lennox–Gastaut syndrome; 15 with focal or multifocal epilepsy; three generalized epilepsy) who were treated with medical cannabis (1THC:20CBD and/or 1THC:50CBD; maximum of 6 mg THC/day) for ≥24 weeks. The primary outcome was change in convulsive seizure frequency from the pre-treatment baseline to the stable optimal dose phase. Results There were no significant differences during treatment on stable maximal doses for convulsive seizure frequency, seizure duration, postictal duration, or use of rescue medications compared to baseline. No benefits were seen for behavioral disorders or sleep duration; there was a trend for more frequent bowel movements compared to baseline. Ten adverse events occurred in 6/29 patients; all were transient and most were unrelated to study medication. No serious adverse events were related to study medication. Interpretation Our prospective observational study of two high-CBD/low-THC formulations found no evidence of efficacy in reducing seizures, seizure duration, postictal duration, or rescue medication use. Behavioral disorders and sleep duration were unchanged. Study medication was generally well tolerated. The doses of CBD used were lower than in prior studies. Randomized trials with larger cohorts are needed, but we found no evidence of efficacy for two CBD:THC products in treating epilepsy, sleep, or behavior in our population.
Introduction
Treatment-resistant epilepsy (TRE) affects 30% of epilepsy patients and can progressively impair neurological function and quality of life. It can also cause injury and death through sudden unexpected death in epilepsy (SUDEP), other seizure-related consequences (e.g. drowning), and indirect consequences (e.g. metabolic disorders from medication side effects). Among young adult TRE patients, epilepsy kills 3%-6% per decade. 1,2 Most TRE patients suffer ongoing seizures despite disabling side effects from multiple anti-seizure medications (ASMs). There is a dire need for new therapies exploiting novel mechanisms of action.
Cannabis (marijuana) species contain more than 500 compounds, including the cannabinoids delta-9-tetrahydrocannabinol (THC) and cannabidiol (CBD). THC is the main psychoactive compound and CBD is the main non-psychoactive compound in cannabis. Other cannabinoids and non-cannabinoid molecules in cannabis have diverse biologic activities. 3 Artisanal cannabis strains with high CBD:THC ratios gained widespread media attention for treating children with TREs such as Dravet Syndrome (DS) and Lennox-Gastaut Syndrome (LGS). 4,5 Small unblinded trials used different combinations of CBD, THC, and other cannabinoids, terpenes and flavonoids, but rarely documented the precise contents, purity, or consistency of the products. A good manufacturing process (GMP) quality high-CBD/low-THC (50:1) formulation had comparable safety and efficacy in an open-label study to treat seizures in Dravet syndrome 6 as a 99% CBD formulation with <0.3% THC in open-label and randomized controlled trials. 2,5 Randomized, placebo-controlled trials have established the safety and efficacy of a 99% CBD formulation (Epidiolex®) to treat convulsive seizures in Dravet Syndrome, drop seizures in Lennox-Gastaut, and seizures in Tuberous Sclerosis Complex, 2,5,7 leading to approval by the US Food and Drug Administration.
Cannabinoids, terpenes, flavonoids, and other compounds vary by strain and may work independently, antagonistically, or synergistically to produce different beneficial and adverse effects. 8,9 One recent preclinical study supported synergistic anti-seizure efficacy of CBD and THC in a kindling model. 9 We assessed the tolerability and efficacy of low-THC/high-CBD (1 T:20C and/or 1 T:50C) formulations to treat convulsive seizures for children and adults with diverse TREs.
Methods
This prospective observational study was conducted at The Center for Discovery (TCFD), a residentially based program located in upstate New York. IRB approval was obtained to collect and analyze data on patients who were certified for medical cannabis (MC). For patients who lacked intellectual capacity, legal guardians consented. Per the New York State (NYS) Medical Marijuana Program, a registered physician completed the NYS Department of Health (NYSDOH) Medical Marijuana course and registered with the NYSDOH Medical Marijuana Program. Per DOH guidelines, enrolled patients were then certified by the registered physician to receive a NYS-approved MC formulation (Columbia Care, LLC). After certification, patients were registered with NYS and designated TCFD as a facility caregiver. The program has a high ratio of clinical staff to patients and routinely collects detailed data on all residents on neurological, medical, and behavioral/lifestyle factors.
Twenty-nine patients were observed over ≥24 weeks (maximum 9 months) after a 90-day pre-intervention baseline phase. Inclusion criteria included enrollment in the residential program, treatment-resistant childhood-onset epilepsy, and all of the following: (a) TRE, i.e., failure to control seizures despite an appropriate trial of two or more anti-seizure medications (ASMs) at therapeutic doses; (b) video-EEG characterization of current seizures; and (c) during pre-baseline, ≥1 convulsive (atonic, tonic, tonic-clonic, or focal motor) seizure per month over a 90-day consecutive period (absence and myoclonic seizures were not counted) (Table 1).
Patient history
Patients failed an average of 10 ASMs. All 29 patients had convulsive seizures. Five patients were treated with a vagus nerve stimulator (VNS): one active and four explanted or inactive. Six patients used the ketogenic diet, and four patients had neurosurgery.
Psychiatric medications, if used, were stable for 4 weeks prior to enrollment. The one patient with an active VNS had stable settings for ≥90 days. Exclusion criteria included any of the following: (a) epilepsies associated with progressive or neurodegenerative diseases (e.g., neuronal ceroid lipofuscinosis, progressive myoclonus epilepsies, Rasmussen encephalitis, and tumors); (b) epilepsies associated with an inborn error of metabolism (e.g., mitochondrial disorders); and (c) felbamate initiated within the past 6 months. Others were excluded if they had extended hospital stays with interruption of MC therapy. The MC certified for all patients was ClaraCeed, a low-THC/high-CBD (1 T:20C) tablet, and/or ClaraCeed Ultra (1 T:50C), which, as per protocol, included a maximal dose of 6 mg THC per day regardless of ratio. Ingredients included: Microcrystalline Cellulose, Dicalcium Phosphate, Silicon Dioxide, Magnesium Stearate, Talc, Sodium Starch Glycolate, Fractionated Coconut Oil MCT. All patients in this analysis were started on 2 mg/day of THC of the low-THC/high-CBD (1 T:20C) preparation. In all cases, patients included in the study analysis were increased by 2 mg/day of THC every 7 days up to a maximum dose of 6 mg/day of THC of the 1:20 tablet. Patients were on 3 mg of THC and 60 mg of CBD twice daily on study day 15. They were then observed by their clinical team on the dose determined to provide the best balance of efficacy and tolerability.
If the efficacy after full titration of the 1 T:20C was incomplete by week 24, but study medication was well tolerated, patients were moved to a higher CBD:THC ratio, 1 T:50C, and titrated to a maximal dose of 6 mg/day of THC of the 1 T:50C (i.e. 6 mg/day of THC and 300 mg/day of CBD), as tolerated. Safety and tolerability evaluations were conducted at baseline, and every week for the first month or until maximal tolerated dose was achieved, then once a month through the end of the study period. Caregivers and clinical staff routinely document seizure frequency, duration, use of rescue medications, and recovery duration. All data were entered into the patient's record. Patients were also assessed for secondary factors including concomitant ASM levels a minimum of three times during the study period at weeks 1, 12, and 24. The concomitant ASMs remained stable for the first 4 weeks before study onset, except for Felbamate, which was required to be stable for 6 months before initiation. Figure 1 is a flow diagram of the study subjects.
Data records
Twenty-nine patients were observed for a 90-day preintervention review for seizure frequency of at least one countable (i.e. convulsive) seizure per month. Preintervention data included 90 days of seizure history. Mean monthly convulsive seizure frequency was 16.1 (range, 1-106). Pre-intervention baseline data for all subjects included primary and secondary outcome measures over 90 days before first dose of study medication included: (1) seizure type, (2) seizure frequency, (3) seizure duration, (4) rescue medication use, (5) postictal duration, (6) nightly sleep data, (7) bowel movement data, (8) maladaptive behavior data for patients who with behavioral problems and a behavioral intervention plan, (9) blood pressure, pulse and weight, (10) neurological examination, and (11) complete blood counts and comprehensive metabolic tests.
During the study period, seizure type, date, time, duration, intervention use (e.g., PRN/Rescue Medication, ASMs, etc.), and postictal duration were documented for each seizure. TCFD routinely collects data on sleep and bowel elimination for each 24-hour period. All antiseizure medications administered including PRNs were recorded in the medical record. For patients with preexisting behavioral intervention plans, daily behavioral data were recorded; behavioral data were not recorded for those without a behavioral intervention plan. Vital signs, physical, and neurological examinations were conducted at each physician visit. In-person physician visits were held once weekly from the first administration of MC through the achievement of the maximal dose, and biweekly thereafter through a combination of in-person and telephonic visits.
Statistical Analysis
Software
All analyses were completed using the R programming language version 4.0 10 and the RStudio integrated development environment. 11 Data cleaning was performed using the dplyr 12 and tidyr 13 packages. Graphics were created using the ggplot2 package. 14 Normality was assessed using histograms and measures of spread (i.e., skew and kurtosis). Data were not normally distributed and required nonparametric analyses. Seizure data were analyzed using a Wilcoxon rank sum test due to the small number of patients and the non-normal distribution.
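As a concrete illustration of the analysis pipeline described above, the sketch below checks distribution shape and applies a Wilcoxon rank sum test to two phases of hypothetical per-patient seizure counts. The counts are placeholders, not study values, and the study's own analysis was run in R rather than Python.

```python
# Minimal sketch of the nonparametric comparison described above.
# The seizure counts are hypothetical placeholders, not study data.
from scipy.stats import ranksums, skew, kurtosis

baseline_counts = [16, 4, 25, 7, 106, 1, 12, 9, 30, 5]   # hypothetical monthly counts
maximal_counts  = [14, 5, 20, 7,  98, 0, 13, 8, 33, 4]   # hypothetical monthly counts

# Assess distribution shape first, as the study did with histograms and spread.
print("skew:", skew(baseline_counts), "kurtosis:", kurtosis(baseline_counts))

# Wilcoxon rank sum test comparing the two phases.
stat, p = ranksums(baseline_counts, maximal_counts)
print(f"statistic = {stat:.2f}, p = {p:.3f}")
```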
Phases
This study included three phases: (1) pre-treatment; (2) titration; and (3) stabilization on the maximal dose predetermined using a 1 T:20C formula. The pre-treatment phase comprised the 90 days before the first dose of the study medication. The titration phase comprised the period from the first administration of the study medication until the maximal tolerated dose was determined. The stabilization phase comprised the first 90 days following achievement of the maximal dose, unless the patient was removed from treatment before this time. The maximal dose was the most stable dose utilized during the study period on a 1 T:20C formula, that is, the dose of CBD and THC identified as most effective and best tolerated during the titration period (Fig. 1).
We compared the pre-treatment phase and the stable maximal dose phase. The titration phase was excluded due to the variable dosing required for each patient.
Outcomes
The primary outcome was the change in convulsive seizure frequency from the pre-treatment baseline to the stable maximal dose phase. Secondary outcomes were seizure duration and seizure recovery time, recorded in minutes. Routinely kept records of rescue medication use, behavior, sleep, and bowel elimination were analyzed as additional measures of treatment efficacy.
The average dose per subject during the maximal dose phase was 5.9 mg THC and 117.2 mg CBD; in weight-adjusted terms, this was 0.11 mg/kg/day for THC and 2.3 mg/kg/day for CBD.
Primary Outcome: Figure 2 summarizes the percent change in convulsive seizure frequency, and Figure 3 summarizes the percent change in the duration of seizures and postictal periods. The median change from the baseline to the maximal dose period was −5.3% (IQR = −39% to 28.6%); there was no statistically significant difference in average seizure frequency between the maximal dose and pre-intervention phases using a Wilcoxon rank sum test (p = 0.99). Six patients (21%) had a ≥50% reduction in convulsive seizures. Of those six, four patients (13.8%) became free of convulsive seizures during the maximal dose period. Four patients (13.8%) had a ≥50% increase in convulsive seizures during the maximal dose period.
The median change in seizure frequency between the baseline and maximal dose periods was −3 recorded seizures (change in IQR, −11). There was no statistically significant difference in the frequency of seizures between the optimal and pre-intervention phases using a Wilcoxon rank sum test (Fig. 2).
Figure 3. (A) Median change in average seizure duration between the baseline and maximal dose periods: −2.5 seconds (IQR −12.8 seconds; p = 0.44). (B) Median change in the duration of the postictal state between the pre-intervention and maximal dose periods: −50.7 seconds (IQR −9 seconds; p = 0.26).
Excluding the four seizure-free patients, we found no differences in the average duration of seizures (Fig. 3A) or the postictal period (Fig. 3B). Figure 4 shows the changes in seizure frequency, seizure duration, and postictal duration. Seven patients had a decrease in all three metrics; 11 patients decreased in two metrics; five decreased in one metric. Six patients had an increase in seizure frequency, duration, and postictal state; four increased across two metrics; 12 increased in one metric. Figure 5 shows the changes in seizure frequency for individual patients with Focal/Multifocal Epilepsies (left), Lennox-Gastaut Syndrome (middle), and Generalized Epilepsies (right). There was no significant difference in the reduction of seizure frequency, duration, or postictal duration when the three epilepsy syndromes were analyzed separately.
Rescue medication
We found no difference in rescue medication use between the baseline and maximal dose periods (p = 0.66). The median change in the percentage of required rescue medication between the baseline and maximal dose periods was −10% (IQR, −100% to 300%).
Potential influence of clobazam
Since CBD is a potent inhibitor of cytochrome P450 2C19 and can thereby elevate levels of the active clobazam metabolite, desmethyl-clobazam, we assessed the 12 patients on clobazam. There was no signal of improved efficacy in patients on clobazam for seizure frequency, duration, or postictal duration.
Efficacy by syndrome
We assessed patients by the major epilepsy syndromes in our cohort (Focal or Multifocal, Lennox-Gastaut Syndrome, Generalized Epilepsies) (Figure 4). There were no significant differences among these groups, although there was a trend for reduced postictal state in the Focal/Multifocal and Generalized Epilepsy groups, as nine individuals had a ≥50% decrease and one had a ≥50% increase in postictal state.
Anti-seizure medications
Thirteen anti-seizure medication doses were reduced in seven patients, including benzodiazepines in six patients. Other reduced ASMs included phenobarbital, felbamate, levetiracetam, perampanel, lamotrigine, and zonisamide. Reductions occurred during the observation period for one patient and after it for six. There were no increases in ASMs during the observational period.
Adverse events
Ten adverse events were reported in 6/29 (20.7%) patients, including emesis (n = 2 [6.9%]) and bleeding from the gums/nose (n = 1 [3.4%]), and required hospitalization in two patients. Two patients were removed from the study due to serious adverse events: increased seizure frequency and increased maladaptive behaviors.
No adverse or serious adverse events were deemed related to the study drug; all were consistent with patients' histories and diagnoses.
Additional metrics
Twelve patients had behavioral disorders before study onset requiring behavioral tracking plans. The median percentage change in target behavioral episodes was −7.3% (IQR, −16.1% to 6.5%); there was no difference in average behavioral episodes between the maximal dose and pre-intervention phases (p = 0.2). Nine patients had reduced target behaviors and three had increased target behaviors during the maximal dose versus baseline periods.
There was no significant change in sleep duration between the baseline and maximal dose periods using a Wilcoxon rank sum test (p = 0.33); the median percentage change was −2.6% (IQR, −5% to 5%). No patient had a ≥50% change in average duration of sleep.
There was a marginally significant difference in the frequency of bowel movements between the maximal dose and pre-intervention phases using a Wilcoxon rank sum test (p = 0.064), reflecting a trend toward more frequent bowel movements.
There were no consistent or clinically significant abnormalities in blood cell counts or metabolic tests, including liver function studies, in our patients during the study period.
Discussion
This prospective observational study of a low-THC/high-CBD (1 T:20C or 1 T:50C) formulation in diverse TREs revealed that these MC formulations were well tolerated up to a predetermined maximum dose but did not significantly reduce seizure frequency, seizure duration, or postictal duration when analyzed at the group level. Similarly, at the group level, comparing baseline and treatment periods, we found no significant differences in rescue medication use, maladaptive behavioral episodes, or sleep duration, although there was a trend for more frequent bowel movements during treatment. All patients achieved their predetermined maximal doses using a 1 T:20C formula: 27 patients achieved a 6-mg THC with 120 mg CBD daily dose, and two patients achieved a 4-mg THC with 80 mg CBD daily dose; some patients also reached this dose of the 1 T:50C formulation, but beyond the 90-day analysis period. For most patients on the 1 T:20C formulation, the daily doses of THC were higher than in most previous studies, but the dose of CBD was lower, averaging 2.3 mg/kg/day. By contrast, Epidiolex is approved for Dravet and Lennox-Gastaut Syndromes with target CBD doses of 10-20 mg/kg/day.
There was considerable variability in convulsive seizure frequency, with some patients experiencing >50% increases or decreases during the treatment period. Similarly, seizure duration and postictal duration varied markedly between the baseline and treatment periods. We cannot distinguish spontaneous variability from positive or negative effects of MC among individual patients; it is possible that both natural fluctuations and heterogeneous effects of MC contributed to the observed variation. Among those with increased seizure frequency, seizure duration, or postictal dysfunction, all returned to baseline in these metrics after MC was reduced or eliminated; therefore, there were no lasting negative impacts from the trial. This observation can support regression to the mean, reversible adverse effects of MC, or both.
Of the 29 patients in the study, seven had their ASM doses reduced during MC therapy; however, only one had ASMs decreased during the observational period, while six had them decreased after it. No patients required increased ASMs while receiving MC. The potential benefit of decreased ASMs in this population of highly medically complex patients deserves further study.
Although there was no significant difference in rescue medication use between the pre-intervention and maximal dose phases at the group level, a third of patients had a ≥50% reduction in the use of rescue medications, with five of the 18 patients who used rescue medication requiring none during the optimal phase. Conversely, four patients who received no rescue medications during baseline required them during the optimal phase, although all had received rescue medications in the past. We found no significant difference in sleep duration between the pre-intervention and optimal phases. There was a trend for increased frequency of bowel movements during MC treatment. Since diarrhea is a potential adverse effect of higher-dose CBD, 5 this could reflect a milder effect, which was beneficial in those with constipation.
Adverse events among the patients on MC during the optimal dose phase were similar to baseline and to patients' prior records. Overall, the study drug was well tolerated by most patients. One patient on 6 mg THC and 120 mg CBD daily had increased maladaptive behaviors necessitating removal from the study during the 90-day observation period; the behaviors returned to baseline after the medication was stopped. These behaviors may have resulted from THC, since the CBD dose was relatively low and CBD is rarely associated with adverse behavioral changes even at much higher doses. Another patient had increased seizure frequency on MC leading to discontinuation; seizure frequency for this individual returned to baseline after MC was stopped. Unrelated to study drug tolerability, four patients were removed before completing the observation period because of unrelated hospitalizations. In summary, six patients were removed before completing the 90 days on the maximal dose.
This study was limited by lack of blinding and small sample size. However, the study benefited from unusually structured and near-continuous observation at TCFD before and during the study period, with detailed assessments of seizure activity as well as behavior, sleep, and other measures. Although this study was not powered to identify small but statistically significant reductions in seizure frequency or other measures, we found no trend for reduced seizure frequency or duration, suggesting that a larger sample may not yield a statistically significant difference. Two randomized controlled trials of CBD and the closely related CBDV for focal epilepsy reported efficacy equivalent to placebo. 15 Our study was constrained by the initial availability of only the 1 T:20C dosing formulation, which resulted in a relatively high THC and low CBD dose compared to prior studies. The doses of CBD used in our trial were lower than those that showed efficacy in earlier open-label and randomized controlled trials in Dravet, Lennox-Gastaut, and Tuberous Sclerosis Complex Syndromes: 2.3 mg/kg/day versus 10 to >20 mg/kg/day. [3][4][5][6][7]16 However, the doses used in our study are consistent with doses commonly employed by patients at cannabis dispensaries. For example, in New York State, a 1-month supply of CBD at our dosing would cost $385-$800, depending on the manufacturer and the THC content.
Our data suggest that THC at doses of up to 6 mg daily did not reduce seizure frequency or severity in our patients. There are an infinite number of ratios and targeted doses for THC and CBD, but it may be worthwhile assessing higher ratios of CBD:THC (e.g., 1:50) in populations with well-defined epilepsy syndromes. The trend for reduced postictal duration deserves further study. Few investigations assess postictal duration, which may be a marker of seizure severity and long-term cumulative effects of recurrent seizures on brain function.
Overall, the MC formulations were well tolerated but produced no significant reductions in seizure frequency, duration, postictal time, or rescue medication use at the group level. Ten individuals continued on a 1 T:50C formulation, with six experiencing a decrease in their ASMs after the study period, which supports potential benefits for a subset of our population. Given the widespread use of MC formulations containing both CBD and THC to treat epilepsy patients through state dispensaries, our findings suggest that larger, randomized controlled studies should be conducted to more fully assess their safety and efficacy. Further, our study, one of the first to assess postictal state duration, rescue medication use, and behavioral changes, should be replicated, as these factors can significantly affect quality of life and be determining factors for the use of MC or other therapies in TRE patients.
Limitations
This study was limited by lack of blinding, heterogeneous patient population and epilepsy syndromes, small sample size, and predetermined maximal dose ratios of THC: CBD.
Supporting Information
Additional supporting information may be found online in the Supporting Information section at the end of the article.
Table S1 Percentage Change in Rescue Medication: Total and percentage change in rescue medication administered between pre-intervention and optimal dose period, n = 18. Table S2 ASM Reduction in Dosage: Of the 29 patients in the study, seven patients decreased their dosages of anti-seizure medications while receiving MC: one decreased during the 6 months allotted for the analysis, and six decreased after. | 2022-03-11T06:23:20.275Z | 2022-03-10T00:00:00.000 | {
"year": 2022,
"sha1": "b5c6a064721af975a07e2da46a652784df01181d",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Wiley",
"pdf_hash": "dd2a1468079bd79f21070bd87e8b90f181c6bf23",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247845521 | pes2o/s2orc | v3-fos-license | The Necessity to Seal the Re-Entry Tears of Aortic Dissection After TEVAR: A Hemodynamic Indicator
Thoracic endovascular aortic repair (TEVAR) is a common treatment for Stanford type B aortic dissection (TBAD). However, re-entry tears that transport blood between the true and false lumens might be found distal to the stented region. Sealing the re-entry tears, especially the thoracic tears, could further reduce blood perfusion of the false lumen; however, it might also introduce risks from re-intervention or surgery. Wise determination of the necessity of sealing the re-entry tears is needed. In this study, patient-specific models of TBAD were reconstructed, and modified models were established by virtually excluding the thoracic re-entries. Computational hemodynamics was investigated, and the variation of the functional index and the first balance position (FBP) of the luminal pressure difference due to the sealing of the re-entries was reported. The results showed that the direction of the net flow through the unstented thoracic re-entries varied among cases. Excluding re-entries with net flow toward the false lumen may move the FBP distally and increase the relative particle residence time in the false lumen. This study preliminarily demonstrated that the hemodynamic status of the re-entry tears might serve as an indicator of the necessity of sealing. By quantifying the through-tear flow exchange and the shift of the FBP, one can predict the hemodynamic benefit of sealing the thoracic re-entries and thus wisely determine the necessity of further interventional management.
INTRODUCTION
Aortic dissection is a life-threatening cardiovascular disease with high mortality (Hagan et al., 2000). It is usually treated by thoracic endovascular aortic repair (TEVAR) or open surgery (Dake et al., 1999;Qin et al., 2013;Nienaber et al., 2016). TEVAR is more commonly applied in treating Stanford type B aortic dissection (TBAD), even for uncomplicated TBAD patients, owing to its favorable luminal remodeling and lower mortality in mid- and long-term follow-ups (Qin et al., 2016). Recent studies investigated the risk factors related to poor prognosis of TBAD after TEVAR (Watanabe et al., 2014;Spinelli et al., 2018;Higashigaito et al., 2019) and reported that the thoracic re-entries played a vital role in the post-interventional prognosis (Kotelis et al., 2016;Zhang et al., 2018). It was confirmed that the number of tears was associated with true and false lumen (TL and FL) development (Kotelis et al., 2016), and it was suggested that the tears in the descending thoracic aorta should be repaired (Zhang et al., 2018). Moreover, an experimental study of TBAD based on an ex vivo platform indicated that the re-entry tears significantly affected the movement of the flap, which then influenced the flow pattern in the FL (Canchi et al., 2018). These studies highlighted the importance of the thoracic re-entries (Marui et al., 2007;Trimarchi et al., 2010;Tolenaar et al., 2013;Trimarchi et al., 2013;Evangelista et al., 2014;Song et al., 2014;Watanabe et al., 2014;Kotelis et al., 2016;Sato et al., 2017;Zhu et al., 2017). However, their hemodynamic role and the necessity of occlusion remain unanswered questions that require quantitative hemodynamic analysis.
In recent years, hemodynamic computation has become an established approach for investigating TBAD (Cheng et al., 2008;Chen et al., 2013b;Alimohammadi et al., 2014;Cheng et al., 2015;Sun and Chaichana, 2016;Xu et al., 2018). The flow pattern, the luminal flow exchange via the tears, the relative residence time related to thrombosis development, the luminal pressure interaction, and other quantities have been studied quantitatively (Cheng et al., 2015;Xu et al., 2017;Xu et al., 2018;Pirola et al., 2019;Xu et al., 2020). Recently, we proposed a functional indicator to quantify the hemodynamic benefit of TEVAR and to predict post-TEVAR prognosis (Xu et al., 2020). It was based on the fact that the true and false luminal pressures interact with each other and that their difference relates to luminal development. By quantifying the shift of the first balance position (FBP) of the luminal pressure difference curve, one can estimate the hemodynamic benefit of implanting the stent-graft (SG). In our previous study, it was confirmed that the shift of the FBP was statistically related to subsequent luminal remodeling.
In this study, we aim to investigate the hemodynamic role of the thoracic re-entry and to propose a method to evaluate the necessity of occlusion. The flow exchange via the thoracic re-entry and its influence on the FBP were investigated, and the relationships between these hemodynamic factors and subsequent luminal development were analyzed.
Patients and Model Reconstruction
This study was approved by the Review Board of the Chinese PLA General Hospital (S201703601). Five patients with TBAD who underwent TEVAR and presented uncovered thoracic re-entries at the first post-TEVAR follow-up were included. CTA data at the initial presentation and at multiple follow-ups after TEVAR were collected. Patient-specific models were established via image segmentation and 3D reconstruction in Mimics 19.0 (Materialise, Belgium). Model surface smoothing was performed, and the smoothed model was mapped back to the images to assess the accuracy of model establishment (Figure 1A).
To investigate the effects of the uncovered thoracic re-entries on the flow, the re-entry was artificially excluded to mimic the effect of sealing the tears. Thus, in the current study, patient-specific geometric models were generated based on the pre- and post-TEVAR image datasets, and five artificially modified geometric models were created based on the first post-TEVAR follow-up model. The pre-TEVAR and first post-TEVAR (original and modified) geometric models were employed for hemodynamic computations, while the geometric models for the subsequent follow-ups were used for morphological analysis only, to quantify the luminal development and thereby investigate its relationship to the hemodynamic conditions. All of the geometric models are displayed in Figure 1E.
By comparing the FL volume at the first and second post-TEVAR follow-ups (V_FL-1 and V_FL-2), the FL remodeling status could be quantified as V_FL-2 − V_FL-1: positive values indicated FL expansion, while negative values indicated FL reduction. By this means, the patient cases were categorized into two groups: FL expansion was found in patients 1# and 2# (group A), who experienced re-intervention to seal the thoracic re-entry post-TEVAR; the other patients (3#, 4#, and 5#), with stable FL remodeling, were categorized as group B.
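The grouping rule above reduces to a sign test on the change in false-lumen volume. A minimal sketch is shown below; the volumes are hypothetical placeholders (in mL), not the patients' measured values.

```python
# Sketch of the grouping rule described above: compare false-lumen volume at the
# second (V_FL-2) and first (V_FL-1) post-TEVAR follow-ups.
def classify_fl_remodeling(v_fl_1: float, v_fl_2: float) -> str:
    """Positive difference -> FL expansion (group A); otherwise stable/reduced (group B)."""
    return "A (FL expansion)" if (v_fl_2 - v_fl_1) > 0 else "B (stable/reduced FL)"

follow_up_volumes = {  # patient id: (V_FL-1, V_FL-2), hypothetical values in mL
    "1#": (180.0, 205.0),
    "3#": (150.0, 120.0),
}
for pid, (v1, v2) in follow_up_volumes.items():
    print(pid, classify_fl_remodeling(v1, v2))
```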
These models were imported into ICEM (Ansys 18.0, United States) for meshing, with tetrahedral elements in the core region and prismatic cells (5 layers) in the boundary layer near the aortic wall. The number of elements in these models varied from 3 to 4.5 million. A mesh sensitivity test conducted in a previous study indicated that the number of elements used here was adequate (Xu et al., 2017).
Numerical Simulation and Boundary Conditions
According to previous studies, blood was treated as a Newtonian fluid with a dynamic viscosity of 0.00365 Pa·s and a density of 1,060 kg/m³. The inlet of each model was assigned a velocity inlet with a flat profile, and the flow waveform was taken from a previous study (Dillon-Murphy et al., 2016). A 3-element Windkessel model, which can capture the distal outlet resistance and the compliance of the vessel, was assigned at each outlet, with the relevant parameters taken from the same study (Dillon-Murphy et al., 2016). The vessel wall was treated as rigid with a no-slip condition, owing to the low distensibility of the aorta/arteries in patients with aortic dissection (Chen et al., 2013a;Alimohammadi et al., 2014;Cheng et al., 2014;Cheng et al., 2015). The time step was set to 0.005 s; a time-step sensitivity test was conducted in a previous study (Menichini et al., 2018).
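To make the outlet condition concrete: a 3-element Windkessel relates each outlet's pressure to its flow through a proximal resistance, a compliance, and a distal resistance. The sketch below integrates this relation with a simple forward-Euler loop; the parameter values and the toy flow waveform are illustrative assumptions, not the values from Dillon-Murphy et al. (2016).

```python
import numpy as np

# Minimal sketch of a 3-element (RCR) Windkessel outlet model, assuming zero
# distal venous pressure. Parameters are illustrative placeholders (SI units).
Rp, Rd, C = 1.0e8, 7.0e9, 1.0e-10   # proximal resistance, distal resistance, compliance
dt, period, n_cycles = 0.005, 0.8, 4
n_steps = int(n_cycles * period / dt)

t = np.arange(n_steps) * dt
# Toy pulsatile outflow waveform (m^3/s) standing in for the computed outlet flow.
Q = 5e-6 * np.maximum(np.sin(2 * np.pi * t / period), 0.0)

Pc = np.zeros(n_steps)              # pressure across the compliance element
for i in range(n_steps - 1):
    # C dPc/dt = Q - Pc/Rd  (forward Euler update)
    Pc[i + 1] = Pc[i] + dt * (Q[i] - Pc[i] / Rd) / C

P_outlet = Pc + Rp * Q              # pressure imposed at the 3D model outlet
print(f"peak outlet pressure ~ {P_outlet.max():.0f} Pa")
```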
All flow simulations were run in CFX 18.0 (ANSYS, United States) to solve the governing transport equations, i.e., the continuity and Navier-Stokes equations for an incompressible Newtonian fluid (Eqs. 1, 2):

$$\nabla \cdot \mathbf{u} = 0 \qquad (1)$$

$$\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla P + \mu \nabla^{2} \mathbf{u} \qquad (2)$$

where u stands for velocity, ρ stands for density, μ represents dynamic viscosity, and P denotes pressure. Simulations were carried out for four cardiac cycles for each model to achieve periodic solutions, and the results of the last cycle were selected for post-processing. Convergence of the solution was controlled by specifying a maximum root-mean-square residual of 10⁻⁶.
Morphological Analysis
Several morphological parameters were calculated in this study, including the number of tears, tear size, and the luminal volumes. Figure 2 indicates the volumes of the TL and FL and their variation during follow-up. By comparing the FL volume at the second and first post-TEVAR follow-ups (V_FL-2 − V_FL-1), patients 1# and 2# presented the potential for further FL enlargement (Figure 2A) and were thus categorized into group A. For the patients in group B, the FL volume decreased during the whole follow-up period. The variations in TL volume showed some differences, as only the TL volume of patient 2# decreased, and only trivially; the TL volume increased for the other cases, including patient 1#, as displayed in Figure 2B. Table 1 lists the detailed information on the tears, including the primary tear before TEVAR and the re-entry tears at the first follow-up after TEVAR. The case with the largest primary tear was patient 4# with 407 mm², followed by patient 3# with an area of 386.0 mm²; both belonged to group B. Furthermore, patient 3# was associated with the largest total re-entry tear area, and its thoracic re-entry was closed in the few months after the first post-TEVAR follow-up, as shown in our previous study (Xu et al., 2017). There were 9 re-entry tears in patient 2#, but the total tear area was relatively low (126.3 mm²) due to the small size of each tear. The smallest thoracic re-entry appeared in patient 4#, with a size of merely 7 mm².
Table 1 notes: (a) Tears along the aorta were counted and measured, while those in the iliac arteries were not included. (b) The centerline of the true lumen was extracted for each geometric model; the highest position, located in the aortic arch region, was assigned as the reference point, and the straight-line distance and the curve distance along the centerline from this reference point to the centroid of each tear were measured as the location of the tears. (c) Data measured in the geometric models before TEVAR. (d) Data measured in the geometric models at the first follow-up examination after TEVAR.
Luminal Pressure Difference and the First Balance Point
The luminal pressure difference (LPD) from the proximal dissection to the iliac bifurcation was calculated over a cardiac cycle using the same method as our previous study (Xu et al., 2020). A series of cross-sectional slices perpendicular to the centerline of the TL was created, and the pressure on each slice, averaged over the cardiac cycle, was calculated. The LPD on each slice equaled P_TL − P_FL, and the LPD curves along the TL centerline for all patients are shown in Figure 3. Moreover, the FBP, the first position at which the TL and FL pressures are equal (LPD = 0), was investigated in this study. The distance between the root of the proximal subclavian artery and the iliac bifurcation along the TL centerline was normalized to 424 mm for all cases, and the movement of the FBP between the pre-TEVAR model and the first post-TEVAR follow-up was measured for all patients. According to the conclusion of our previous study, the location of the FBP post-TEVAR was related to the prognosis at long-term follow-up. The distal shifts of the FBP in group A were 29.2 and 60.23 mm, while in group B the shifts were 95.8, 153.9, and 151.73 mm. If the thoracic re-entry was excluded, the FBP would move proximally for patients 1# (45.1 mm) and 3# (9 mm), as indicated by the red arrows in Figure 3. For patients 2# and 4#, the FBP would move distally by 7.84 and 2.89 mm, respectively. For patient 5#, if the thoracic re-entry was excluded, there was no balance point, meaning the FBP moved distally beyond the studied aortic region; the distance between the FBP and the iliac bifurcation was 174.1 mm. These results indicated that the direction of the FBP shift upon exclusion of the thoracic re-entry varied among cases. In other words, the thoracic re-entry affected the location of the FBP and may induce different prognoses for TBAD post-TEVAR.
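As an illustration of how the FBP can be located from such an LPD curve, the sketch below finds the first sign change of the cycle-averaged pressure difference along the normalized centerline and interpolates between the bracketing slices. The LPD profile here is a synthetic placeholder, not a patient's computed curve.

```python
import numpy as np

# Sketch of locating the first balance position (FBP): the first point along the
# true-lumen centerline where the cycle-averaged LPD (P_TL - P_FL) crosses zero.
s = np.linspace(0.0, 424.0, 200)           # normalized centerline distance (mm)
lpd = 400.0 * np.cos(np.pi * s / 300.0)    # hypothetical LPD curve (Pa)

def first_balance_position(s, lpd):
    """Return the centerline distance of the first zero crossing of LPD, or None."""
    sign_change = np.where(np.sign(lpd[:-1]) != np.sign(lpd[1:]))[0]
    if sign_change.size == 0:
        return None                        # no balance point within the studied region
    i = sign_change[0]
    # Linear interpolation between the two slices bracketing the crossing.
    frac = lpd[i] / (lpd[i] - lpd[i + 1])
    return s[i] + frac * (s[i + 1] - s[i])

fbp = first_balance_position(s, lpd)
print("no FBP in region" if fbp is None else f"FBP at ~{fbp:.1f} mm along the centerline")
```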
Flow Exchange Between True Lumen and False Lumen
The flow entering the FL via the primary tear was regarded as a key factor that may relate to the prognosis after TEVAR. Figures 4A and 4B show the flow rate variations through the primary tear for both groups. The results indicated that the primary tear acted as the inlet to the FL for almost the entire cardiac cycle, especially for patients 3# and 4#, in whom the area of the primary tear was relatively large. The flow split ratio, defined as the percentage of the flow rate through a specific tear relative to the total inlet flow, through the primary tear for patients 1# and 2# in group A was 41.80% and 16.29%, respectively, while that for patients 3#, 4#, and 5# of group B was 46.81%, 10.95%, and 44.75%, respectively. The flow entering the FL over a cardiac cycle through the thoracic re-entry and the total flow exchange in the descending aorta were calculated for all cases after TEVAR, as given in Table 2; negative values indicate that the re-entry tears contributed to negative transportation of blood toward the FL. Figures 4C and 4D display the flow rate exchange over a cardiac cycle. For the thoracic re-entry in group A, the tear mainly acted as the inlet to the FL during early systole, while it acted as the outlet of the FL in late systole and during the entire diastole. Moreover, the tear areas affected the flow exchange: in patient 4#, the thoracic re-entry area was merely 7.0 mm² and the corresponding flow exchange was small, while for patients 1# and 3#, the thoracic re-entry areas were 62.6 and 70.7 mm², respectively, which induced relatively larger flow exchanges. Table 2 quantifies the flow exchange through the re-entry tears and the total flow that passed through the FL after TEVAR. The results indicated that the thoracic re-entry contributed to positive transportation of blood toward the FL for patients 2#, 4#, and 5#, with flow splits to the FL of 2.29%, 2.69%, and 2.29%, respectively, while the thoracic re-entry contributed to negative transportation of blood toward the FL for patients 1# and 3#, with flow splits of 4.40% and 2.91%, respectively. The amount of flow varied among the cases according to their tear areas and aortic geometry. The total flow splitting to the FL through all re-entries in the descending aorta after TEVAR varied from 6.23% to 15.59% across patients. The FL flow splitting ratio declined to varying degrees when the thoracic re-entry tears were blocked.
Table 2 notes: (a) Flow exchange is presented as the through-tear flow ratio to the inflow at the inlet of the ascending aorta. (b) Positive values indicate blood volume transported from the true lumen to the false lumen, while negative values indicate reversed flow from the false lumen to the true lumen.
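A minimal sketch of the through-tear flow quantification reported above: integrating the through-tear flow rate over one cardiac cycle gives the net transported volume, whose sign gives the direction of net transport and whose ratio to the inlet volume gives the flow split. Both waveforms below are synthetic placeholders, not computed patient data.

```python
import numpy as np

# Sketch of quantifying through-tear flow exchange over one cardiac cycle.
# Positive flow is taken as TL -> FL; the waveforms are synthetic placeholders.
T = 0.8                                   # cardiac period (s), assumed
t = np.linspace(0.0, T, 161)
q_inlet = 4e-4 * np.maximum(np.sin(2 * np.pi * t / T), 0.05)  # aortic inflow (m^3/s)
q_tear = 2e-5 * np.sin(2 * np.pi * t / T) - 4e-6              # through-tear flow (m^3/s)

net_tear_volume = np.trapz(q_tear, t)     # net volume through the tear per cycle
inlet_volume = np.trapz(q_inlet, t)       # total inflow volume per cycle

split_ratio = 100.0 * net_tear_volume / inlet_volume
direction = "toward FL" if net_tear_volume > 0 else "toward TL"
print(f"net through-tear split: {split_ratio:+.2f}% of inlet flow ({direction})")
```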
Relative Residence Time
Relative residence time (RRT) is a key hemodynamic parameter that can, to some degree, disclose the potential for thrombosis in the FL. The values of RRT before and after the thoracic re-entry was sealed were compared in this study. As observed in Figure 5, RRT was higher at the proximal tip of the FL when the thoracic re-entry was excluded, and the thrombosis process would be sped up compared with the original aortic geometry without re-entry exclusion (Armour et al., 2020). As shown in Figure 5, the region where RRT was lower than 5,000 increased significantly when the thoracic re-entry was removed. These cases presented a longer distance between the location of the re-entry tear and the proximal tip of the FL. For patient 2#, the second re-entry tear was located close to the proximal tip of the FL; thus, the difference in RRT distribution was trivial before and after the thoracic re-entry was removed.
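The paper does not state the formula it used for RRT; for reference, RRT is commonly derived from the time-averaged wall shear stress (TAWSS) and the oscillatory shear index (OSI), as in the standard definitions sketched below. The symbols τ_w (instantaneous wall shear stress vector) and T (cardiac period) are the usual conventions, not notation taken from this paper.

```latex
% Standard definitions (an assumption; not stated explicitly in the paper):
\mathrm{TAWSS} = \frac{1}{T}\int_{0}^{T}\lvert \vec{\tau}_w \rvert \,dt,
\qquad
\mathrm{OSI} = \frac{1}{2}\left(1-\frac{\bigl\lvert \int_{0}^{T} \vec{\tau}_w \,dt \bigr\rvert}{\int_{0}^{T}\lvert \vec{\tau}_w \rvert \,dt}\right),
\qquad
\mathrm{RRT} = \frac{1}{\left(1-2\,\mathrm{OSI}\right)\mathrm{TAWSS}}
```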
DISCUSSION
Re-entry tears play an important role in the prognosis of post-TEVAR patients. Because blood enters the FL through the thoracic re-entry around the systolic peak, as shown in Figure 4C, the inflow strikes the FL vessel wall, leading to a higher risk of FL expansion. However, whether sealing the thoracic re-entries is beneficial to all patients remains controversial: the flow exchange through a thoracic re-entry might induce further FL expansion, while sealing these tears with an SG might pose additional risk during the intervention (Zhang et al., 2018). The necessity of excluding the thoracic re-entry should therefore be further clarified. Previous studies showed that TBAD patients benefited from the absence of distal re-entry tears (Zhu et al., 2017) and that the number of re-entry tears was closely related to aortic growth. In this study, we aimed to further reveal the hemodynamic significance of the thoracic re-entry, investigate its relation to the prognosis after TEVAR, and propose a method to evaluate, from a hemodynamic perspective, the necessity of intervening on the thoracic re-entry.
As a preliminary study, five TBAD cases were studied. Two of them underwent a second intervention post-TEVAR to exclude the thoracic re-entries because of FL enlargement. First, the distal shift of the FBP after TEVAR was calculated and compared between group A (enlarged FL) and group B (stable or vanished FL). The shift of the FBP in group A was smaller than that in group B (44.72 ± 15.52 mm vs 133.8 ± 32.9 mm), consistent with the results of our previous study (Xu et al., 2020), indicating that the hemodynamic benefit of TEVAR in group B was much greater than that in group A. Then, the influence of thoracic re-entry exclusion on the FBP was investigated. As illustrated in Figure 6, the direction of the FBP shift upon sealing the thoracic re-entry depended on the hemodynamic role of these tears. If the thoracic re-entry contributed to positive transportation of blood toward the FL, exclusion of the re-entry would induce a distal shift of the FBP (Figure 6A), while if it contributed to negative transportation of blood toward the FL, sealing it might induce a proximal movement of the FBP (Figure 6B). As demonstrated in our previous study (Xu et al., 2020), a distally positioned FBP indicated a positive effect on luminal development, while a proximal shift of the FBP implies that sealing this type of thoracic re-entry might not contribute positively to luminal remodeling. For instance, there are three re-entry tears in patient 1#; the split flow ratio entering the FL through the second re-entry tear was 11.2%, which decreased to 7.84% when the thoracic re-entry was excluded. The reduction of flow entering the FL may reduce the impact of flow on the wall and contribute to positive thrombus formation in the FL. Thus, the hemodynamic role of the thoracic re-entry may be an indicator for evaluating the necessity of sealing the re-entry in post-TEVAR TBAD.
In the current study, sealing of the thoracic tear was performed in patients 1# and 2# via re-intervention. Based on the proposed hemodynamic indicator, occlusion of the re-entry in patient 1# might not be beneficial, whereas that for patient 2# might have the potential to promote positive luminal remodeling. During the long-term follow-up of these two patients, continuous FL expansion was observed in patient 1#, while significant FL reduction was shown in patient 2#. Although the number of cases in the current study was limited, the results showed the potential of the hemodynamic indicator to better determine the necessity of tear exclusion.
The thrombosis process in the FL affects the prognosis post-TEVAR (Song et al., 2014;Menichini et al., 2018). It was found that the thoracic re-entry influenced the fluid environment of the FL and thus affected thrombosis development. As shown in Figure 5, the cases with thoracic re-entry exclusion exhibited larger RRT, indicating stagnant blood flow at the top of the FL, where surface thrombosis would be generated. This is consistent with a previous study which confirmed that thrombosis was more likely to be established in the FL of patients without a thoracic re-entry (Armour et al., 2020).
The re-entry tear is an important factor influencing the pressure environment of TBAD; however, other factors also exist. For instance, FL branches, which have been reported to relate to complications (Ge et al., 2017;Liu et al., 2018), may also affect the pressure distribution. In the current study, however, a similar result was not found: there were no FL branches in patients 1# and 2#, while the dissections of patients 3# and 4# were associated with branches, which indicates that the involved branches played an insignificant role in the cases included here. A hemodynamic study is a significant supplement to clinical statistical studies, as it can provide functional parameters related to prognosis. Previous studies concluded that the thoracic re-entry contributed to positive transportation of blood toward the FL (Zhang et al., 2018). However, our results showed that the tear might also contribute to negative transportation of blood toward the FL in some patients.
As a preliminary study, this work has a few limitations, such as the rigid wall assumption and the small number of cases. Because of the complex geometry and the lack of actual material properties, existing fluid-structure interaction studies of TBAD often generated the aortic/dissection wall with arbitrary thickness and assumed linear elastic mechanical properties for the wall/flap. More accurate simulations are highly dependent on accurate model establishment and material property measurements, which are currently being carried out in our laboratory. Since TBAD patients who presented a thoracic re-entry post-TEVAR and had routinely examined follow-up images were limited in our center, only five cases were recruited in the current study. Continuous data collection is under way in our center, and a more convincing conclusion could be drawn in the future.
CONCLUSION
The current study investigated the hemodynamic significance of the unstented thoracic re-entry in TBAD. The results indicated that: i) owing to the varied morphological conditions of each patient, the thoracic re-entry might contribute to positive or negative transportation of blood toward the false lumen; ii) sealing thoracic re-entry tears that contribute positive flow to the false lumen would induce a distal shift of the first balance position of the luminal pressure difference curve, and vice versa. Compared with the long-term follow-up results of luminal remodeling, this preliminary study implies that the hemodynamic role might be a more effective indicator for determining the necessity of sealing the thoracic re-entries. This might contribute to wise decision-making on re-intervention or surgery after TEVAR.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
This study was approved by the Institutional Review Board of Chinese PLA General Hospital (S201703601). | 2022-04-01T13:11:42.121Z | 2022-03-31T00:00:00.000 | {
"year": 2022,
"sha1": "8564498cfbc516367cdbf5a88228248d54a2299e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "8564498cfbc516367cdbf5a88228248d54a2299e",
"s2fieldsofstudy": [
"Medicine",
"Engineering",
"Biology"
],
"extfieldsofstudy": []
} |
14642281 | pes2o/s2orc | v3-fos-license | Antituberculosis Drug Resistance Survey in Lesotho, 2008-2009: Lessons Learned
Setting Drug resistance is an increasing threat to tuberculosis (TB) control worldwide. The World Health Organization advises monitoring for drug resistance with either ongoing surveillance or periodic surveys. Methods The antituberculosis drug resistance survey was conducted in Lesotho in 2008-2009. Basic demographic and TB history information was collected from individuals with positive sputum smear results at 17 diagnostic facilities. An additional sputum sample was sent to the national TB reference laboratory for culture and drug susceptibility testing. Results Among 3441 eligible smear-positive persons, 1121 (32.6%) were not requested to submit sputum for culture. Among the 2320 persons who submitted sputum, 1164 (50.2%) were not asked for clinical information or did not have valid sputum samples for testing. In addition, 445/2320 (19.2%) were excluded from analysis for other laboratory or data management reasons. Among the 984/3441 (28.6%) persons who had data available for analysis, MDR-TB was present in 24/773 (3.1%) of new and 25/195 (12.8%) of retreatment TB cases. Logistical, operational and data management challenges affected survey results. Conclusion MDR-TB is prevalent in Lesotho, but limitations reduced the reliability of our findings. Multiple lessons learned during this survey can be applied to improve the next drug resistance survey in Lesotho, and other resource-constrained countries may learn how to avoid these bottlenecks.
Introduction
Drug-resistant tuberculosis (DR-TB) threatens global TB control and is a major public health concern in many countries. Multidrug-resistant tuberculosis (MDR-TB) and extensively drug-resistant tuberculosis (XDR-TB) are increasingly being found in resource-limited settings [1][2]. Globally, 136 412 cases of MDR-TB or rifampicin-resistant TB (RR-TB) eligible for MDR-TB treatment were notified to the World Health Organization (WHO) in 2013, mostly by countries in the European Region, India, and South Africa. Furthermore, in South Africa, DR-TB and HIV have converged in a deadly syndemic, defined by increased incidences of TB and HIV, endemic transmission of DR-TB strains, high mortality rates, and poor treatment outcomes [3][4]. About 10% of MDR-TB cases in South Africa have XDR-TB [5].
The Kingdom of Lesotho (Lesotho) is a mountainous country completely surrounded by South Africa (Fig 1). Mean temperatures range from -3°C to 32°C in the lowlands and from -8.5°C to 20°C in high-altitude areas. Frost can occur between February and November in the high-altitude areas [6]. The WHO reported a population of 2.1 million people living in Lesotho in 2013. The average life expectancy at birth in 2013 was 48.7 years, a reduction from global norms that is largely attributable to the HIV/AIDS epidemic in the country [7]. The prevalence of HIV among adults 15-49 years old has stabilized at a high level and has not significantly decreased since the year 2000; currently, the prevalence of HIV stands at 23% [1,7]. TB remains a major public health challenge in the country. Lesotho is among the 21 countries with the highest incidence of TB, reporting 916 new TB cases per 100,000 population in 2013 [1].
A national MDR/XDR-TB program in Lesotho was started in collaboration with partners in 2007, followed by upgrading of laboratory capacity to diagnose TB and the establishment of an MDR/XDR-TB referral hospital in Botsabelo [8]. According to the drug resistance survey conducted in Lesotho in 1995, 12.3% of all TB cases had any form of resistance, 11% had isoniazid resistance, and 1.6% had rifampin resistance [9]. However, the high MDR-TB prevalence (10%) and the outbreak of XDR-TB in the neighboring KwaZulu-Natal Province of South Africa have generated serious concerns about the real extent of the MDR-TB problem in Lesotho [10][11].
In 2008, the National Tuberculosis Program (NTP), supported by partners, conducted a drug resistance survey (DRS) to determine the extent and pattern of TB drug resistance in the country. This manuscript presents the main findings of the DRS in Lesotho, discusses the logistical, laboratory and other operational challenges encountered during the survey and provides recommendations for future surveys.
Methods
The Lesotho NTP conducted a cross-sectional survey on TB drug resistance involving sputum smear microscopy, culture, and drug susceptibility testing (DST) among newly diagnosed smear-positive pulmonary TB patients from 23 June 2008 to 31 March 2009. The study was designed to conform to WHO protocol guidelines for a periodic drug resistance survey [11]. Based on the number of new smear-positive TB cases in Lesotho and available data on rifampin resistance in neighboring South Africa, the estimated sample size for the survey was 896 cultures from new smear-positive patients to give a precision within 1%-2% of the true value with 95% confidence. All smear-positive retreatment patients diagnosed during the survey period were included in the survey but did not contribute to the calculated sample size.
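For readers who want to reproduce this style of calculation, the sketch below applies the standard sample-size formula for estimating a proportion with a given absolute precision at 95% confidence. The assumed resistance prevalences are placeholders; the survey's exact inputs (drawn from South African rifampin-resistance data) are not restated here, so the outputs will not exactly reproduce n = 896.

```python
import math

# Sketch of the standard sample-size calculation for estimating a proportion,
# the approach WHO drug resistance survey guidelines use. The assumed prevalence
# values are placeholders, not the exact figures used to derive n = 896.
def sample_size(p: float, d: float, z: float = 1.96) -> int:
    """Minimum n to estimate prevalence p within +/- d at ~95% confidence."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

for p_assumed in (0.02, 0.05):           # hypothetical rifampin-resistance prevalences
    for d in (0.01, 0.02):               # absolute precision, per the 1%-2% range above
        print(f"p={p_assumed:.2f}, d={d:.2f} -> n={sample_size(p_assumed, d)}")
```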
Every patient aged 15 years or older seen at inpatient and outpatient sites during the survey period who had a sputum sample submitted to any of the 17 national TB diagnostic centers, received a smear-positive result, and was diagnosed with pulmonary TB was eligible for the survey. Patients incarcerated at the time of the survey, residents of mental health institutions, and those already receiving TB treatment were excluded. Personnel of all 17 diagnostic centers in Lesotho were instructed to record the demographic information and the results from three sputum smears on an eligibility form, and the treatment history on a clinical form, for all cases during the survey period. Each eligible patient was assigned a unique identification number (ID) and asked to submit an additional sample to be sent to the Lesotho National Tuberculosis Reference Laboratory (NTRL) for culture. The forms were transported to the NTP office in Maseru, where data were entered into two standardized EpiData databases: the 'Eligibility' and 'Clinical' databases. HIV data were not included in this survey; however, all patients were offered HIV testing and counselling as part of a comprehensive package of care, and those who tested positive could access treatment.
Löwenstein-Jensen (LJ) cultures for the survey specimens were performed in the NTRL. DST was not done in Lesotho because proficiency testing had not been completed by the time of the survey. Culture-positive isolates were sent from the NTRL to a Supranational Reference Laboratory (SRL) in Borstel, Germany for DST; the results were entered in the 'Laboratory' database. The survey was intended to continue until the required sample size was reached.
For data analysis, we linked the Eligibility and Clinical databases using the survey unique ID. CDC staff assisted in data management and validation, including finding discrepancies in the data, tracking down missing records, determining patients' DST results, and finalizing the database for analysis. Study staff attempted to locate the original forms, correct the discrepancies, and match records. In cases where we could not identify the original form, an algorithm based on the patient's age, sex, district, diagnostic center, and date of registration was used to match a patient's clinical information to their eligibility information.
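A sketch of the fallback matching rule described above is given below: clinical records are linked to eligibility records when demographics agree and registration dates fall within a small window. The field names, tolerance values, and example records are illustrative assumptions, not the survey's actual data dictionary.

```python
# Sketch of the fallback record-matching rule described above. When the survey
# ID failed, clinical records were matched to eligibility records on age, sex,
# district, diagnostic center, and registration date. Field names and the exact
# matching tolerances are assumptions for illustration.
from datetime import date

def match_record(clinical: dict, eligibility_rows: list, max_day_gap: int = 3):
    """Return the eligibility row agreeing on demographics with a nearby date."""
    candidates = []
    for row in eligibility_rows:
        if (row["sex"] == clinical["sex"]
                and row["district"] == clinical["district"]
                and row["diagnostic_center"] == clinical["diagnostic_center"]
                and abs(row["age"] - clinical["age"]) <= 1
                and abs((row["reg_date"] - clinical["reg_date"]).days) <= max_day_gap):
            candidates.append(row)
    # Only an unambiguous single candidate counts as a match.
    return candidates[0] if len(candidates) == 1 else None

clinical = {"sex": "M", "district": "Maseru", "diagnostic_center": "Center A",
            "age": 35, "reg_date": date(2008, 7, 14)}
eligibility = [{"sex": "M", "district": "Maseru", "diagnostic_center": "Center A",
                "age": 35, "reg_date": date(2008, 7, 15), "survey_id": "DRS-0421"}]
print(match_record(clinical, eligibility))
```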
To assess the limitations identified during data analysis, we evaluated the survey's standard operating procedures (SOPs) including flow of specimen and logistics, laboratory records and other information related to data accuracy and completeness.
Ethics Statement
The study does not involve human subjects above and beyond what is normally conducted during routine medical care for TB suspects, namely the collection of sputum for examination. CDC and the Lesotho NTP determined this activity as public health surveillance rather than human subjects research. A formal written waiver for the need of ethics approval was issued by CDC/DTBE associate director of science.
Results
The national TB services registered 3441 smear-positive persons with pulmonary TB at 17 diagnostic centers during the survey period (June 2008-March 2009). Sputum specimens for culture were collected from 2320 (67.4%) persons (Fig 2). However, 1164 (50.2%) of the 2320 eligible persons who submitted sputum specimens for culture were excluded from the study for the following reasons: 142 (6.2%) had specimens that were discarded because of labeling errors, 131 (5.6%) had specimens that were contaminated, and 891 (38.4%) persons had no clinical information collected (Fig 2). The remaining 1156 (49.8%) persons had both sputum available for culture and demographic and clinical information available for analysis.
Among these 1156 persons, 18 were further excluded from analysis: 10 had no growth on culture, four had specimen cups that leaked and were discarded, one had culture contamination, and three cultures grew non-tuberculous mycobacteria (NTM). The remaining 1138 persons had positive M. tuberculosis cultures; cultures from 1066 of them were sent to the Borstel SRL for first-line DST. Laboratory testing at the Borstel SRL excluded an additional three persons: two specimens grew NTM only, and one culture was determined not viable. After data review, an additional 79 cases were excluded from analysis because study data could not be matched with DST results (n = 78) or no treatment history was recorded (n = 1). The final database for analysis included 984 cases with both complete data and M. tuberculosis DST results from the Borstel SRL, or 28.6% of the 3441 eligible persons during the survey period. Among the 984 cases in the final database, 786 (79.9%) were new. The main demographic characteristics were similar between the 984/3441 individuals with complete data and the 2457/3441 individuals who were eligible but did not participate in the survey: the median age was 35 years in both groups, and 58.4% and 57.5% were male, respectively.
We found that 81/773 (10.5%) of new TB patients in Lesotho had M. tuberculosis isolates resistant to at least isoniazid or rifampin, and 3.1% of new TB patients had MDR-TB (Table 1). Among retreatment patients, 23.1% had isolates resistant to at least isoniazid or rifampin: 16.4% had any resistance to isoniazid and 19.5% had any resistance to rifampin. Mono-resistance to either isoniazid or rifampin occurred in 8.7% of isolates from retreatment patients, while 12.8% of retreatment patients had MDR-TB. The patterns of resistance to other first-line drugs (streptomycin, ethambutol, pyrazinamide) and key second-line drugs for MDR-TB isolates are presented in Table 2.
Upon careful investigation of logistical, laboratory and other information collected during survey implementation, we determined that three obvious breakdowns in procedures detracted from this survey.
Logistics
We found that 1121/3441 (32.6%) eligible patients had missing specimens because patients were being seen but no sputum samples for the DRS were being collected. We also found that 1164 (50.2%) of the 2320 patients with sputum samples collected for the DRS had missing clinical information or invalid samples. We later determined that this happened because the specimen for the DRS was not requested during the patient's first visit to the diagnostic facility; if a patient did not return, no effort was made to follow up, and the specimen for the DRS was never collected. According to personnel at the diagnostic centers, patients did not come back for various reasons, including difficult or expensive transportation and bad weather (snow). Furthermore, clinical forms were not completed for some patients who did come back to provide an additional sputum sample for the DRS. Specific logistical problems that were reported included shortages in the supply of data collection forms, the courier not collecting samples on time, and labels not being attached properly to specimen containers.
Laboratory
The NTRL reported many missing specimens. The main reported reason for missing specimens was the logistical issues encountered during survey implementation. Other laboratory challenges included low sample volumes and leakage of samples during transportation. Winter weather hazards were also a consideration, blocking specimen transport. In addition, crystallization of the transport medium for sputum samples occurred because of the cold weather.
Data management
A large number of records in the 'Clinical' database did not have a corresponding record in the 'Eligibility' database. This was most often because of inconsistency in the assignment of the DRS unique ID and in the recording of sex on the respective Clinical and Eligibility forms. The majority of Clinical forms with missing corresponding Eligibility forms came from five diagnostic centers (Maseru, Berea, Leribe, Motebeng and Mafeteng), for patients who were registered at the start of the survey. The study coordinator reported that during data collection at these sites, clinical forms were sometimes completed before eligibility forms, increasing the probability of missing forms and inconsistent data recording.
Inconsistencies in assignment of the DRS unique ID resulted in a substantial portion of first-line DST results not matching with patients' clinical information. Because of this, 13% of patients' first-line DST results were excluded from analysis. In some cases we were able to find patient laboratory numbers in district hospital records, and this allowed for abstraction of the missing clinical information. Other data management issues included missing data in the clinical history sections, dates written incorrectly, and skip patterns on the data collection forms that were not followed.
Discussion
The implementation of this drug resistance survey in Lesotho was complicated, and the reliability of the results was diminished by survey problems in logistics, laboratory, and data management. Clinical and laboratory information required for the survey analysis was collected for only one third (984/3441) of eligible individuals, including 786 new TB cases. The estimated sample size of 896 cultures from new smear-positive cases was not reached.
Large nationwide studies such as drug resistance surveys are challenging in resource-limited settings [12][13][14]. Laboratory capacity was reported as the most common operational barrier for many surveys [14]. Other reported operational barriers included the considerable human resources needed to interview patients and verify classification, and the extensive national and international transport networks required to ship sputum specimens, cultures, and M. tuberculosis isolates within and across national borders [12]. Some desirable and valuable components of surveys-for example, larger sample sizes, better differentiation of subcategories of previously treated cases, HIV testing, and DST to second-line drugs-come at great additional expense and workload. In addition, survey data are prone to errors that may to some extent invalidate the findings; errors, or biases, may relate to the selection of subjects, laboratory testing, and data collection or analysis [12]. Lesotho faced these barriers despite the intention to conduct the survey as part of country capacity building. The main weakness of this survey was the large proportion of eligible TB patients seen during the survey period who were not included in the survey, either because a sputum sample for the DRS was not submitted or because clinical information was not collected (66.4%). The major reason for losing eligible patients was waiting to collect an additional specimen for the DRS and clinical information until the patient came back for microscopy results. This problem appeared to be limited to specific locales, which could be targeted for special training and more innovative approaches the next time a survey is conducted. In some instances patients could not come back because of harsh terrain and weather, as that was a year of severe winter with snow in Lesotho. Another weakness of the survey was the failure to collect HIV data. Given the high HIV prevalence in the country, surveillance of HIV among MDR-TB patients over time is important and should be included in survey operations [5,12]. Laboratory challenges, including specimen transportation failures and crystallization of sputum samples, were due to winter weather and could be addressed by better organization of sputum transportation during winter seasons. Other laboratory issues, including low sample volumes, leakage of samples during transportation, and mislabeling of the tubes for sputum collection, were associated with a lack or deficiency of training for nurses and related DRS personnel in local health centers. Refresher trainings, along with intensive supervision and monitoring during DRS implementation, should be conducted at all levels of the field operations. Appropriate instructions and available training materials for patients at the enrollment facilities could also improve quality and provide the required amount of sputum for the survey. The main reason for losing first-line DST results during the survey was inconsistency in the assignment of the DRS unique ID to enrolled patients. At some enrollment sites, a local treatment registration number was used as the DRS ID, resulting in the loss of eligible patients who did not come back for treatment. Clear instructions for data collection and management should be provided prior to survey implementation.
Nonetheless, our findings suggested the possibility of prevalent DR-TB in new patients in Lesotho (Table 1). Efforts by the Government of the Kingdom of Lesotho should be aimed at rapid case detection, thorough drug susceptibility testing, timely initiation of treatment, and infection control measures in TB and HIV treatment clinics in order to interrupt transmission. In addition, the isolates from MDR-TB patients showed resistance to the other first-line drugs tested in the Borstel laboratory. Although only a few MDR isolates were tested, the prevalence of resistance to streptomycin and ethambutol was worrisome (Table 2). First-line drugs are recommended for the treatment of MDR-TB whenever the DST results indicate susceptibility [13]. A high degree of resistance to all first-line drugs will make treatment of MDR-TB more difficult for the national program.
We could not draw conclusions about the prevalence of second-line drug resistance, and thus the prevalence of XDR-TB, in Lesotho. Although the XDR-TB outbreak in neighboring KwaZulu-Natal raises concerns about second-line drug resistance in Lesotho, we found little evidence of it (Table 2) [9,10].
The drug resistance survey conducted in Lesotho had multiple limitations that probably detracted from its reliability. The next drug resistance survey in Lesotho should be undertaken based on the lessons learned from this survey's implementation, including (a) collecting clinical data on a single form during the first patient visit, (b) transporting this form to the laboratory along with the available sputum samples, (c) assigning a unique patient DRS identifier consistently, (d) including HIV testing as per national guidelines, (e) planning for hazardous weather conditions, (f) repeating trainings throughout the survey operations with regular supervision and monitoring of DRS processes, and (g) entering data electronically in real time to facilitate ongoing enrolment at diagnostic facilities. | 2017-05-02T05:33:06.590Z | 2015-07-24T00:00:00.000 | {
"year": 2015,
"sha1": "4f7029786654b8f0a20258a3636dbd85188592d5",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0133808&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4f7029786654b8f0a20258a3636dbd85188592d5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255223314 | pes2o/s2orc | v3-fos-license | Oral Health and Pathologies in Migrants and Vulnerable Population and Their Social Impact: The Good Practices of the Intervention Model of a University Dental Clinic
Numerous studies have shown the high incidence of diseases affecting oral health in vulnerable populations. The Canary Islands is a region particularly affected by the low income of its inhabitants and a high migration rate. Poor oral health habits and limited access to health care have made these groups high-risk populations. The role of the Fernando Pessoa Canarias University dental clinic (CDUFPC) in the health care of these groups has been an example of good professional practice and a fundamental resource in their health care. The present study aims to identify the profile of pathologies, as well as the impact on oral health, of vulnerable population groups served by the CDUFPC. This study was developed between September 2019 and July 2022 with a sample of 878 patients, of whom 267 (30.4%) belonged to vulnerable groups referred by institutions and social organizations. The results identified the prevalence of dental caries as the main pathology, along with a lack of good oral habits and of commitment to oral health and care.
Introduction
The World Dental Federation (FDI) defines oral health as "multifaceted and includes the ability to speak, smile, smell, taste, touch, chew, swallow, and convey a range of emotions through facial expressions with confidence and without pain, discomfort, and disease of the craniofacial complex" [1]. Oral health is part of general health, according to the definition of health in [2], and is related to physical, emotional, psychological, and socioeconomic well-being. In this sense, [3] found oral health to be "the first step to well-being", not only as a determining factor of the quality of human life but also as a factor associated with the appearance of systemic pathologies that can affect the patient at both the individual and socio-health levels.
In Spain, as in Latin American countries, oral diseases are considered a public health problem because of their high prevalence. Among these diseases, malocclusions occupy third place in frequency, behind only dental caries and periodontal disease [4].
Common lesions and conditions, such as fibrous lesions, benign migratory glossitis, oral candidiasis (acute and chronic), papillomas, ulcers, and vascular lesions, are largely unknown to the general population because their clinical manifestations tend to go unnoticed in comparison with those of dental caries or periodontal disease; they are not usually given importance, despite their implicit risk and their consequences for general health.
Regarding the etiology of caries and periodontal disease, both are related to the action of bacterial plaque on the teeth and the supporting tissues that surround them: gingiva, the periodontal ligament that attaches them to the bone, and the bone itself [5][6][7][8]. Bacterial plaque is a biofilm composed of bacteria, saliva, food debris, and dead cells. In the case of dental caries, the action of the acids produced by these bacteria attacks the hard tissues that make up the tooth (enamel and dentin), causing their decalcification and subsequent destruction. If the acid action on these tissues advances to the deep and internal tissues of the tooth (intradental vasculonervous bundle or dental pulp), it will cause inflammation, necrosis, and infections [9].
Because we know the relevant role of bacterial plaque in the etiology of dental caries and periodontal disease, we understand the importance of education and awareness for the acquisition of hygiene and dental care habits. Oral hygiene is fundamental, especially from an early age, because of its importance in the consolidation of habits that translate into practices to achieve and maintain a healthy life [9]. In this sense, the population's level of knowledge about the risks of dental caries and periodontal disease continues to be low; dental caries is so common that it is not recognized as the infectious disease caused by microorganisms that it actually is [10]. According to [7], nearly 3.5 billion people worldwide have oral conditions, with caries being one of the most common conditions in permanent teeth and in children (520 million of whom suffer from caries in deciduous teeth). The high incidence and morbidity of these diseases represent a problem that generates enormous costs for the social and health care systems of different countries. In addition to affecting the oral health of those who suffer from them, these diseases accompany patients throughout their lives, causing pain, discomfort, disfigurement, and even death, as pointed out by [2]. The data are particularly striking in the case of vulnerable populations and ethnic minorities [11][12][13].
In Spain, the authors of [14] conducted a study to determine the state of oral health of the Spanish population, including adolescents and the elderly. The results indicated that 95% of the Spanish population was affected by caries and that this figure rose to 100% in older adults, which explained the total tooth loss in this population group. Among children under 6 years, 31% suffered from untreated dental caries in 80-90% of cases. In total, 30% of adolescents presented dental caries, while the percentage rose to 90% in young adults.
Considering the multifactorial origin of the two main diseases affecting the oral health of the world's population, we note that they are largely preventable conditions [15]. The consolidation of preventive oral health behaviors and the acquisition of healthy habits should begin at an early age to prevent caries and periodontal diseases from becoming established and remaining active in the oral cavity, causing premature loss of permanent teeth [9].
Although, as we have seen so far, the general quality of oral health of children, young people, and adults in Spain presents worrying figures, the situation of vulnerable groups of low socioeconomic status (ethnic minorities, migrants, or the rural population) has not aroused as much interest, as evidenced by the scarce research on the subject [22]. Among these vulnerable groups [23,24], homeless and/or poor people and migrants can be found [25][26][27][28].
The authors of [29] proposed a theoretical model, based on the model of [27], to explain the origin of this vulnerability, highlighting three fundamental aspects: lack of resources (low income, difficulty in accessing education, marginalization, scarce social support), unhealthy lifestyles, and high morbidity and mortality rates in relation to health status.
When we speak of vulnerability in relation to health, we refer to a population or group that is more prone than others to develop health problems, either because it is exposed to risk or because of its poor physical, psychological, or social condition, which makes it more sensitive to disease [27]. These groups have particular living conditions, poor oral hygiene habits, and a lack of knowledge of the short- and long-term health consequences. These characteristics predispose them to suffer from various conditions such as fibrous lesions, benign migratory glossitis, oral candidiasis (acute and chronic), papillomas, ulcers, vascular lesions, and others.
In its 2012 proposal for intervention policies, a scientific commission for the study of social inequalities in health in Spain indicated that social inequalities in health, low social and economic status, and scarcity of environmental resources were the origins of the vulnerability and manifestation of multiple diseases [30,31], including oral diseases.
The authors of [32] evaluated the link between socioeconomic status and oral health-related quality of life as a function of age. The findings indicated that the presence of oral problems in people at low socioeconomic levels is significantly higher than in those at medium or high levels.
A good example of this is the case of the Cartuja and Almanjáyar neighborhoods, two neighborhoods in Granada (Spain) whose oral health inequalities are attributed to socioeconomic differences, unhealthy habits, and poor diet [33]. The intervention proposal for the promotion of oral health conducted by the public health system, in coordination with an early childhood and primary education center in the area, resulted in a significant improvement in knowledge and modification of eating habits, as well as oral hygiene. After 18 months of program implementation, a significant increase in oral health knowledge and healthy food consumption was observed, which was clearly manifested in a decrease in the consumption of pastries and sugary soft drinks. Evidence of oral health interventions in disadvantaged socioeconomic contexts has demonstrated a positive effect on the improvement of knowledge and healthy habits.
Currently, a higher incidence of diseases affecting oral health has been reported in disadvantaged populations, with a high prevalence of pain/discomfort, loss of masticatory function, and aesthetic and phonatory problems [32][33][34]. The authors of [35] presented a longitudinal study of people over 50 years of age, analyzing a sample of 14,000 participants over two years, and noted the relationship between dental care and economic hardship in households. Families with economic difficulties sacrificed the oral health of all their members, confirming the modulating effect of economic status on oral health care. Most oral conditions have a multifactorial etiology (biological, social, economic, cultural, and environmental factors) [9,22]. Poor oral health habits and limited access to health care have been described as the main factors associated with the low quality of oral health and with the prevalence of dental caries and periodontal disease in this group [36][37][38][39][40].
When included in oral health care, oral education is the fundamental pillar of prevention, since it promotes changes in habits and behaviors that have a direct impact on the oral health of individuals, with important implications for the economic cost to society [2]. In this sense, the authors of [37] considered the need to intervene in three fundamental areas: (a) the frequency of use of dental services, (b) education in proper oral hygiene, and (c) reduction of the consumption of sugary foods. The vulnerable situation of migrants, refugees, and socially disadvantaged groups and their difficulties in accessing healthy, high-quality, low-sugar foods are a real handicap. Curious as it may seem, it is still believed that sugar nourishes and staves off hunger, and these groups have easy access to low-cost, highly sweetened, processed foods because they are much cheaper than better-quality foods [37].
The 2019 FDI General Assembly concluded with a policy statement on access to oral health care for vulnerable and underserved populations, which presented a commitment to lifelong access to adequate oral health care for the underserved and vulnerable population. This document recommended that dental schools provide special training to students on how to address the complex oral health conditions of these groups. This gave universities as training centers the opportunity to assume a relevant role in community education and to work with interdisciplinary teams in disadvantaged areas or areas at risk of social exclusion.
The University Dental Clinic of the Fernando Pessoa Canarias University (CDUFPC) is an educational institution committed to the Canary Islands through the dental care it provides during the clinical practice of students in the last two years of their dentistry degree.
Since it began providing clinical practice to its students in 2018, the Fernando Pessoa Canarias University (UFPC) has been an example of good professional practice and institutional commitment to the vulnerable population. These circumstances underscored the need to know the impact that the good practices of the CDUFPC have had on the detection of diseases such as fibrous lesions, benign migratory glossitis, oral candidiasis, papillomas, ulcers, and vascular lesions, as well as on their prevention in vulnerable populations.
Objective
The present study aims to identify the profile of pathologies as well as the impact on the oral health of vulnerable population groups who have difficulty accessing health care and are treated at the CDUFPC.
Sample
Purposive sampling was performed, and participation was voluntary. Patients were low-income individuals or immigrants. In the latter case, patients came to the clinic through specific collaboration agreements with the associations and/or institutions responsible for their care.
During the period in which this study was developed (from September 2019 to July 2022), the CDUFPC attended to a total of 878 patients, of which 267 (30.4%) belonged to vulnerable groups that accessed the clinic's services, referred by institutions and social organizations through an agreement with the UFPC. The sample size represented a confidence level of 95% with a margin of error of 5%.
In recent years, this has represented a real challenge and a commitment to social collaboration between the university and the organizations and institutions that welcome and help people from vulnerable groups (see Table 1). For data collection, participants were asked to sign an informed consent form or, in the case of minors under 18 years of age, to have their legal representative sign it in order to authorize access to their medical records and personal and clinical data. The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of the European Scientific Institute (ECESI) (protocol HESU12/2021).
Data Analysis
The aim of the descriptive analysis was to identify the profile of the people who attended and received care and treatment at the CDUFPC, as well as the set of pathologies and oral disorders they presented.
A clinical record was created and kept for each patient. In this record, the clinic coordinator recorded the visit, the diagnosis, and the treatment administered, as well as the surgeries or interventions performed in each case. The records were entered into a spreadsheet and then exported and analyzed using the Statistical Package for the Social Sciences version 25 (SPSS, v.25, Chicago, IL, USA).
The incidence of the treatments and their impact on the awareness and maintenance of restored oral health were analyzed. In addition, observations were made on the presence of oral mucosal pathologies and general dental health.
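As a minimal sketch of this descriptive workflow, the following Python snippet (using pandas in place of SPSS) shows how treatment frequencies and cross-tabulations of the kind reported below can be computed; the column names and the five illustrative records are hypothetical and do not reflect the CDUFPC's actual record layout.

import pandas as pd

# Hypothetical extract of the clinic's spreadsheet; columns and values
# are illustrative only, not the CDUFPC's actual records.
records = pd.DataFrame({
    "age_group": ["15-20", "15-20", "41-70", "21-40", "41-70"],
    "origin": ["Morocco", "Senegal", "Spain", "Morocco", "Spain"],
    "treatment": ["conservative", "periodontics", "prosthesis",
                  "conservative", "oral surgery"],
})

# Frequency and percentage per treatment category, mirroring the
# descriptive analysis reported in the Results section.
counts = records["treatment"].value_counts()
summary = pd.DataFrame({"n": counts,
                        "pct": (100 * counts / len(records)).round(1)})
print(summary)

# Cross-tabulation of age group by treatment, as a basis for the
# pathology profile by population subgroup.
print(pd.crosstab(records["age_group"], records["treatment"]))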
Results
We treated 8 (3%) pediatric patients between 1 and 14 years of age, 223 (83.52%) adolescent patients between 15 and 20 years of age, 17 (6.37%) young adults between 21 and 40 years of age, and 19 (7.11%) adults between 41 and 70 years of age. See Figure 1. The group of pediatric and adolescent patients was made up entirely of migrants, mostly from the African continent (Morocco, Ivory Coast, Gambia, Guinea, Mali, Sierra Leone, Senegal, Mauritania, Cameroon, and Nigeria). The group of young adults was made up of 10 migrants and 7 people referred by the social services of the municipalities belonging to the northern commonwealth of the island of Gran Canaria. The group of adults aged between 41 and 70 years consisted mainly of Canarian patients referred by the social services of the municipalities of Gáldar, Guía, Arucas, and Valleseco, as well as one patient from Senegal. Table 2 shows the origin of the patients treated, regardless of their age. Regarding the treatments received, after individual study of each case, 41% involved conservative dentistry (removal of caries of greater and lesser magnitude and obturations), 20% periodontics (professional oral cleaning treatments, oral hygiene education, and basic gum treatment), 21% oral surgery (dental extractions), 11% endodontics (removal of the dental pulp and subsequent filling and sealing of the pulp cavity with an inert material), and 7% removable prostheses (custom-made oral prostheses that can be removed and put in place to replace missing teeth). See Figure 2. Of note was the detection of 37 lesions related to oral mucosal pathologies that were diagnosed in the adult group (41-70 years of age) and included fibrous lesions, benign migratory glossitis, black hairy tongue, intraoral lesion due to HSV, gingival enlargement due to drugs, florid cemento-osseous dysplasia, and mucosal ulceration with bone sequestration. See Table 3. It should be noted that in most cases, combined treatments were administered. For example, all patients underwent tartrectomy, excluding only the totally edentulous.
Of the 878 patients seen during this study, 135 treatments were performed with the fabrication of removable prostheses, and only 11 corresponded to people from disadvantaged groups. In the case of removable prosthesis treatments, despite the needs detected, few patients completed their treatment, which demonstrates a low commitment to dental health care. Attendance at the clinic was more a condition for the maintenance of other economic support than a real commitment to oral health care. Only 8.1% of the vulnerable patients who came for treatment with removable prostheses remained in the program.
Discussion
In order to determine the scope of the CDUFPC's clinical and social work, it should be noted that the academic year 2021-2022 was the first in which the fifth year of the degree in dentistry at the UFPC was completed; most of the clinical practice hours that students in training spent with real patients fell in this last year. It has been difficult to follow up on the migrant patients referred to the CDUFPC by centers and institutions; their administrative situation and a lack of interest in the care and maintenance of oral health have been a handicap. Many migrants passed through the dental clinic sporadically or in isolation because, in most cases, they were transferred to centers in other parts of Spain. The pressure from the high occupancy of the migrant reception centers in the Canary Islands makes this a frequent practice; in addition, many of these people were deinstitutionalized or placed back into society early in an attempt to relieve the pressure on the centers and institutions, which meant that they never received even a first evaluation or treatment of their oral pathologies.
Morocco was the main country of origin of the patients treated; according to recent studies, Moroccan migrants, especially women, have the worst oral health of the nationalities migrating to Spain. Moroccan women reported a greater impact of oral health on quality of life and presented greater problems in terms of physical pain and psychological discomfort [39][40][41][42][43].
Studies of the Moroccan population in different geographical contexts [30,44] showed that the migrant population reported poorer oral health indicators than the Spanish population. Oral health problems are a public health concern because of their magnitude, severity, and consequences and because they affect people's quality of life. These problems are multifactorial, with biological, social, and contextual determinants, which were evidenced in the sample of our study [9,29]. In this sense, the findings derived from the Longitudinal Studies of Immigrant Families Project (PELFI) showed how migrants were at a social disadvantage and presented fundamental deficiencies related to dental care and hygiene habits as well as the quality and type of food. Exposure to risk factors has negative effects on oral health that are more evident in these groups than in non-migrants. The incidence of oral pathologies is proportionally higher in the vulnerable population, as evidenced by the data of this study, given that they generally make less use of oral health care because of economic, social, and educational factors (poor training in health care and promotion) or even, in a significant number of cases, because of a lack of commitment and disinterest in oral care. Data provided by [10] indicated an incidence of dental caries of 56.7% in foreign adults between 35 and 44 years of age, compared to 36.7% in the Spanish population.
It is noteworthy that young people presented alterations whose solution involved conservative treatment, as opposed to older patients whose lesions or pathologies required prosthetic treatment because of missing teeth. In the same sense, the data indicated that the group of migrant patients with conservative dentistry requirements coincided with those reported in the PELFI report, which highlighted the poor oral hygiene education in their countries of origin [37,38].
In the case of older migrants aged 41-70 years who required dental prostheses, high mobility prevented them from undergoing long treatments, such as the fabrication of dental prostheses. This fact was clearly reflected in the data of this study, where only 4.4% of the 250 patients with partial or total edentulism who came for an estimate ultimately underwent removable prosthesis treatment. These data seem consistent with those provided by [45][46][47].
Other studies focused on the African population [45,48] have indicated the need to improve oral hygiene education and healthy eating. We should bear in mind that the efforts made by the social and health administrations should be joint if we want to have an impact on greater access to social and health care, including oral health care.
In parallel and coinciding with the results of our study, the studies of [46,47] with the Moroccan population indicated that 90% of children and adolescents had caries. The results associated these data with a low socioeconomic level that generates inequality, evident in both the prevention and treatment of dental pathology.
Regarding differences in access to oral health care by country, a 2018 meta-analysis [49] indicated that sub-Saharan Africa was the region with the least access to oral health care; in increasing order of access, it was followed by Southeast Asia, South America, North Africa, Asia, Europe, North America, Oceania, and the Scandinavian region. In some countries, oral health care includes the prevention and treatment of existing diseases, while in others it is limited to prevention only; in many others, there are no oral health programs that include even education or awareness of oral care. The CDUFPC, through its social and health care program for the oral health of migrants and vulnerable populations, has been carrying out fundamental prevention and awareness work on oral health care. This is essential to improving the quality of life of vulnerable groups, whose chances of receiving care would be practically nil were it not for this type of initiative and program.
Living conditions change radically for people who migrate to another country, especially in the case of people who do not have a support network (family and friends) in the receiving countries. In those countries where there is a support network for these people, prevention and treatment of oral pathologies are considered especially important. As for the sample of our study, the main problems they presented were caries, periodontal disease, oral pain, problems with removable prostheses, gingivitis, and dry mouth [50].
Oral health should be a priority of health care policies, which currently fail to provide continuity and stability of treatment. The lack of economic resources, of healthy oral hygiene habits, and of awareness among vulnerable populations of the impact of oral health on quality of life are the main challenges for the care policies that must be developed, especially for the most vulnerable groups.
In 2019, the number of migrants reached 272 million, 51 million more than in 2010 [51,52]. We need institutional involvement for the promotion of policies and actions to improve social and healthcare services for the health and quality of life of this population [9,17,19,29,53].
Along these lines, the CDUFPC has been developing monitoring programs to determine the adherence of vulnerable patients to oral health care, to provide oral health care for institutionalized elderly people, to promote changes in the oral health care habits of people at risk of social exclusion, and to raise awareness of the need to monitor and control oral health in vulnerable populations. The actions developed by the CDUFPC exemplify the preventive work necessary for improving quality of life, not only for people in general but especially for vulnerable groups. Investment in the implementation of preventive oral health policies is essential if we want to offer people acceptable aging and quality of life.
Conclusions
Initiatives such as the one developed by the CDUFPC during its health care and sanitary work are essential, given the high number of migrants and vulnerable groups on the island of Gran Canaria and the lack of human and economic resources regarding access to oral health care provided by the public health system, which is particularly insufficient and limited in addressing the needs of vulnerable groups.
The CDUFPC carries out important socio-healthcare work by attending to migrants and vulnerable groups on the island of Gran Canaria.
The deinstitutionalization or transfer of centers for migrants prevents the control and follow-up of the treatment given to migrant patients.
The group of people between 15 and 20 years of age from Morocco was found to be the group with the highest incidence of oral pathologies (dental caries and periodontal diseases). They also presented fundamental deficiencies related to dental care and hygiene habits, as well as those related to food quality and type.
The highest number of oral pathology cases attended at the CDUFPC was recorded in 2019, the period prior to the COVID-19 lockdown. The lockdown, together with the decrease in migratory movements during this period, explained the subsequent drop in attendance, which became limited to patients suffering from severe pathologies.
It is essential to implement training and awareness programs on oral care and prevention of oral diseases, especially in the case of migrants and vulnerable populations.
Study Limitations
Further studies with large samples are needed to confirm the trend of our results in relation to both access to oral health care and awareness of oral health care.
This was the first study conducted in the Canary Islands on the oral health of the vulnerable population. The CDUFPC is the first educational institution to offer comprehensive oral health care treatments for the migrant population in the Canary Islands. It is a relatively young university, having graduated its first class in dentistry in the 2021-2022 academic year, and because it is a new program with a new faculty, weaknesses have been detected in the control and identification of specific aspects of patient anamneses. With the intention of improving and completing this specific information, the research team has developed a clinical protocol that will enhance both the information and the diagnoses of the patients.
Program stability in following up on the efficacy and controlling the evolution of the treatments was one of the main difficulties of this study. We are aware that the conditions and times of access to the program are limited, but we believe that it is possible to implement a follow-up program that allows migrants to continue their treatment and control in the locations to which they are transferred. In this sense, the university is preparing a protocol with health institutions that will allow collaboration in the follow-up of these patients for the maintenance of restored oral health, control of hygiene measures, evaluation of the health of patients who have received treatment, and the continuation and completion of all treatments required by the patient.
Informed Consent Statement:
Written informed consent has been obtained from patients to publish this work.
Data Availability Statement: Data from the study are available from the lead author of this study upon request. | 2022-12-29T16:05:57.244Z | 2022-12-26T00:00:00.000 | {
"year": 2022,
"sha1": "8fbda691a228fbe39ec2332b37b5d1dd88e649f4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/20/1/353/pdf?version=1672045994",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1eeffa497aedfc4cc6ecdb0fd46f764c47dbac29",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222068257 | pes2o/s2orc | v3-fos-license | Knowledge, attitudes, and perceptions regarding the future of artificial intelligence in oral radiology in India: A survey
Purpose This study investigated knowledge, attitudes, and perceptions regarding the future of artificial intelligence (AI) for radiological diagnosis among dental specialists in central India. Materials and Methods An online survey was conducted consisting of 15 closed-ended questions using Google Forms and circulated among dental professionals in central India. The survey consisted of questions regarding participants' recognition of and attitudes toward AI, their opinions on directions of AI development, and their perceptions regarding the future of AI in oral radiology. Results Of the 250 participating dentists, 68% were already familiar with the concept of AI, 69% agreed that they expect to use AI for making dental diagnoses, 51% agreed that the major function of AI would be the interpretation of complicated radiographic scans, and 63% agreed that AI would have a future in India. Conclusion This study concluded that dental specialists were well aware of the concept of AI, that AI programs could be used as an adjunctive tool by dentists to increase their diagnostic precision when interpreting radiographs, and that AI has a promising role in radiological diagnosis.
Introduction
Artificial intelligence (AI), in simple terms, can be defined as the acquisition of intelligence by computers or machines to perform tasks that normally require human intelligence. 1,2 A few examples of such tasks are speech recognition, decision-making, and medical diagnosis. A subset of AI, machine learning, can be used to teach machines and computers to analyse certain types of data using various algorithms. 3 AI programs have been developed to analyse data collected from a diverse range of sources, and AI systems have been widely used in the manufacturing sector, the stock market, the medical field, and meteorology, among other domains. 1,2 India is a technologically advancing country that has yet to reach its full potential. Among the age group of 18-60 years, 70% of people use mobile phones in India, while 87% of 1.3 billion Indians have access to an internet connection. Many people, including doctors and scientists, are not yet familiar with the concepts and true potential of AI, and the impact it can have on both our personal and professional lives.
The clinical use of AI programs in the medical profession has gained popularity over the last few years, and its possible applications in dentistry also need proper attention. Applications of AI programs in dentistry are quite interesting, especially in radiology, and AI can be a boon for novice dental practitioners. AI programs can help in the tracing of cephalometric landmarks; in the detection of caries, alveolar bone loss, and periapical pathosis; the auto-segmentation of the inferior alveolar nerve; the analysis of facial growth, and other similar tasks. 4 Studies have reported the use of AI in the early screening of oral cancer and cervical lymph node metastasis, as well as in the diagnosis and treatment planning of various orofacial diseases. [4][5][6][7] Nonetheless, stakeholders' opinions vary regarding the future of AI. While many think that AI will create many opportunities in the fields of medicine and dentistry and will pave a new way towards a great future, others still believe that AI is unreliable and will not even be able to replace radiologists in the future. 8 This study presents the responses of dentists from central India to a survey regarding their knowledge, attitudes, and perceptions regarding the future of AI.
Materials and Methods
This study was approved by the institutional ethics committee. Interested participants entered basic details about themselves and then filled out the questionnaire. Responses were made on a single webpage with a single "submit" button that allowed only one submission through a unique link.
The survey consisted of a questionnaire regarding respondents' recognition and attitudes towards AI and the possible future of AI in radiological diagnosis ( Table 1). The questionnaires were broadly divided into 3 sections (knowledge, attitudes, and future). The first part of the survey asked 4 questions about respondents' fundamental knowledge of AI. The second part of the survey contained 4 questions inquiring about dental specialists' current attitudes towards AI. The last part of the survey asked 7 questions about the possible future of AI in radiological services among dentists in India.
The data collected were statistically analysed using PASW Statistics for Windows, version 18 (SPSS Inc., Chicago, USA). The level of significance was set at p<0.05. The chi-square test was applied and frequency distributions of responses (i.e., the percentage of respondents who agreed) were presented as bar and pie charts.
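As a minimal illustration of this analysis, a chi-square test on the response distribution of a single survey item could be run as follows in Python (scipy); the response counts below are hypothetical stand-ins for the per-item distributions reported in the figures, and a uniform expected distribution is assumed for the goodness-of-fit test.

from scipy.stats import chisquare

# Hypothetical response counts for one survey item (n = 250 dentists).
labels = ["agree", "not sure", "disagree"]
observed = [171, 45, 34]

# Chi-square goodness-of-fit against a uniform expectation across the
# response options, with significance judged at p < 0.05 as in the study.
stat, p = chisquare(observed)
total = sum(observed)
for lab, n in zip(labels, observed):
    print(f"{lab}: {n} ({100 * n / total:.0f}%)")
print(f"chi-square = {stat:.2f}, p = {p:.4g}, significant = {p < 0.05}")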
Results
A total of 250 dental specialists completed the survey questionnaire.
Knowledge: There was a remarkable knowledge of AI among dentists. Of the 250 respondents, 171 (68%) were already familiar with the AI framework (Fig. 1). Although 181 dentists (72%) accepted that AI has useful medical applications, only 106 (42%) had a basic understanding of how to integrate AI into their dental practice. Moreover, 136 dentists (55%) accepted that AI can speed up the healthcare system and minimize errors and can provide a large quantity of high-quality data without emotional or physical fatigue in a timely manner (Fig. 1).
Attitude: Most dentists (87%) would like to use software that would be useful for radiological diagnosis (Fig. 2). Although 15% of dentists fully agreed that AI can make better diagnoses than a human doctor, 45% were not sure. In the event of a difference of opinion in diagnosis, only 7% of participating dentists stated that they would follow the AI's prediction, while 59% would rely on their own ability to diagnose and 34% were not sure. Furthermore, 168 of the participating dentists (67%) stated that they would recommend AI to their fellow practitioners (Fig. 3).
(Figure survey items: "Are you familiar with the concept of artificial intelligence (AI) and its uses?"; "What according to you are the advantages of using artificial intelligence (AI)?")
Fig. 2. Perceptions regarding artificial intelligence (AI). Survey item: "Would you like to use a software/program that can be helpful in radiological diagnosis?"
Future: One hundred seventy-one dentists (68%) believed that AI will come to the rescue to evaluate minute details on X-rays that they sometimes miss (Fig. 4). A total of 172 dentists (69%) agreed that they would use AI for dental diagnosis and treatment planning and 181 (72%) stated that they would utilise AI algorithms for medical diagnosis in the near future. One hundred twenty-six (51%) dentists agreed that the key function of AI is to interpret complicated radiographic scans, and 72 (29%) indicated that AI would be valuable for diagnostic purposes, while 29 (12%) and 21 (8%) of the participating dentists agreed that AI would be used for making treatment decisions and direct treatment, respectively (Fig. 5). One hundred dentists (40%) favoured the use of AI in specialised dental clinics (centres for radiology, prosthodontic clinics, and orthodontic clinics), 77 (31%) approved of the use of AI at university hospitals, 49 (20%) at public health centres and 24 (9%) for primary care at private clinics. A total of 157 (63%) dentists believed that AI has a future in India, while 161 dentists (64%) agreed that AI will help budding dentists in their diagnosis and decision-making.
Discussion
This was the first survey-based study of the knowledge, attitudes, and perceptions regarding the future of AI among the dental community in India. Recognition of AI among the dental community was quite high, and most of the dentists were familiar with it. Most of them agreed that AI will be beneficial in dentistry, and 51% stated that it will help in interpreting complicated radiographic scans while speeding up the processes. A survey-based study by Oh et al. 1 was conducted among Korean medical practitioners to determine their awareness of and attitudes towards AI. Of the 669 participants in that study, only 6% were familiar with the concept of AI, 83.4% agreed that AI is useful in the medical field, and 43.9% agreed that the diagnostic ability of AI is superior to that of humans. The respondents stated that the advantages of using AI are its ability to quickly obtain vast amounts of clinically relevant, high-quality data in real time (62.3%), speed up processes in health care (19.1%), and decrease the number of medical errors (9.6%).
(Figure survey items: "Do you think artificial intelligence has a future in dentistry in India?"; "Will you recommend fellow practitioners to implement artificial intelligence in their clinical practice?")
Fig. 4. Evaluation of radiographs by artificial intelligence (AI). Survey item: "Do you agree artificial intelligence will help to evaluate minute details in radiographs which sometimes are missed by practitioners?"
Fig. 5. Utility of artificial intelligence (AI) in dentistry. Survey item: "In which field of dentistry do you think AI will be most useful?"
In the current study, 68% of dentists were familiar with the concept of AI, 69% were optimistic that AI can be used in diagnosis and treatment planning, and 63% affirmed that AI has a future in India. The majority (68%) of dental specialists agreed that AI will be useful in evaluating minute radiographic details missed by practitioners, and 64% stated that AI will help budding dentists make radiological diagnoses. Pakdemirli 9 stated that AI has been a source of great innovation, a prominent topic of discussion within radiological societies, and the subject of ground-breaking research in recent years. It is promising for the future of healthcare; despite its risks and potential quality assurance issues, tremendous changes are sure to occur in how radiological services are delivered. This study revealed that 63% of dentists were sure of AI having a future in India and 51% of dentists agreed that the major beneficial task of AI is the interpretation of complicated radiographic scans. Similarly, Hwang et al. 4 reported that the diagnostic accuracy of deep learning algorithms in the medical sector is reaching human standards, transforming aided diagnostics from a "second opinion" method into a more interactive process. Hosny et al. 10 stated that AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. Similarly, Wong et al. 11 suggested that AI has the potential to change the landscape of modern clinical radiology and that it will be necessary to keep up with future developments.
The higher efficiency provided by AI will allow radiologists to perform more value-added tasks, becoming more visible to patients and playing a vital role in multidisciplinary clinical teams. 12 Park et al. 13 stated that the use of AI is expanding quickly beyond text-based and image-based dental work, and as the use of AI in the entire medical field increases, the role of AI in dentistry will also greatly expand.
Mupparapu et al. 14 pointed out that dentists could benefit from the added luxury of having a second opinion in nanoseconds using AI technologies that could bolster the diagnosis and eventually help patients, and stated that the intention of AI was perhaps never to replace healthcare providers.
Dentists may have varying opinions regarding the utility of AI. The main limitation of our study is the limited number of participants, who were mainly specialists practicing in central India; therefore, further studies should be carried out at a larger scale to increase the statistical accuracy.
In India, dentists favoured AI and agreed that it can aid in radiological diagnoses. Given its promising potential, most respondents agreed that it can assist in the analysis of complex radiographic scans, improve diagnostic precision, minimize errors, and potentially lead to more precise and reliable detection of various maxillofacial disorders. In the future, more studies should be carried out with larger samples to validate the accuracy and usefulness of AI programs in various dental specialties. | 2020-10-01T05:06:40.693Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "bead574cbc78454a4cb0cd6a67f5ddfaa7aa4012",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5624/isd.2020.50.3.193",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bead574cbc78454a4cb0cd6a67f5ddfaa7aa4012",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
6268494 | pes2o/s2orc | v3-fos-license | Training US health care professionals on human trafficking: where do we go from here?
ABSTRACT Some 21 million adults and children are labor-trafficked or sex-trafficked through force, fraud, or coercion. In recognition of the interface between trafficking victims and the healthcare setting, over the last 10 years there has been a notable increase in training of health care professionals (HCPs) on human trafficking (HT) and its health implications. Many organizations have developed curricula and offered training in various clinical settings. However, methods and content of this education on trafficking vary widely, and there is little evaluation of the impact of the training. The goal of this study was to assess the gaps and strengths in HT education of HCPs in the US. This mixed-method study had two components. The first component consisted of structured interviews with experts in human trafficking HCP education. The second portion of the study involved an analysis of data from HCP calls to the National Human Trafficking Resource Center (NHTRC). The interviews captured trainer-specific data on types of HT training, duration and frequency, key content areas, presence of evaluation approaches and indicators, as well as an assessment of barriers and strengths in HT training for HCP. NHTRC call database analysis demonstrated increasing trends since 2008 in calls by HCPs. Overall findings revealed the need for standardization of HT training content to assure correct information, trauma-informed and patient-centered care, and consistent messaging for HCPs. Evaluation metrics for HT training need to be developed to demonstrate behavior change and impact on service delivery and patient-centered outcomes for HT victims, according to our proposed adapted Kirkpatrick’s Pyramid model. HT training and evaluation would benefit from an agency or institution at the national level to provide consistency and standardization of HT training content as well as to guide a process that would develop metrics for evaluation and the building of an evidence base. Abbreviations: AAP: American Academy of Pediatrics; ACF: Administration for Children and Families; CME: Continuing medical education; ED: Emergency department; HCP: Health care professional; HEAL: Health, Education, Advocacy, and Linkage; HHS: United States Department of Health and Human Services; HT: Human trafficking; IOM: United States Institute of Medicine; MH: Mental health; NHTRC: National Human Trafficking Resource Center; SOAR: Stop, Observe, Ask, and Respond to Health and Wellness Training
Introduction
Some 21 million adults and children are labor-trafficked or sex-trafficked through force, fraud, or coercion [1]. Visibility of this global phenomenon of human trafficking (HT) has reached the attention of health care professionals (HCPs) [2] as they provide care to victims of HT in hospital emergency departments [3][4][5], community health centers [6][7][8], migrant and refugee care centers, and adolescent health care centers [9]. In recognition of this interface between trafficking victims and the healthcare setting, over the last 10 years there has been a notable increase in training of HCPs on HT and its health implications. Many organizationsfederal, state, local, non-governmental, academic, and professional societieshave developed curricula and offered training in various clinical settings [10]. However, methods and content of this education on trafficking vary widely, and there is little evaluation of the impact of this training.
Current state of training on human trafficking
Training on human trafficking for HCPs has grown in parallel with the literature on the health effects of trafficking. A 2003 European study was the first to demonstrate the health risks and consequences of trafficking in women and adolescents [11]. In the years following that publication, more evidence has emerged about the health consequences of HT, the types of HCPs encountering trafficking victims, and the gaps in HCP knowledge about the problem [12][13][14]. In recognition of the importance of medical education on trafficking, medical professional societies and academics have called for HT awareness among family medicine practitioners [15], midwives [16], nurses [17,18], dentists [19], pediatricians [20], emergency department (ED) physicians [3][4][5] obstetricians-gynecologists [21], psychiatrists [22], and public health practitioners [23]. The American Academy of Pediatrics (AAP) has placed HT training in the top ten policies to be supported by its Board [24]. The AAP has further developed guidelines for pediatricians who may encounter victims of child sex trafficking and commercial sexual exploitation in their health care settings [9]. The American Academy of Family Physicians passed a resolution for HT awareness and education for practitioners of family medicine [25] as has the American College of Emergency Physicians [26]. Similar statements have been generated by the American Medical Association to encourage member groups and sections, as well as the Federation of State Medical Boards to raise awareness about HT and inform physicians about the resources available to aid them in identifying and serving victims of HT [27]. Moreover, the United States (US) Institute of Medicine (IOM), National Academy of Medicine, has released guidelines for confronting the commercial sexual exploitation and sex trafficking of minors [28]. Across the globe, academic medical centers and nonprofit organizations have undertaken initiatives to educate healthcare providers. For example, the International Organization for Migration, the National Human Trafficking Resource Center, the Massachusetts Medical Society, Children's Healthcare of Atlanta, Mount Sinai Emergency Medical Department, American Medical Women's Association, and Christian Medical and Dental Associations, have developed curricula on HT for HCPs [10]. In order to unify these national efforts, HEAL (Health, Education, Advocacy, and Linkage) Trafficking, a network of interdisciplinary professionals working on the intersection of public health and trafficking was founded in 2013. In addition to connecting the health experts working on HT, they have created an online compendium of medical literature and educational resources for HCPs on HT, and its Education and Training Group has worked closely with federal efforts on the topic [29].
Federal action on training on human trafficking
In 2008, the Office of the Assistant Secretary for Planning and Evaluation (ASPE) in the US Department of Health and Human Services (HHS) acknowledged that there were challenges, barriers, and promising practices in addressing the needs of victims of HT [30]. In 2010, the ASPE released an issue brief, 'Medical Treatment of Victims of Sexual Assault and Domestic Violence and Its Applicability to Victims of Human Trafficking', which outlined several recommendations, including the need for comprehensive screening practices, the importance of examining protocols, and the content of effective training [31]. Since the 2008 ASPE National Symposium on the Health Needs of Human Trafficking Victims, HHS has committed to the Federal Strategic Action Plan (SAP) on Services for Victims of Human Trafficking in the United States (US) [32]. The call for more training has led to recent proposals by advocates to the US Congress to support HT training for HCPs. The Trafficking Awareness Training for Health Care Act of 2015, initially proposed in 2014 (HR 5411), will complement HHS's anti-trafficking efforts to engage the health care community by increasing information, awareness, and training for HCPs, not only in hospitals and community clinics but also in health professions schools, including schools of medicine, nursing, dentistry, and social work. The Trafficking Awareness Training for Health Care Act of 2015 was passed as part of the Justice for Victims of Trafficking Act of 2015 [34]. Also in 2015, S.1446, the SOAR to Health and Wellness Act of 2015, was proposed, which would expand and further codify the SOAR training, diversify the reach of the facilities and individuals to be trained, and increase the types of training offered [35].
Current status of evaluation of human trafficking training for healthcare providers
There have been few evaluation studies of HT training for HCPs. Most studies have conducted pre-testing and immediate post-testing; a few examples are provided here. As part of a randomized control trial, emergency medical providers in major pediatric hospitals in the US San Francisco Bay area were trained in HT and then evaluated following that brief educational intervention. The results showed increased ED provider knowledge and self-reported recognition of HT victims [14]. In another study, using curriculum from Caring for Trafficked Persons, a handbook developed by the International Organization for Migration (IOM) [36], training was conducted in seven countries in three regions (the Middle East, the Caribbean, and Central America) [37]. The authors identified training needs and misperceptions among HCPs about HT, which would be useful in designing further programs for the identification, care, and referral of HT victims. The latter study looked at the training of HCPs in low- and middle-income countries, but not in high-resource countries. As part of its SOAR to Health and Wellness Training, HHS conducted a pilot series of HT trainings for 180 HCPs in six US cities in 2014 and then evaluated the training with a pre-test, post-test, and follow-up at three months; participants across sites demonstrated a statistically significant increase in knowledge and attitude change on post-presentation evaluation [38]. Other recent studies have shown that HCP behavior changes as a result of education; HCPs with training in HT were more likely to report HT, to have encountered a victim in their practice, and to have greater confidence in their ability to identify victims [20].
Methodology
The goal of this study was to assess the gaps and strengths in HT education of HCP in the US. This mixed-method study had two components. The first component consisted of structured interviews with experts in human trafficking HCP education. The second portion of the study involved an analysis of data from HCP calls to the National Human Trafficking Resource Center (NHTRC).
Interview analysis
For the expert interview portion, a convenience sample of 24 US-based experts was identified through snowball recruitment within the HEAL national network. All interviewees were actively engaged in HT education of HCPs for at least two years. All 24 individuals representing various US organizations and institutions were contacted initially by email explaining the nature of the study, how they were chosen, and inviting them to engage in a phone interview guided by a questionnaire on the topic of HT training for HCPs. All potential interviewees were instructed that their participation would be voluntary, not incentivized, and that information gathered would be anonymous. The interview captured trainer-specific data on types of HT training, duration and frequency, key content areas, presence of evaluation approaches and indicators as well as an assessment of barriers and strengths in HT training for HCP (Appendix 1). The interview questions were open-ended and the interview was conducted via telephone. Of the 24 invitees, 11 (46%) participated in phone interviews between May and June 2015. The non-responders among the original 24 had been contacted twice for participation; when no response was forthcoming, their names were excluded. The interviews lasted 40-70 minutes and detailed notes were taken during the interview. The data were analyzed for trends, and conclusions were drawn based on the composite sample. A content analysis to determine common themes was performed on the open-ended question portion of the expert interviews [39].
NHTRC database analysis
In 2007, supported by grants from HHS, Polaris was charged with establishing and maintaining the NHTRC hotline, available to any caller in over 200 languages (interpretation via tele-translation services). The NHTRC provides information about anti-human trafficking resources to victims, concerned citizens, and service providers, including HCPs. In particular, a HCP may call the hotline for guidance about how to screen for trafficking as well as to find local resources for a potential trafficking victim. Call information captured typically includes the state in which the caller is located, the caller's self-reported category or profession (e.g., medical professional, mental health professional, law enforcement, victim, or community member), the reason for calling (e.g., reporting a tip, referral, or general information), and the 'awareness method' of the caller (how they knew about the hotline, e.g., internet, prior knowledge, training, or word of mouth). Analysis of HCP calls to the NHTRC center provides an aggregate sense of the national trends in HCP awareness and behavior.
This project queried the Polaris Hotline database, through collaboration and support of Polaris staff, to determine various characteristics of the calls, such as overall trends in calls, including that for HCP and Mental Health (MH) providers, HCPs' reasons for calling the hotline, and geographic trends. The data were shared in an aggregated, de-identified manner and represent numbers of signals in the form of phone calls, emails, and online tip reports received by the hotline. Of note, the timeframe for call analysis begins 1 January 2008 as this is the time period for comprehensive Polaris data collection. MH provider calls were first tracked starting in 2012, so the data for this subgroup of HCPs are only available from 2012 onwards.
The authors conducted three levels of data analysis on the NHTRC data: (1) total annual hotline calls from 2008-2014 were compared to those by HCPs (including MHs); statistical analysis using SAS© 64-bit 9.4 statistical software was used to analyze differences in call volumes between total calls and HCP calls; (2) the nature of HCP calls over that same time period; and (3) geographic trends in HCP calls, including calls in the three months following the HHS SOAR initiative pilot training.
For the statistical analysis in stage 1, we first assessed whether there was a change per year in the number of HCP calls and in the overall number of calls. We used a log-linear regression model of the form log(y_t) = α + β·t + ε_t, where y_t = number of calls in year t, and t = 0, 1, ..., 6 for years 2008-2014. The parameter 100% × [exp(β) − 1] can be interpreted as the rate of change per year in the number of calls.
To assess differences in rates of change between HCP calls and non-HCP calls over time, we created a dataset with 14 observations corresponding to each combination of year (2008-2014) and type of call (HCP/non-HCP). We then ran the regression model log(y_xt) = β_0 + β_1·x + β_2·t + β_3·(x·t) + ε_xt, where x = 1 for HCP calls and 0 for non-HCP calls, t = years since 2008 = 0, 1, ..., 6, and y_xt = number of calls of type x in year t. The expression 100% × [exp(β_2) − 1] can be interpreted as the rate of change per year in non-HCP calls.
The expression 100% × [exp(β_2 + β_3) − 1] can be interpreted as the rate of change per year in HCP calls.
Thus, to test whether the rate of change in HCP calls is the same as for non-HCP calls, we tested the null hypothesis H_0: β_3 = 0.
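The paper reports fitting these models in SAS 9.4. The sketch below, written in Python with statsmodels and using entirely made-up call counts, is only meant to illustrate the structure of the interaction model and the interpretation of the coefficients described above; the variable names and numbers are hypothetical, not the NHTRC figures.

# Illustrative sketch only: not the authors' SAS code, and the counts are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical yearly call counts for 2008-2014 (t = years since 2008)
df = pd.DataFrame({
    "year_index": list(range(7)) * 2,
    "hcp": [0] * 7 + [1] * 7,          # x = 1 for HCP calls, 0 for non-HCP calls
    "calls": [9000, 11000, 13500, 16000, 19000, 23000, 27000,   # non-HCP
              150, 180, 220, 260, 330, 420, 520],               # HCP
})

# log-linear model with a type-by-time interaction:
# log(calls) = b0 + b1*hcp + b2*t + b3*(hcp*t)
model = smf.ols("np.log(calls) ~ hcp + year_index + hcp:year_index", data=df).fit()

b2 = model.params["year_index"]
b3 = model.params["hcp:year_index"]
print(f"Yearly change, non-HCP calls: {100 * (np.exp(b2) - 1):.1f}%")
print(f"Yearly change, HCP calls:     {100 * (np.exp(b2 + b3) - 1):.1f}%")
print(f"p-value for equal rates (H0: b3 = 0): {model.pvalues['hcp:year_index']:.3f}")

The interaction coefficient b3 carries the comparison of interest: a non-significant b3, as reported in the Results below, means the HCP trend cannot be distinguished statistically from the non-HCP trend even if its point estimate is larger.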
Results
Findings are categorized in two areas: results of interviews and results of the Polaris database analysis.
Interview analysis
The results of the study were derived from interviews of eleven individuals currently conducting US-based training on HT for HCPs. Interviews revealed that the experience, approach, and content of such training varied widely. Two organizations had recognized the need for such training as early as 2002 and began that year, whereas others started HT courses as late as 2014. One quarter (27%) had been conducting HT training for HCPs since 2012. All training included core content, such as the global estimates of prevalence of HT, risk factors for HT, characteristics of victims and their traffickers, and basic identification of signs and symptoms of a possible HT victim within a health care setting. Case studies were commonly used to illustrate the key points. Most HT training material was developed in-house (91%), with some reliance on materials and resources developed by others.
Training venues included hospital-based medical grand rounds (55%), organizational offices (36%), as well as national medical and professional specialty conferences. The format was usually a live presentation, but in some instances training utilized on-site video sessions [40], on-demand internet-based slide presentations, and live webinars. The number of participants at the training sessions ranged from 15 to 700. Most (73%) chose to provide on-site training in order to maximize interactions through such means as question-and-answer sessions, small group discussions, or presentations by HT survivors. Some training consisted of a single session with one presenter, as in a guest lecturer for medical grand rounds (55%), while other sessions were formulated as a panel of multidisciplinary speakers, including child abuse pediatricians, nurse practitioners, dentists, psychiatrists, psychologists, obstetricians, gynecologists, emergency medicine specialists, as well as licensed clinical social workers and PhDs with expertise in domestic violence / intimate partner violence. Some presentations (45%) included HT survivor presentations and stories in order to provide a first-hand perspective of the experience and impact of HT on an individual life; the others used case studies to illustrate those points.
The length of training sessions varied greatly. The availability of HCP time to set aside for training was one large barrier to delivering training. For example, one session type consisted of a focused, 20-minute presentation, whereas most were half-day (46%) or whole-day seminars and workshops. At one institution, HT training for HCPs was said not to be taken seriously until grant funding was awarded. The promotion by the AAP and the IOM Report [24,28] was noted to help increase awareness of the need for HT training for HCPs. Given local needs and increasing awareness about HT among HCPs, most presentations were repeated periodically over the years for different types of HCPs. About 82% had been conducted more than five times since inception. For those organizations and individuals who had been providing training over a few years, some degree of course adaptation and adjustment occurred over the years, based on feedback from course participants as well as increasing experience in anti-HT activities by the trainers themselves. In some instances, the initial training focused on child abuse content and then over the years expanded to include commercial sexual exploitation of children and domestic violence or intimate partner violence. Many participants cited the need to conduct training of trainers as the demand for HT training for HCPs has increased and cannot be met by the original trainers or panelists; however, the challenge is to maintain quality control of presenters and presentations. Some degree of funding support for training was available; 63% (7/11) mentioned that they had received or had access to small grants to help cover costs of HT training; the others had in-kind support, such as access to a training site or panelists who volunteered their services.
Even though HT training for HCPs reported by our cohort was conducted as early as 2002, evaluation of training impact was generally lacking or underdeveloped. Pre- and post-tests were administered in only about one-fifth (18%) of all presentation formats. These short tests were administered at the time of the training and captured immediate changes in knowledge and/or attitude by HCPs about HT. In one instance there was a three-month follow-up; otherwise, most HT training had no formal impact measurement, although there were anecdotal reports of HT cases identified by a trainee subsequent to training. Apart from such anecdotal reports, from our cohort and literature review we identified that further studies are needed to demonstrate the long-term impact of HT training on HCP behavior and patient outcomes [20].
When queried about potential improvements in current training approaches, interviewees identified and commented upon the needs for: standardization of training, field-tested metrics for training impact in order to develop an evidence base, access to funding support, and incentivization of HCPs for training. Moreover, standardization of training material content was seen as important, especially when non-HCPs are organizing training on the health aspects of HT. The development of guidance, tools, and impact metrics (short-term and long-term targets and indicators) were mentioned as current needs. Some interviewees stated that successful applications for funding required quantitative evidence of training effectiveness. Lastly, they expressed the need for incentives for HCPs to initiate training. Two main incentivization categories were proposed: (1) continuing medical education (CME) credits awarded for participation in training; and (2) state requirements for HT training for professional licensure or re-licensure. All participants stated that HT training needed to move beyond knowledge to skill development and application.
Given these observations, the interviewees agreed that an authoritative national body should lead the charge and oversee the standardization of curriculum content, the development of robust metrics for impact evaluation, and a method to encourage broader participation by HCPs in HT training. Options discussed were the HHS ACF Office on Trafficking in Persons, with support by specialty societies and professional organizations, such as the American Medical Association, the American Academy of Pediatrics, the American Academy of Family Practitioners, the Society for Adolescent Medicine, the Christian Medical and Dental Associations, and the American Public Health Association. The researchers posit that human trafficking HCP curricular oversight could also be a role for HEAL Trafficking.
NHTRC database analysis
Call data have been captured by Polaris through the NHTRC hotline since 1 January 2008. For the timeframe studied (1 January 2008 - 31 May 2015), 1826 calls were received from HCPs out of a total of 108 650 calls. The 1826 calls from HCPs included the 439 calls from mental health providers (MHs), which were reported separately beginning in 2012. Thus, overall, HCPs represented about 1.7% of all callers. Among the 40 caller types categorized by Polaris from 2008 to 2015, calls by HCPs have consistently ranked between 14th and 16th place, with MH ranking 24th or 25th. The total number of hotline calls has increased steadily since inception of the database by NHTRC (Figure 1), and more general HCPs than MH providers have called over the years (Figure 2). The number of calls by medical professionals has always outnumbered those from mental health professionals, which may be a reflection of a lower number of mental health clinicians nationwide. Interestingly, the percentage increase in HCP calls from 2012-2014 was 71.29%, which is higher than the overall percentage increase in calls to the hotline over that same time period of 58.65%. Overall, there is a statistically significant rise in both general and HCP calls, by year, from 2008-2014 (p < 0.001); while HCP calls show a greater increase than general calls, the rate of increase is not statistically significantly different from that of general calls.
The types of NHTRC calls from HCPs are categorized as: general information requests; direct services referral requests; tip reports; training and technical assistance (T&TA); at-risk; crisis; and unrelated. For definitions of these categories, see the caption to Table 1. These numbers may reflect the larger absolute number of HCPs and larger general populations.
Lastly, we examined the quantity of HCP calls from states in which the federal SOAR pilot training was conducted. We found that there were fewer calls per month in the three months right after the training, compared to the eight months before the training. However, there may be multiple explanations for this, including a greater awareness of local resources following training, and a decreased need for national technical assistance.
Main findings
Analysis of interviews with human trafficking HCP educators revealed a breadth of training modalities and content, which mirrors a recently published literature review on the topic [10]. The process of curricula development was described as organic. Training had similar content areas, including global estimates of prevalence of HT, risk factors for HT, characteristics of victims and their traffickers, basic identification of signs and symptoms of a possible HT victim within a health care setting, and suggested initial response by the HCP. When queried about potential improvements in current training approaches, standardization of training, metrics to evaluate and develop the evidence base for training impact, funding opportunities, survivor integration, and incentives to encourage training were common themes. Analysis of the 2008-2014 HCP data from the National Human Trafficking Resource Center (NHTRC) is one way of capturing national behavior change among HCPs. In other words, by calling the NHTRC Polaris hotline, a HCP is engaging in an activity that comes as a result of awareness of the problem of trafficking. While HCPs may contact local service organizations directly, the NHTRC call specialists are analogous to poison control centers, in that they provide technical expertise to guide a clinician through a specialized clinical case, human trafficking. Call specialists can outline case-by-case victim resources, provide guidance for screening questions to ask patients, and can speak to the patient directly in over 200 languages (via tele-interpretive services). While HCP calls have paralleled the general national hotline trend data, over the last couple years, the relative increases in HCP calls have surpassed that of the general population. These trends indicate that HCPs are increasingly aware of trafficking and may indicate some level of behavior change.
Recommendations
• Content and delivery: Standardize content of training so that key information is correctly and consistently provided to all HCPs and training participants, regardless of venue, format, or level of experience among HCPs. Content should include primary, secondary, and tertiary HT prevention, as well as public health impact using the socioecological model, grounded in an understanding of links to other forms of intentional violence, including intimate partner violence, child abuse, and community violence. The survivor voice should be included to provide firsthand perspective [41]. Moreover, all forms of trafficking should be covered, including labor, sex, and organ trafficking. The training should be victim-centered, culturally relevant, evidence-based, gender-sensitive, and trauma-informed. Content delivery should not be limited to formal didactic presentations, as the medical education literature [42] has shown that more interactive and innovative strategies, such as simulated patients, flipped classrooms, faculty modeling, and role-playing, are often more effective [43].
• Evaluation and metrics: Develop evaluation metrics specific to HT training for HCPs such that changes in knowledge, attitude, and practice, as well as patient outcomes, can be reliably and reproducibly measured and compared across training types for generalizability [44]. This step will help to build the evidence base for effectiveness and impact of HT training for HCPs and allow adaptation and improvement when promising practices are identified. This measurement should go beyond knowledge acquisition, trainee behavior change, and process analysis to long-term impact analysis on trafficked patients, namely patient-centered outcomes [45,46]. See the adapted Kirkpatrick model (Figure 5).
• Oversight: For standardization and evaluation, the optimum choice of agency or institution to provide US-based oversight should be one that has a national scope and authority, a specific interest and expertise in HT training for HCPs, and no conflict of interest. A federal agency such as HHS ACF would be well positioned to provide such oversight and guidance. It could work in conjunction with professional specialty societies (e.g., AAP, AMA, Society for Adolescent Medicine, APHA, CMDA) and HEAL Trafficking.
• Research: Conduct and publish research using rigorous study designs, methodologies, and outcome measures that demonstrate practices that lead to provider practice change and improved patient outcomes [47]. A national oversight committee or commission could facilitate the dissemination of the findings and convene periodic forums to discuss the implications and next steps [49]. Other entities could explore the incorporation of HT training into health professional school curricula and residency training, as well as the best approach for offering CME.
• Collaboration: Develop a 'Quality Improvement Collaborative' (QIC), a network of groups which conduct training and which would be willing to coordinate an evidence base regarding the above-mentioned recommendations. The development of QICs could allow for a gathering of invested individuals to learn, reflect, and share strategies [50], ultimately towards the end of enhanced research, innovation, advocacy, and funding around topics of HT training for HCPs.
• Funding: Use data and published research to advocate for funding for initiatives around HT training for multidisciplinary HCPs. This funding should not take away resources from existing service provision for survivors.

Definitions of the NHTRC call categories (from the caption to Table 1):
Tips: This category includes calls received from individuals who wish to report tips related to human trafficking victims, suspicious behaviors, and/or locations where human trafficking is suspected to occur. Potential human trafficking tips received by the NHTRC are reviewed by hotline supervisors and regional specialists before being passed on to the appropriate local, state, or federal investigative and/or social service agency equipped to investigate and/or respond to the needs of victims. Not all tips are reported to law enforcement, and any reports made respect callers' preferences regarding confidentiality. Reporting decisions are based on a variety of factors, including the callers' needs and wishes, and the needs and wishes of victims.
Training & Technical Assistance (T&TA): T&TA requests include but are not limited to: specialized information; programmatic and project support; phone consultations; materials reviews; and trainings and presentations.
At-Risk: This category refers to calls referencing related forms of abuse and exploitation that may put individuals or specific populations at risk for human trafficking, such as labor exploitation, domestic violence, sexual assault, child abuse, and runaway/homeless youth.
Limitations
This paper used a mixed-methods approach to understand the overall state of medical education for HCPs on HT. Our HT HCP educator interview analysis has the richness of a breadth of perspectives; however, due to snowball recruitment, it is not generalizable. There are many ongoing training sessions on HT for HCPs, and it was not feasible to list all of those and then sample from that list. Our analysis of the National Human Trafficking Resource Center dataset was limited by its aggregate form, so we were only able to present general trends.
Conclusions
Human trafficking is a health issue, and as such HCPs have the potential to play a critical role in human trafficking victim prevention, identification, and care [51]. While some HT training for HCPs began over 10 years ago, most HCPs are unfamiliar with how to care for trafficking survivors. The Transtheoretical Model states that awareness and knowledge precede behavior change. Furthermore, behavior change comes in stages, from pre-contemplation to action [45,52]. The pinnacle of successful medical education is improved patient outcomes [45]. Our review of National Human Trafficking Resource Center data confirms the growing awareness of trafficking among HCPs, and possibly indicates some level of behavior change. Our interviews with HT educators demonstrate that the standardization of, and evidence on, medical education on HT has many opportunities for growth. The field is wide open for progress to be made by scholars, clinicians, and other practitioners. Given that victims of HT will interact with a variety of clinicians throughout their care, HT training should include the whole range of HCPs. Standardization is essential to ensure that content is correct, consistent, relevant to HCPs, and effective in practice. Training should incorporate an evidence-based, patient-centered, trauma-informed approach, with proven effectiveness. Moreover, training should incorporate orientation on gender, cultural competencies, and survivor input. Steps need to be taken to explore how such training might be integrated into settings such as health professional school curricula and residency programs, and possibly to determine whether HT training could be a requirement for licensure or re-licensure of HCPs. Identification of HT by HCPs and treatment protocols will need to be researched and funded in order to expand that evidence base [53,54]. Post-training evaluation must go beyond the immediate measurement of changes in knowledge and attitudes to assess patient-centered outcomes. Metrics with indicators and targets need to be developed, field tested, and disseminated for general application where HT training is conducted. Training effectiveness measured by changed HCP behavior, along with improved patient outcomes, i.e., earlier identification of and support for HT victims, are the ultimate goals. The larger initiative is best served by the establishment of a national oversight body which has both authority and convening power. That body can guide these processes, convene forums for identifying promising practices, assist QI collaboratives, and facilitate linkages with states, academia, and professional societies, to improve HCP engagement with trafficked persons presenting in clinical settings. The term 'victim' is used in this paper to refer to individuals who were trafficked [32]. | 2017-10-27T10:08:06.838Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "be1136005adcb1fa0f56bb4789897d3ea7cbfec7",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/10872981.2017.1267980?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9fe9cc9c64022a68f554bfb3956fd22fd4576d18",
"s2fieldsofstudy": [
"Law",
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
242089152 | pes2o/s2orc | v3-fos-license | Inventing Voice of Identity through L2 Poetry Writing: A Construct of Mindfulness-Based Strategy in Remote Learning
Voice of identity can be found in expressive writing such as poetry writing. In the L2 learning context, poetry is a medium to learn a new language in a creative way. Language is developed through mindfulness and creativity. As language develops, L2 learners make use of their sensitivity to language and create or produce the language that represents their voice of identity. This article focuses on how L2 poetry writing can invent voice of identity in L2 learners by using the construct of mindfulness-based strategy. In the pandemic context, where offline education is not possible, mindfulness-based strategy is best implemented in L2 remote learning since it has four main principles in language learning: finding novelty in learning, being sensitive to context, being actively engaged in the present moment, and stimulating openness and multiperspectives. In online learning, poetry is taught with mindfulness concepts, and learners are guided mindfully on how to become creative and effective language learners by writing poetry about their life during the pandemic. The data collected are poetry and reflection diaries, analyzed through the construct of poetry in a mindfulness framework. The results indicated that poetry writing can represent the voice of L2 learners' identity: their new perspectives of the world, reflection and sensitivity to context, choices of life, present-moment engagement, and openness and multiperspectives.
INTRODUCTION
The concept of mindfulness was first introduced by Ellen J. Langer in 1989. The core principles of mindfulness are discovering new things and making new distinctions. The essence of the mindfulness process, in noticing new things and drawing novel distinctions, is built from a flexible state of mind. When the mind is flexible, new things enter the mind and novelty can be found. This practice of mindfulness is relevant to L2 language learning in the sense that the L2 is the new language that is learned by the students. With the mindfulness concept, the idea of accepting new things becomes the core of language learning. However, the process is not only a matter of accepting new things but also of learning to make a novel distinction: how it is different from the language learned previously, how it is possible to be done, and how it is built from within the learners.
According to the theory of language, there are three approaches to learning language, namely the environmentalist approach, the innatist approach and the interactionist approach. In the environmentalist approach, language is treated as a behaviour to be taught. In this approach, language is considered a set of habits and learners are exercised through habitual actions. According to Moafian et al. (2019), the environmentalist approach presented strategies for how children learn some basic principles of language. However, this approach could not offer children a way to learn difficult grammar.
In the innatist approach, language is built within the learners through the Language Acquisition Device. In this approach, learners are the subject of learning in the sense that they use different mental strategies to meet the system of the target language (Moafian et al., 2019). The learners are stimulated to use their minds to see, to reflect, to analyze, and to produce. However, this approach neglects the function of language and how it operates in the process of learning.
In the interactionist approach, according to Halliday, language is the means of communication. There are three functions of language. First, the interpersonal function, which means that language is used to communicate with others. Second, the ideational function, which means that language is used to communicate ideas. Third, the textual function, in which language is used as the linguistic tool of the ideational and interpersonal functions to make texts, both verbal and written (Moafian et al., 2019).
Mindfulness, on the other hand, is a new concept in language learning that covers all the principles of language learning. Langer's conceptualization of mindfulness has been elaborated previously. The two core principles are noticing new things and finding novel distinctions. In the context of language learning, these two principles are the main keys to learning a second or foreign language. Beyond those two are engagement and flexibility, which relate to interaction with other language users and to being flexible with the context of communication. These four principles are the main foundations of language learning, because language is in a dynamic state of development. One should be actively engaged and flexible in communication.
Mindfulness has become a key in the learning process. To be able to learn, mindfulness takes place at the first stage. In the context of language learning, one can access language and understand language only if one is mindful. In this context, the L2 is accessed when learners are able to accept a new language with a new system that allows them to process the new language and respond to it. It is also a room for drawing novel distinctions in the sense that the language becomes new knowledge in their minds. In this research, poetry is used as a tool and medium of expression to reveal voice and identity. Poetry, in this L2 learning, is a gateway to introduce the new language to L2 learners. Through poetry, their voice of identity is also revealed. Poetry is used as a medium to explore creativity and productivity. Mindfulness strategy is used to build a bridge between the language learners and the L2. However, the use of poetry and mindfulness in L2 language learning in the pandemic has rarely been studied. In this context, poetry is used as a medium of learning the L2 by using mindfulness. The voice of identity is revealed through poetry. The novelty of this research is that mindfulness is used in poetry writing to reveal the voice of identity, especially in the remote learning context within the COVID-19 pandemic.
Mindfulness leads language learners to find themselves and to develop self-awareness, and then language awareness. Self-awareness develops language awareness because it stimulates sensitivity, curiosity and creativity. Mindfulness strengthens the process. Mindfulness was proposed by Ellen J. Langer (1989). There are two major foundations of mindfulness according to Langer: first, being aware of something new; second, giving attention to a unique difference. According to Langer, mindfulness is diversity-based thinking and is context-bound (Davenport and Pagnini, 2016). When we are in a mindful state, we can think in diversity, across many different contexts, many different solutions, and many points of view (Langer, 1993).
Mindfulness can be applied to many different contexts. In this state of mind, we accept new things, different points of view, and new contexts. The awareness of novel points of view will promote new findings and the reconstruction of previously established knowledge and points of view. In this context, learning English as a foreign language is learning something novel and new. If learners are introduced or exposed to new things, new perspectives and new contexts, it is likely that they are capable of making meaning in a more interesting and involving way. According to Moafian et al. (2019), by having the courage to find novelty in everything, language learners can welcome any new things and information and adapt them into the storage of the brain. One of the new patterns and pieces of information can be gained through language.
Poetry in this context is a product of literature that can be used to hear the learner's voice of identity. As Perrine (1982) stated, poetry is a multi-perspective way of saying things, emotionally, intellectually, and imaginatively. However, in this context, poetry is seen as a product of literary text that symbolizes learners' voice of their experience and thoughts.
The structure of poetry is built from lines, stanzas, and figurative language such as metaphors, imagery, irony, hyperbole, vision and voice. They are selected carefully and arranged beautifully. Therefore, the creative process of poetry is never easy. A study by David I. Hanauer in 2012 showed four principles that he used to make language learning happen naturally: first, autobiographical writing; second, emotional writing; third, personal insight; and fourth, authentic public access. His findings show that ESL/EFL literacy instruction based on these principles produces a very different educational experience. The students are directed to really express and explore themselves. The main purpose is to help students extend their language by using their true personal expression. This makes the language experience a personal, emotive and expressive resource. However, this article does not explain in detail how the language learning happens from the participants' perspective. In fact, this is what should really be explored.
Second, a study by Shapiro (2001) found that poetry and mindfulness are medicine for language learning because poetry and mindfulness together create peacefulness, calmness, and positivity. In this research, the students are given alternatives in learning; they are allowed to listen, feel, and discover in different ways. However, this article does not give much information about the students' perspective on their learning.
In 2016, Atsushi Iida conducted research on how to portray voice of identity in second language haiku. The study found that the identity of the students is revealed through the haikus they wrote, from which they communicate their life stories. The voice can be heard, felt and valued as human life experience and perspectives. It also revealed the use of poetry as a means of literacy learning and as a tool of meaningful learning in the EFL classroom.
In 2020, Piscayanti et al. conducted research on mindfulness in a poetry writing class. This research revealed that mindful learning is very effective in a poetry class since it promotes self-awareness through the mindful journal. Poetry is used as a voice of identity. As a student wrote, "From mindful learning in poetry I learned that I have ways to express myself better." This voice is very honest about how the learner perceived poetry as a channel of expression.
The gap between previous research and the context of this research is that L2 poetry writing as a voice of identity has rarely been studied. Poetry in the L2 context is a representation of learners' voice; it is a tool through which the voice of identity can be represented. The novelty of this research is to explore the voice of identity that is born from mindfulness learning. Therefore, this research tries to explore how voice of identity is revealed through L2 poetry writing. The benefits of this research are as follows. First, by knowing the voice of identity, the learners can learn to know themselves better, and language is produced personally, contextually and meaningfully. Second, this study can be used as a sample for teaching poetry in the L2 learning context, especially in revealing the voice of identity.
METHODS
The research design of this study is qualitative research with narrative inquiry method. This method was used to record human experiences through their personal stories (Webster & Mertova, 2007). In this research context, the voice of identity is explored through poetry. Mindfulness intervention in the creative process helps to stimulate the voice of identity to be heard.
Respondents
The respondents are 10 EFL learners in a poetry course in English Language Education at Ganesha University of Education, taught with mindfulness. They were chosen through a purposive sampling technique. The research object is poetry writing, from which voice of identity can be identified, analyzed and interpreted.
Instruments
The instruments used are a writing journal, poetry, and a three-category poetic analysis combined with the construct from the mindfulness framework.
Data collection procedures
The procedures of the research are as follows. First, mindfulness intervention was given during the creative process of poetry writing, which covers three stages, namely pre-writing, whilst-writing and post-writing. In this research, student poets are given mindfulness treatment in remote learning mode in order to stimulate their thinking. In the process, the students were given a series of stimulations on how they perceived moments and experiences in their life which relate to their voice of identity. The students were given free writing stages to create narrations of their life (past, present, and future moments). The second stage was to reflect on their narrative and to choose the best moment and poetic experience to be explored further. The last stage was to produce the poetry in which they have reflected their voice of identity.
Data analysis
The data analysis followed Hanauer's (2010, in Iida, 2016) methodological guidelines for poetic identity investigation and was categorized under three classes of analysis, namely context of writing, content, and literary choices.
The first category is the context of writing. In L2 learning, context is very important. It helps the students to understand where the language is produced and how it is produced. According to Hanauer (2012), the process of language learning involves learners' responses to and understanding of contexts. The second category is content analysis. According to Hanauer (2010, in Iida, 2016), poetic identity includes the content in the poem, such as chronological moments, inspirations, feelings, and thoughts. This is to consider what content is presented, why it is presented and how it is presented. The third category is linguistic analysis of figurative language. Poetry writing represents the idea of constructing meaning through language. These categories were then combined with the construct of poetry within the mindfulness framework, which is formed by the categories of novelty in learning, being sensitive to context, being actively engaged in the present moment, and stimulating openness and multiperspectives.
FINDING AND DISCUSSION
From the analysis of the poetry across the three categories combined with the mindfulness framework, there are some important voices that are born from L2 poetry writing in this research. First is the new perspective of the world, a voice born from the search for identity within the self. Mindfulness is used as an intervention when the poetry is written in stages, which helps the construct of voice to be revealed in the poetry. Second is reflection and sensitivity to context. The voice of reflection can be heard from poetry since identity can only be felt if it is reflected through sensitivity to context. Third is choices of life. How life is viewed and how perspective is built are the voices of identity that can be brought into the meaning of life. Choices of life are there to show that identity exists in the voice of poetry. The last is the openness and multiperspectives voice, which helps to construct identity from many different points of view.
New perspective of the world
New perspective of the world is the core principle of the mindfulness strategy, which is used wisely in the L2 poetry writing process. It is stimulated through the stages of writing. The following poem can be used as an example of representing the voice of identity, in this context building a new perspective of the world. Here we can see that the student wants to build a new perspective of the world. His idea can be read from the content: he previously wanted to be an adult, yet now that he is in adulthood, he wants to have his childhood back. What happened in adulthood that he wants to bring the world back to the childhood world? The idea of comparing childhood and adulthood is to gain a new perspective within the writer's self. Adulthood is an area of identity search; readiness to face adulthood is the issue. However, he could not escape from the childhood that is trapped in his memory, of freedom, of playfulness, of many beautiful carefree memories. The world of adulthood, on the other hand, built a new perspective on how different it is from the expectation he had.
Childhood (Student 1)
The mindfulness process that reveals the voice of the poet is clear here, because this poem could not be born without his understanding of himself as an adult who wants to go back to childhood. The mindfulness process while writing started with reflection on the past and present and continued to the future. The choice of language here is direct and simple, not metaphorical and poetic, because the issue of adulthood versus childhood calls for a more direct choice of words.
In 2016, Davenport and Pagnini found that mindfulness practice, including observation, articulation, and presentation, stimulates students to be aware of the various ways a specific alternative can be approached and the possible answers to any issue. In this poem, observation is done within the self, in which the poet reveals the comparison between childhood and adulthood. Articulation and presentation through poetry is a perfect choice to reveal the voice. L2 learning to construct identity appears in the poetry writing, since identities are multiperspective, dynamic, and constantly reconstructed through interaction with a community in real engagements. Adulthood is perceived as a result of community interaction which forms the identity of being an adult. Wenger (1998, in Iida, 2016) stated that learning is contextual and our identities have been shaped through participation in a community.
Another poem that builds a new perspective of the world is as follows.

No
There's a stone for the things forgotten
And you won't look back on
When you choose the one without choice
Ask, believe the rain
It'll dissolve the stone where the forgotten things was hidden
No
The leaves' acapela promise to bring back yours
The leaves' chanting will spin and open your handcuffs
They raise up you from drowning
Lonely
Being a stone
No

This poem is a unique perspective about a stone and its mystery. The poet is trying to give voice to how memories and histories are kept in a stone. The stone is a metaphor for history that is kept and hidden. Sometimes it can be seen, sometimes it is still hidden, and sometimes the rain erases it. The voice that the poet offers here is the possibility of many: when the stone is treated as a dead thing, or when the stone is respected as a keeper of memory. The nature here supports the idea of mystery around the stone.
However, the voices of nature and stone here are presented with a non-judgmental perspective, as can be read in the ending "lonely/being a stone/no//", which means that the poet does not want to clearly define whether the stone is mistaken or not. Mindfulness is the state of being aware of the present moment, without reflection on the past, without anticipation of the future, and without trying to solve problems or prevent the unpleasant aspects of the present. This goes along with the research by Teasdale (1995, in Tarrasch, 2015), which found that mindfulness is the state of being aware of the present moment, without the need to criticize or evaluate the moment, and without judgment. The voice of this poem reveals a non-judgmental perspective of the world. In this context, the L2 voice of identity is revealed through mindful attention to nature, the context and the sensitivity to capture the meaning behind everything, including a stone. This finding is supported by research in 2016 by Wang and Liu, who found that mindfulness can be cultivated and benefits L2 learners in relation to their foreign language learning.
Reflection and Sensitivity to context
Reflection and sensitivity to context are essential in the L2 learning context. In poetry writing, the two are stimulated through the mindfulness-based strategy and result in poetry such as the one below.
Stressed Out (Student 3)
When the rain falls again
I am made like I am in blindness
Everything cannot be seen
Even you hide, might be behind the curtain
Everything is out of mind
For the first time I am made helpless
Even my breath is hard to get out of my nose
It makes me hopeless
Like I am being cursed
I am trapped and stressed
There is nothing I can hope
I beg whoever it is
Helped this poor human
Get me out of this torture
And God that is watching me
Please crush this situation
To make me escape the tightness of this atmosphere

From this poem it can be seen that sensitivity to context is very deep. In the beginning it is written, "when the rain falls, I am made like I am in blindness". What voice is he trying to speak about? What made this happen? The context of rain, which made him feel trapped, stressed, and tortured, can be the metaphor of his problem, his voice of insecurity and his hope to be helped out of his problems. Poetry helps his voice out of the deep chaotic feeling.
The style of language is not metaphorical but more natural, like a conversation with God. The ideas are represented through denotation rather than connotation. However, this does not decrease the essence of the poetry, since the meaning can be created mindfully. Mindfulness happens here since the voice is naturally expressed and presented. According to Hanauer (2012), L2 learning happens because every personal experience is expressed and the natural voice of poetic identity is heard. Poetry allows this authenticity to happen. Therefore, L2 language learning happens naturally.
In 2020, Stevenson found that, as mindfulness is framed as an inner intelligence, it is not a result of observational learning but a result of our internal intelligence system. However, it should be activated, stimulated and developed.
Choices of Life and Present moment engagement
The choices of life and present moment engagement are very important to the voices of identity in the L2 learning context. In this poem, the poet wants to share the need to talk, the voice to express herself and her thoughts; however, it seems to be silenced when the poet is with the man. This simple poem does not come with many metaphors; instead, the poet uses denotation rather than connotation. The choice of life and the engagement with the present moment are more real with the simple language than with metaphoric language. With this arrangement, the poet chooses her voice to be as clear and direct as possible, avoiding figurative language.
The identity of the poet is revealed clearly because she has used the language to express her voice. According to Hanauer (2012), when you use the second language as a tool to represent your voice as ideas and experiences, it changes the perspective of language. This is the point where the L2 is a tool and becomes a personal touch and an 'originally-owned' language. Though the language is simple and not metaphorical, the idea of an 'owned' language happens; therefore, students' language acquisition can be better when taught with mindfulness and introduced to poetry writing.
Mindfulness leads to freedom of choice, and this is very important in L2 learning. In 2011, Sherretz's research showed consistency with Langer's research (1997), which found that mindfulness matters in the sense that when freedom of choice is given and information is presented differently, the individual is forced to be a mindful observer.
Openness and multi-perspectives
Openness and multi-perspectives are also principles of mindfulness that are very relevant to the voice of identity in L2 learning. The poem that shows this is as follows.
Eternity of love (Student 5)
You are my moonlight
Illuminate every dark knights
Like a flickering starlight
Which adorns the vastness of the dark sky
Your softness is like a pure white cloud
Your smile is like the morning sun
Illuminating the bottom of the heart
Your faithful never fades even though time is getting worn out
Your charm is like a flower that blooms when spring arrives
Become a colour for the world
My world that never feels empty
Thanks to your presence
Which always soothes the heart
No day passes without a spark of your warm love
All stumbling blocks are in the way
Will not be able to change everything I feel to you
Even one day
Your body is no longer here
The one I want to believe
Your eternal love will always accompany you
Either you or I first met that destiny
Don't worry about it
Let's hold hands tight
Wade through this river of life together

In this poem the poet wants to be open and to voice her thanks to her lover for his presence. The voice of identity as a partner in love can clearly be seen from the choice of words, lines and stanzas. The feeling is expressed directly and openly, without many metaphoric expressions. The openness and perspective built within this poem are easily understood through the simple lines and stanzas.
From this poem, it can be understood that the emotional identity of the poet is revealed. Mindfulness in this context happens naturally to deal with emotional intelligence. In 2017, Cotler et al. found that there is a significant effect on the growth of emotional intelligence when combining direct instruction and mindfulness. L2 poetry here is an example of how voice and identity can be stimulated through mindfulness. According to Hanauer (2016), poetry has the power to make L2 meaning-making practice much more intimate, individual and close. It allows L2 writers to invent themselves during the process.
Another poem that reveals emotional identity is the following.
Old Wounds (Student 6)
I'm not the one who is wrong
He is also not wrong, but you
You made all this make old wounds come again
Your diatribe imprint on the heart
It is okay, if it makes you good
You always see everything like trash
It is okay, if it makes you happy
You always underestimate someone
Because your money goes blind
Because of the position, you become deaf
Even ties have become dust
But you remember...
Wealth and position will not last long
who you think of as rubbish
Some time you will also become trash

The voice of openness can be felt here; however, the voice sounds like blame directed at someone. The perspective of the poet is that of a victim of actions committed by the subject referred to in the poem. The linguistic perspective is sharp, cynical, and sarcastic. The language use is full of emotion, with a condensed composition. This is a result of a mindful process of writing.
According to Langer and Moldoveanu (2000), mindfulness can create awareness of the present, build sensitivity to context, and stimulate diversity. This strengthens Langer's idea that this multiplicity of points of view decreases our tendency to refer to the past. In 2020, McKay and Walker found that mindfulness had a powerful positive link with positive mental health, personal happiness and positive character. This can lead to creativity and productivity.
This goes along with Piscayanti (2017), who found that mindful learning is very effective in a poetry class because it has the power to make learners more sensitive, productive and innovative. The students are encouraged to invent their own language in their self-contexts. Cooper and Boyd (1996, in Wang and Liu, 2016, p. 143) proved that mindfulness promotes new things and innovative, dynamic learning. This finding is strengthened by Davenport and Pagnini (2016), who found that mindful learning can help learners to have the power of innovation, invention, creativity, and collaboration.
CONCLUSION
This article focuses on how L2 poetry writing can invent voice of identity in L2 learners by using the construct of mindfulness. The results indicated that poetry writing can represent the voice of L2 learners' identity: their new perspectives of the world, their reflection and sensitivity to context, their choices of life and present moment engagement, and openness and multiperspectives. Voice of identity is stimulated and engaged in a direct interaction with mindfulness within the self, in which the L2 learners learn to hear their own voice, trying to figure out what feeling that is, how it is constructed and how it is presented through poetry. Furthermore, it is the process of 'owning' the language through poetry writing. The moment when L2 learners can produce poetry in the L2 is the moment when they really "produce" their own language.
The current study was done with very limited exploration of how the poetry becomes the invention of L2 language and how that happens.
However, this study shows that L2 poetry writing can be an effective tool for presenting the unheard voices and identities of L2 learners. In the future, more exploration is needed on how mindfulness can deepen and sharpen L2 productivity in terms of poetry writing. | 2021-11-04T15:27:51.636Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "de4d35f5ab00558afd0704bcb0345d31ca7a3004",
"oa_license": "CCBY",
"oa_url": "https://journal.unnes.ac.id/nju/index.php/LC/article/download/32613/12067",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bf71e63ba5969413f5e4eaab8451c7a0eae92f27",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"extfieldsofstudy": []
} |
256795362 | pes2o/s2orc | v3-fos-license | Instrumental and Non-Instrumental Measurements in Patients with Peripheral Vestibular Dysfunctions
Vestibular dysfunction is a disturbance of the body's balance system. The control of balance and gait has a particular influence on the quality of life. Currently, assessing patients with these problems is mainly subjective. New assessment options using wearables may provide complementary and more objective information. Posturography makes it possible to determine the extent and type of posture dysfunction, which makes it possible to plan and monitor the effectiveness of physical rehabilitation therapy. This study evaluates the effectiveness of non-instrumental clinical tests and the instrumental mobile posturography MediPost device for patients with unilateral vestibular disorders. The study group included 40 patients. A subjective description of the symptoms was evaluated using questionnaires on the intensity of dizziness: the Dizziness Handicap Inventory (DHI) and the Vertigo Symptom Scale—short form (VSS-sf). The clinical protocol contained clinical tests and MediPost measurements using a Modified Clinical Test of Sensory Interaction on Balance. All patients underwent vestibular rehabilitation therapy (VRT) for four weeks. The non-instrumental measurement results were statistically significant, and the best was the Timed Up and Go test (TUG). In MediPost, condition 4 was the most valuable. This research demonstrated the possibilities of using an instrumental test (MediPost) as an alternative method to assess balance.
Introduction
Maintaining balance is the result of the complex integration and coordination of multiple body systems (vestibular, visual, motor, auditory, proprioception), which are centrally processed in the brain [1]. Biomechanically, postural balance is the ability to keep the body's center of mass (COM) within the base of support with minimal sway [2]. Any impairments of those body systems may cause vestibular disorders.
The economic burden from vestibular disorders is estimated to amount to USD 64,929 across the lifetime of each patient, or a total of USD 227 billion for the population of the USA over the age of 60 [3]. The main causes of increased direct health care costs due to vertigo and dizziness are the many hospital admissions, unnecessarily repeated primary and specialist care consultations, and excessive use of imaging diagnostics. Finally, the patients are often discharged without establishing the etiology, and, therefore, they are not prescribed the appropriate therapy [4]. Moreover, the indirect costs due to vertigo and dizziness are noted. There is a lack of autonomy, a fear of falls, and related changes due to the forced sedentary lifestyle. A reduced capacity to work or needing assistance in activities of daily living is observed.
The rehabilitation of balance dysfunction is based on exercises which stimulate compensation and habituation processes on different levels [5]. Hall et al. recommended vestibular rehabilitation for peripheral vestibular hypofunction in the Clinical Practice Guideline of the American Physical Therapy Association (APTA) [6]. Early and active vestibular rehabilitation therapy (VRT) is essential to achieve compensation. The results should be confirmed both by the patient's subjective self-assessment, such as questionnaires, and by objective measurements. The quantitative measure of the severity of the disease can be assessed using questionnaires such as the Dizziness Handicap Inventory (DHI) or Vertigo Symptom Scale (VSS), which attempt to evaluate the multifactorial nature of these disorders [7,8].
The effectiveness of rehabilitation can be assessed with clinical tests, the most popular of which are the Timed "Up and Go" test (TUG), the Dynamic Gait Index (DGI), the Berg Balance Scale (BBS), the Tinetti test (TT), and the Functional Reach test (FR) [9][10][11][12]. Clinical balance tests usually consist of static and dynamic tasks, while some even attempt to estimate the risk of falls. To complete the examination, the score is measured in points or time, which is then stratified into different subpopulations according to the risk of a fall.
The use of non-instrumental measurements has certain advantages: they are easy to interpret, do not require expensive equipment, and are easy to perform in an outpatient setting. The disadvantages are that some of the tests focus only on categorizing the risk of falling or on gait assessment, while others focus on static balance. To achieve a comprehensive evaluation of a dizzy patient, several tests are necessary, which is time-consuming. There are some difficulties in selecting a complete balance assessment tool that would meet all of a clinician's expectations. Moreover, some authors have stated that clinical tests are not sensitive enough to detect subtle changes, whether worsening or improvement of the patient's balance abilities [2,13]. The evaluation of non-instrumental tests can be subjective and might be biased by the expectations of the physician or patient regarding improvement after therapy, so objective measurements are needed.
Static posturography, also called stabilometry, has been used for over 35 years and assesses only static conditions. The subject stands on a static or tilted platform with a fixed support base. There are some variants of stabilometry, such as a one- or two-foot stance, a firm or foam surface, or open and closed eyes. The other type of posturography is computerized dynamic posturography (CDP), wherein a force platform is combined with visual stimuli as a means of determining the relative importance of the various sensory inputs critical for balance, namely vision, somatosensation, and vestibular sensation. CDP detects postural sway by measuring shifts in the center of gravity (COG) as a person moves within their limits of stability [14]. Objective measurement of static posturography can also be performed with inertial sensors [15,16]. The development of modern mobile posturography based on wearable sensors provides the means to register small, sensitive changes in the functioning of postural control [13]. Currently, there are various commercial systems that use mobile devices to diagnose and rehabilitate balance disorders or detect falls in the elderly [17,18]. These devices have also found a role in the treatment of various neurological diseases, such as Parkinson's disease, multiple sclerosis, or Alzheimer's disease. However, there is a lack of standardization in the data acquisition, mathematical models, and algorithms used to process the data.
Balance impairments are one of the leading causes of falls. The consequences of falls may be directly linked to an increase in mortality, such as a fracture of the lower limb or pelvis. Furthermore, suffering a fall can also cause "post-fall" syndrome, a psychomotor regression condition responsible for psychological, postural, and gait dysfunction, mainly in elderly people. Some clinical tests used in this study (TUG, DGI, Tinetti) assessed the risk of falls. However, clinical assessment is subjective and is not sensitive enough to identify early balance dysfunction. Instrumental measurements may make it possible to detect early subclinical postural changes in daily conditions. Wearable devices can be used for long-term monitoring for preventive and recovery strategies; thus, individualized strategies for fall prevention could be created (e.g., the use of mobility aids, changing environmental hazards, or rescue interventions) [19].
The aim of the study was to evaluate the usefulness of non-instrumented and instrumented measurements in patients with peripheral vestibular dysfunctions.
Materials and Methods
The study included 40 patients, 20 women and 20 men, with a mean age of 56.8 ± 14 years, complaining of vertigo and balance disequilibrium, who were diagnosed at the Balance Disorders Unit, Otolaryngology Department, Medical University of Lodz.
The inclusion criterion was a lack of spontaneous compensation within one month after unilateral peripheral impairment, which was confirmed using the results of videonystagmography (VNG).
Exclusion criteria were central vestibular signs in VNG, bilateral peripheral vestibular loss, disorders of the motor system, and coexisting neurologic disorders.
Patients were interviewed for history related to balance dysfunction and coexisting diseases according to a self-assessment survey and questionnaires about the intensity of dizziness: the DHI and the Vertigo Symptom Scale—short form (VSS-sf). The DHI is a 25-item self-report questionnaire that assesses the impact of dizziness and balance dysfunction on quality of life. Scores range from 0 to 100 points, with 61-100, 31-60, and 0-30 points indicating severe, moderate, and mild handicap, respectively. There are three subscales: Physical (P), Functional (F), and Emotional (E). Although originally written in English, the DHI has been translated into many languages, e.g., Polish, German, Norwegian, Brazilian Portuguese, and Spanish [20]. The other widely used questionnaire is the Vertigo Symptom Scale, published by Yardley in 1992. The objective of this scale is to measure the frequency of balance and vertigo symptoms and autonomic/anxiety symptoms [21]. Currently, a short form of the VSS (VSS-sf) is used. The 15-item VSS-sf is divided into two subscales: vertigo-balance (VSS-V), which refers to vestibular symptoms, and autonomic-anxiety (VSS-A). A total result of ≥12 points indicates severe dizziness/vertigo [22].
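For illustration, the scoring bands above translate into a simple categorization rule. The sketch below (Python) is our own illustration, not part of either instrument; the function names are hypothetical.

```python
def dhi_category(total: float) -> str:
    """Classify a total DHI score (0-100) into the handicap bands given above."""
    if not 0 <= total <= 100:
        raise ValueError("DHI total must lie between 0 and 100")
    if total >= 61:
        return "severe"
    if total >= 31:
        return "moderate"
    return "mild"

def vss_sf_severe(total: float) -> bool:
    """A VSS-sf total of >= 12 points is interpreted as severe dizziness/vertigo."""
    return total >= 12

# Example: the study group's mean DHI before VRT (53.9) falls in the moderate band.
print(dhi_category(53.9), vss_sf_severe(19.7))  # -> moderate True
```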
All patients underwent otoneurologic examination, including five clinical tests: the Timed Up and Go test, the Dynamic Gait Index, the Berg Balance Scale, the Tinetti test, and the Functional Reach test. The characteristics of the tests are presented in Table 1. VNG examination (Ulmer SYNAPSIS 2008) was performed, including a caloric test, kinetic stimulation with a torsion swing, and positional and oculomotor tests (saccades, smooth pursuit, optokinetic). Peripheral unilateral vestibular impairment was diagnosed when there was asymmetry of the vestibular response in a bithermal water caloric test (44 °C and 30 °C, by Fitzgerald-Hallpike) and canal paresis (CP) was >22%. Central vestibular signs in VNG were diagnosed when there were abnormalities in saccades (prolonged latency, hypermetria or hypometria), smooth pursuit (low gain, abnormal morphology), or the optokinetic test (low gain), or incorrect morphology recordings [28].
Postural stability was measured using the portable, battery-powered MediPost system, with one sensor mounted on the trunk at the L5 level [16]. Mobile posturography allows for a more direct measurement of the COM, which is strongly correlated with the center of pressure (COP) [29]. The system consisted of an ESP32 microcontroller, a Wi-Fi radio module, and a tri-axis inertial measurement unit (IMU; STMicroelectronics LSM9DS1) based on a microelectromechanical system (MEMS) that included an accelerometer, a gyroscope, and a magnetometer (manufactured by the University of Technology, Lodz, Poland). This kind of IMU is particularly suitable for measuring low angular speeds (low sway frequencies). The device was synchronized and controlled by a computer program via a Wi-Fi network. The IMU samples at 200 Hz; an on-device low-pass filter then reduces the signal to 20 samples per second, and the samples are transmitted once the measurement is completed. The device's angular position is then determined using the Madgwick approach [30]. A detailed description of the system can be found in our previous publication [16].
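The published system determines orientation with the Madgwick filter; as a simplified, self-contained stand-in, the sketch below fuses gyroscope and accelerometer samples with a basic complementary filter at the 20 Hz output rate. The filter gain, axis conventions, and all names are our assumptions, not details of the MediPost firmware.

```python
import numpy as np

FS = 20.0          # effective output rate after on-device filtering (Hz)
DT = 1.0 / FS
ALPHA = 0.98       # complementary-filter gain (assumed, not from the paper)

def sway_angles(gyro, acc):
    """Estimate pitch/roll sway angles (rad) from IMU samples.

    gyro : (N, 2) array of angular rates about the two horizontal axes (rad/s)
    acc  : (N, 3) array of accelerometer samples (m/s^2), z roughly vertical
    Returns an (N, 2) array of [pitch, roll]. This is a simplified stand-in
    for the Madgwick orientation filter used by the actual system.
    """
    angles = np.zeros((len(gyro), 2))
    pitch = roll = 0.0
    for i, (g, a) in enumerate(zip(gyro, acc)):
        # tilt implied by the gravity direction (noisy but drift-free)
        acc_pitch = np.arctan2(a[0], np.hypot(a[1], a[2]))
        acc_roll = np.arctan2(a[1], a[2])
        # integrate the gyro (smooth but drifting) and blend with the tilt estimate
        pitch = ALPHA * (pitch + g[0] * DT) + (1 - ALPHA) * acc_pitch
        roll = ALPHA * (roll + g[1] * DT) + (1 - ALPHA) * acc_roll
        angles[i] = (pitch, roll)
    return angles
```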
Determining the angular position of the MediPost device allows for the computation of the following measures, which were used for further analysis: the total length of trajectory, i.e., the length marked by a projection of the patient's center of pressure excursion; the surface of the COP excursion; and the maximum and mean angular velocity.

All patients underwent a rehabilitation program for four weeks, and each session lasted 60 min. The rehabilitation program was supervised by a physiotherapist [31]. In accordance with APTA guidelines, patients performed VRT based on Cawthorne-Cooksey exercises, which involved improving postural coordination and spatial orientation, as well as optokinetic training [32]. One particular vestibular exercise included augmented sensory feedback, and the target was to identify activity limitations and the patient's restrictions. All participants were fully informed about the aim of the study and the test procedure, and they gave informed consent. The study design was approved by the Ethics Committee of the Medical University of Lodz (RNN/136/16/KE, 10 May 2016).
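A minimal sketch of how such sway measures could be derived from the estimated sway angles is given below; the 95% confidence-ellipse definition of the "surface" measure and the projection lever arm are our assumptions, since the exact formulas are not spelled out here.

```python
import numpy as np

def sway_measures(angles, height_m=1.0, fs=20.0):
    """Compute the four sway measures used in the analysis from [pitch, roll] angles.

    angles   : (N, 2) array of sway angles in radians
    height_m : lever arm used to project angles to a planar COP-like excursion
               (assumed; the paper reports a projection of the COP excursion)
    """
    xy = height_m * np.asarray(angles)               # small-angle planar projection (m)
    steps = np.diff(xy, axis=0)
    length = np.sum(np.linalg.norm(steps, axis=1))   # total length of trajectory

    # "surface": area of the 95% confidence ellipse of the excursion (one common
    # convention; the paper does not state which definition it uses)
    cov = np.cov(xy.T)
    eigvals = np.linalg.eigvalsh(cov)
    surface = np.pi * 5.991 * np.sqrt(np.prod(eigvals))  # chi2(2, 0.95) = 5.991

    ang_speed = np.linalg.norm(np.diff(angles, axis=0), axis=1) * fs  # rad/s
    return length, surface, ang_speed.max(), ang_speed.mean()
```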
Statistical Analysis
Statistical analysis was performed using the R Project for Statistical Computing (ver. 3.6.3). The data are expressed as means ± standard deviation (SD) and were checked for normality with the Shapiro-Wilk test. The paired Student's t-test was used for the questionnaire and clinical test data and for the mobile posturography data, which were log-transformed before comparison. Differences were considered significant at a p-value < 0.05. The relative differences between the pre-VRT and post-VRT measures were correlated using Pearson's correlation.
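The described analysis pipeline can be reproduced with standard tools; the study used R 3.6.3, but an equivalent sketch in Python/SciPy (all variable names are placeholders) might look as follows.

```python
import numpy as np
from scipy import stats

def compare_pre_post(pre, post, log_transform=False):
    """Paired comparison of pre- vs. post-VRT measures, mirroring the analysis.

    Posturographic measures are log-transformed before the paired t-test;
    questionnaire and clinical-test scores are compared on the raw scale.
    """
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    if log_transform:
        pre, post = np.log(pre), np.log(post)
    w, p_norm = stats.shapiro(post - pre)   # normality of paired differences
    t, p = stats.ttest_rel(pre, post)
    return {"shapiro_p": p_norm, "t": t, "p": p}

# Relative pre/post differences of two measures can then be correlated, e.g.:
# r, p = stats.pearsonr(rel_diff_tug, rel_diff_medipost_cond4)
# (the paper says "Pearson's rank correlation"; stats.spearmanr would give
#  the rank-based alternative if that was intended)
```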
Results

Questionnaires
The total result of the DHI questionnaire was 53.9 points before VRT, which decreased by 33% to 36.3 points after VRT; the difference is statistically significant (p < 0.001). Improvement was visible in all subscales of the DHI, with the greatest change in the emotional subscale: 17.3 before VRT vs. 10.4 after, a 40% improvement.
In the VSS-sf, the mean score for the whole group at the initial examination was 19.7 points, which decreased significantly to 11.9 points after VRT (p < 0.001), a 40% reduction in the subjective intensity of symptoms. The results of the VSS-sf confirmed the improvement both in the patients' physical perception of vertigo, in the balance subscale, and in the emotional burden related to vertigo, in the anxiety subscale. The difference was greater (43%) in the balance subscale, which relates more to the physical perception of vertigo (Table 2).
Clinical Tests
The clinical tests revealed a statistically significant improvement in postural stability in all functional trials (Table 3). The greatest changes were noted in TUG, by 31%, and in DGI, by 13% (12.4 vs. 8.5 s, p < 0.001, and a mean score of 53.9 vs. 36.8 points, p < 0.001, respectively). In the initial TUG test, 45% of patients had a result of >12 s, which is interpreted as a high fall risk; after VRT, this group decreased to only 5% of patients at the final evaluation. The total time to complete TUG improved by almost 4 s after the intervention (mean 12.4 vs. 8.5 s, p < 0.001). Before VRT, 55% of the subjects were categorized as likely fallers, with a DGI result of ≤19 points, while after VRT, only 22.5% were in that category. Significant differences were also found in BBS and TT (mean scores of 49.9 vs. 52.5 points, p < 0.001, and 29.2 vs. 32.8 points, p < 0.001, respectively). The percentage improvement for those tests was 5% for BBS and 10% for TT. In the FR test, the average result improved by 3 cm, 12% more than at the initial evaluation, and the fall risk decreased by half in the 17.5% of subjects who reached more than 24 cm (Table 2).

(Table 3 notes: values are mean ± SD before and after VRT; condition 1: eyes open, firm surface; condition 2: eyes closed, firm surface; condition 3: eyes open, foam surface; condition 4: eyes closed, foam surface; paired Student's t-test on logarithmically transformed data; ***: significant at the <0.001 level, **: <0.01, *: <0.05, ns: no statistical significance.)

Figure 2 presents the percentage distributions of patients in the non-instrumental tests. In the DHI questionnaire, almost 88% of patients before VRT were classified as having a severe or moderate handicap, whereas after VRT, a mild handicap was noted in 40%. Initially, almost 78% of the group was classified as severe on the VSS-sf scale, while after treatment, 50% was in the mild category. The DGI test can divide patients into high and moderate risk of falls, covering 55% and 22.5% of the subjects before VRT, respectively; after therapy, 50% of patients fell into the no-fall-risk category. A high fall risk was noted in 45% of patients in TUG; after VRT, 95% of patients were in the low-fall-risk category. The BBS test categorized the population into three groups: wheelchair-bound, walking with assistance, and independent. In this study, one of the exclusion criteria was a motor disorder, which influenced the distribution of patients in this test by eliminating the wheelchair group; the distribution of the BBS did not change after therapy. Initially, in the TT test, 15% and 17.5% presented a high and moderate fall risk, respectively; after VRT, 82.5% had a low risk of falls.
Results of the Instrumental Measurements: MediPost Posturography
Four posturographic measures were selected for analysis: the length, surface, maximum angular velocity, and mean angular velocity of the COP displacement in time. The greatest differences between evaluations were observed in sensory condition 4, the most difficult (eyes closed on foam); those results were statistically significant at p < 0.01 for all analyzed measures. A decrease in all measures was also observed in condition 3, except for maximum angular velocity, where there was no difference after the intervention. Similarly, no differences in any of the analyzed measures were observed in the least sensory-challenging trial, condition 1 (eyes open on a stable surface) (Table 3).
Correlation of Improvement between Instrumental and Non-Instrumental Measurements
The results of the correlation between the questionnaires, clinical scales, and the objective measurement are presented in the matrix (Figure 3). A clinically significant correlation was observed between the TUG test and MediPost in condition 3 for all measures, in conditions 1 and 4 for the length of trajectory and the surface of COP, and in condition 1 for mean angular velocity (p < 0.01).
A moderate positive association was noted between the before-VRT mobile posturography measures in condition 2 and the after-VRT outcomes of the VSS, and a moderate association between conditions 3-4 and the after-VRT outcomes of the DHI questionnaire. Negative associations can be observed between conditions 1 and 3, in terms of the length of trajectory, the surface of COP, and the maximum and mean angular velocity, and the after-VRT outcomes of DGI and BBS, with the strongest association with the after-VRT Tinetti score. The full correlation matrix is presented in the Supplementary Material.
Discussion
This study reported the results of non-instrumental clinical tests and instrumental measurements obtained using the novel MediPost mobile device. The mobile posturography evaluation showed an improvement in postural stability in the study population, with differences depending on the test conditions. These results are useful in assessing the effectiveness of vestibular rehabilitation in patients with peripheral vestibular dysfunction.
Our group of patients suffered from vertigo, dizziness, and balance unsteadiness due to a lack of spontaneous compensation after unilateral peripheral impairment. The patients' stability was evaluated twice, before rehabilitation and one month after, in the same way, using non-instrumental (clinical tests) and instrumental tools (mobile posturography). A complete evaluation of patients with instability should include the psychological aspect of the disease, which was assessed in this study using the DHI and VSS-sf.
Based on a few high-quality randomized controlled trials, moderate to strong evidence exists that vestibular rehabilitation is a safe and effective management for unilateral peripheral vestibular dysfunction [31,33], and the results of this study are in line with this statement. Multiple studies have confirmed improvements in postural stability after VRT [6,34,35]. In patients with vertigo and balance disorders, Brown et al. demonstrated a reduction in the overall DHI score and a shortened TUG time after VRT, which was also observed in our study [36].
We conducted several clinical tests to investigate many aspects of stability, such as static and dynamic balance, gait assessment, and fall risk. The TUG and DGI tests evaluate dynamic balance and gait quality, while TT and BBS focus on static and dynamic tasks [2]. The FR test assesses only one condition of dynamic stability: the anterior displacement within the limits of stability. Among all the aspects of balance that were evaluated, we noted the greatest improvement in dynamic tasks, in the TUG test, by 31%. This test is less complicated than DGI, which influenced the result. Initially, the group of patients obtained quite good results in BBS and TT, and for this reason the improvement here was not as visible as in the tests with dynamic components; however, the results are still statistically significant. Many studies have stated that improvement in clinical tests means a lower risk of falls and, thus, better postural stability [32,35,37,38].
Patients with peripheral vestibular deficits often show instability during quiet stance tasks. It is well established that removing visual inputs by closing the eyes, or reducing the efficacy of lower-leg proprioceptive inputs by destabilizing the support surface, e.g., by using foam [39], increases the sensitivity of quiet-stance trials for detecting vestibular deficiencies. Lacour et al. also stated that vestibular inputs are more crucial for keeping balance in more challenging postural tasks, on unstable supports, with the eyes closed, or with moving surroundings [40].
The instrumental measurements, performed using MediPost mobile posturography with the sensor localized at the L5 level, assessed postural stability in different conditions, in which the noted grade of improvement varied, as reported in the literature [13,[39][40][41]. The results of condition 4 (standing on foam, eyes closed) were statistically significant for all MediPost measures, which is consistent with other studies [5,29,42]. The results of condition 1 (firm surface, eyes open) and condition 2 (firm surface, eyes closed) were clinically unsatisfactory.
In the literature, recovery in undemanding balance conditions was generally seen over weeks and months. However, recovery of dynamic postural function took more time, and compensation was incomplete in more challenging postural conditions [13,40]. One of the inclusion criteria in this study was a lack of spontaneous compensation within one month after unilateral peripheral impairment. During this period, spontaneous compensation could occur to some extent, as is seen in the results of the less challenging conditions 1 and 2. Basta et al. analyzed mobile posturography of daily life mobility and concluded that proprioceptive input had a greater impact on postural control than visual input during two-leg stance tasks for all age groups [41].
The main goal of VRT is to restore or improve dynamic functions [5]. Weight shifting in stance is used to improve center-of-gravity control and balance recovery [31]. Balancing with the eyes closed, or with somatosensory input altered by standing on foam, invokes changes in the base of support. These effects were also observed in this study, where the "more difficult" conditions 3 and 4 of the instrumental measurements could be used to diagnose vestibular dysfunction. Allum et al. noted that only two of the four commonly performed two-legged stance tasks (with eyes closed, on both normal and foam support) are worth recording for balance screening [39].
Our study analyzed the relationship between clinical scales and objective measurement. The only significant correlation was noted between the TUG test and MediPost, in several conditions and for some measures. By contrast, O'Sullivan et al. observed a correlation between BBS, TUG, and accelerometry [38]. The mean scores of the non-instrumental measurements in this study population also represent a relatively high-functioning group, and the ability to maintain balance with eyes open and eyes closed should be well within their control. Izquierdo et al. did not find a correlation between the DHI and static balance measurement; however, a greater correlation was noted with SwayStar, which could indicate that dynamic balance is perceived as more disabling by the study group.
The remaining analysis generally shows no clinically significant correlations between instrumental and non-instrumental measurements, even though an improvement is noted in both categories, which was also concluded in other publications [43][44][45][46]. Mbongo et al. did not observe correlations between DHI and dynamic posturography in patients with unilateral vestibular loss [44]. Yip and Strupp did not find correlations between the DHI and caloric tests, cervical/ocular vestibular-evoked myogenic potentials, and posturographic measures [45]. The lack of correlation between the above-mentioned methods may be explained by subjective bias in the clinical scales performed by the physician and the self-performed scales completed by the patient, insensitivity to mild impairments (ceiling effects), and poor reliability. Objective measures are free of such bias. Furthermore, the clinical tests are related in that they all assess various aspects of balance and mobility, but they cover different domains of balance, such as the risk of falls, gait performance, and dynamic aspects of balance. Therefore, although an improvement may be noted, a correlation might not be observed. The literature suggests that the correlation between these measurement methods is influenced by unaccounted factors, mostly at the socio-behavioral level [34,45]. Both non-instrumental and instrumental measures have their own strengths and limitations, and it is important to use a combination of both types of measures to gain a comprehensive understanding of an individual's balance dysfunction.
The cost and size of mobile devices make them easy to use at home for continuous monitoring and home rehabilitation, which opens several valuable prospects for clinicians in telemedicine and telerehabilitation. Low-cost tools for VRT monitoring and screening, compared to classic static or dynamic posturography, can make this a more widely available method. In the future, the technological migration of wearable sensors from the laboratory to the domestic environment is needed [47]. The wearable nature of such systems may open the way to assessing balance not only in static conditions but also during dynamic daily activities, where normal posturography cannot be used.
Conclusions
Clinical tests and posturographic measurements using the mobile MediPost system provide an assessment of patients with peripheral vestibular dysfunctions. This research demonstrated the possibility of using an instrumental test (MediPost) as an alternative method to evaluate balance deficits. Ongoing development and testing of inertial sensors are necessary before employing the technology as a replacement for current clinical tests. There are some limitations to this study. The study group was homogeneous and involved only patients with unilateral vestibular dysfunction. Furthermore, we performed instrumental measurements only in a quiet stance. Further research should include larger populations of patients with balance problems, with age-matched controls and more dynamic tests.

Informed Consent Statement: Informed consent was obtained from all subjects involved in this study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to restrictions imposed by the funding institution.
Association between different screening strategies for SARS-CoV-2 and deaths and severe disease in Italy
The WHO recommends testing any person with suspected Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2) infection, in order to limit the spread of the epidemic. In Italy, some Regions opted for extensive testing, whereas others limited tests to selected subjects. To assess the influence of the different strategies, we examined the incidence of death and severe cases across Italy.
METHODS
We retrieved daily data on new cases of SARS-CoV-2 (i.e., individuals with positive test results), the number of tests (reverse transcription polymerase chain reaction; RT-PCR) performed, deaths, and admissions to Intensive Care Units (ICU) in each Region, from 24 February to 18 March 2020, obtained from the Health Ministry website. 2 Demographic, socioeconomic and healthcare organisation data were retrieved from the National Institute of Statistics (ISTAT). 3 As an index of the different screening strategies, the ratio of positive test results to total tests (P/T) as of 7 March 2020 was considered. The subsequent evolution of the epidemic was assessed through the cumulative numbers of deaths and of new severe cases between 23 and 25 March, inclusive; the latter were defined as a composite of death and admission to ICU. This work is based on publicly available data, needing no ethical approval.
The association of those two outcomes with the P/T ratio at 7 March was assessed using a linear regression model. For each confounder significantly associated with the outcomes, multivariate linear regression models were applied to assess the independent contribution of P/T, assuming two-sided P < .05 as significant.
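The analysis itself was run in SPSS (see below); an equivalent sketch of the uni- and multivariate linear models in Python/statsmodels, with placeholder variable names, might look as follows.

```python
import numpy as np
import statsmodels.api as sm

# ptr: positive/total-test ratio per Region at 7 March 2020
# outcome: cumulative deaths (or severe cases) per Region, 23-25 March
def regress(outcome, ptr, confounders=None):
    """Linear regression of an outcome on the P/T ratio, optionally adjusted
    for confounders (e.g., income, GP density), mirroring the described models."""
    X = np.column_stack([ptr] + (confounders or []))
    X = sm.add_constant(X)
    return sm.OLS(outcome, X).fit()

# model = regress(deaths, ptr, confounders=[income, gp_density])
# print(model.summary())   # two-sided p < .05 taken as significant
```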
Analyses were performed with SPSS 25.0 (SPSS Inc., Chicago, IL, USA).

RESULTS

Deaths and severe cases were associated with higher mean personal income and lower density of General Practitioners (GPs); deaths were also associated with higher population density.
DISCUSSION
A higher ratio of total tests to positive results, reflecting a more aggressive screening strategy for SARS-CoV-2, was associated with lower rates of death and severe disease across the Regions of Italy. Although the association was maintained after adjusting for some confounders, the possibility of other unaccounted confounders cannot be excluded. Moreover, the results are specific to the time range considered.
Results obtained in different countries cannot be directly compared, because of the inevitable impact of many confounders.
However, it should be noted that South Korea, which had a very aggressive testing strategy, managed to limit mortality much more effectively than other countries, despite a very early epidemic outbreak. In Europe, Germany, with a lower positive/total test ratio, had much better outcomes than Italy, France, the UK and Spain, all of which took more conservative approaches to testing. 4 In the US, of the first two States originally reached by the epidemic, Washington, which adopted a more aggressive testing strategy, with a lower positive/test ratio, contained the overall number of deaths more effectively than New York State. 5

Testing per se is not a therapy or a preventive tool against infection.
Nevertheless, a more extensive testing strategy allows for the identification of a greater number of asymptomatic or oligosymptomatic cases, which can then be appropriately isolated, containing the spread of the contagion. Unfortunately, the actual implementation of tracking and isolation of cases cannot be easily quantified; the number of tests, or more appropriately the positive/total test ratio, can be considered a proxy for such isolation strategies. It is also possible that early detection of infection allowed for more effective treatment of the disease, although the choice of therapies in milder or early cases of COVID-19 is still very controversial. 6

During a pandemic, policy-makers often have to take very difficult decisions on the basis of very little evidence. Considering the available data, the use of financial and organisational resources for a rapid development of testing capacity appears to be a good option for countering the COVID-19 pandemic.
FUNDING INFORMATION
This research was performed as a part of the institutional activity of the unit, with no specific funding. All expenses, including salaries of the investigators, were covered by public research funds assigned to the unit.
DATA AVAILABILITY STATEMENT
The corresponding author had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Population-specific material properties of the implantation site for transcatheter aortic valve replacement finite element simulations
Patient-specific computational models are an established tool to support device development and test under clinically relevant boundary conditions. Potentially, such models could be used to aid the clinical decision-making process for percutaneous valve selection; however, their adoption in clinical practice is still limited to individual cases. To be fully informative, they should include patient-specific data on both anatomy and mechanics of the implantation site. In this work, fourteen patient-specific computational models for transcatheter aortic valve replacement (TAVR) with balloon-expandable Sapien XT devices were retrospectively developed to tune the material parameters of the implantation site mechanical model for the average TAVR population. Pre-procedural computed tomography (CT) images were post-processed to create the 3D patient-specific anatomy of the implantation site. Balloon valvuloplasty and device deployment were simulated with finite element (FE) analysis. Valve leaflets and aortic root were modelled as linear elastic materials, while calcification as elastoplastic. Material properties were initially selected from literature; then, a statistical analysis was designed to investigate the effect of each implantation site material parameter on the implanted stent diameter and thus identify the combination of material parameters for TAVR patients. These numerical models were validated against clinical data. The comparison between stent diameters measured from post-procedural fluoroscopy images and final computational results showed a mean difference of 2.5 ± 3.9%. Moreover, the numerical model detected the presence of paravalvular leakage (PVL) in 79% of cases, as assessed by post-TAVR echocardiographic examination. The final aim was to increase accuracy and reliability of such computational tools for prospective clinical applications.
Introduction
Patient-specific computational models of cardiovascular procedures allow virtual simulation of the interaction between devices and the specific individual implantation site, taking into account anatomical and physiological information from the subject (Taylor and Figueroa, 2009). These models are becoming an important tool to support cardiovascular device development, in particular for testing new designs in clinically relevant boundary conditions (Schievano et al., 2010b), but also as a clinical pre-procedural assessment methodology to prospectively aid the decision-making process (Bosi et al., 2015;Cosentino et al., 2015;Schievano et al., 2010a;Wang et al., 2015). However, use of such methods in clinical practice is still limited to individual cases, mainly due to the lack of large-scale validation studies and the need for more accurate methodologies to capture the patient-specific mechanical response to device deployment. Indeed, whilst cardiovascular imaging enables accurate representation of the 3D anatomy, current techniques do not allow acquisition of the patient-specific in vivo mechanical characteristics. Response to device deployment depends not only on the material properties of the implantation site itself, but also on the presence of surrounding structures (Kim et al., 2013), thus limiting in some contexts the value of ex-vivo data from arterial tissue (Avril et al., 2010;Badel et al., 2011;Cabrera et al., 2013;Flamini et al., 2015;García-Herrera et al., 2013;Li et al., 2008;Ning et al., 2010;Nolan, 2012;Veljković et al., 2014). In addition, non-invasive, inverse computational methods, based on the simultaneous acquisition of pressure gradients and diameters (De Heer et al., 2012;Hamdan et al., 2012;Karatolios et al., 2013;Masson et al., 2008;Schlicht et al., 2013;Schulze-Bauer and Holzapfel, 2003;Smoljkić et al., 2015;Wittek et al., 2013;Zeinali-Davarani et al., 2011), can describe the patient-specific behaviour during the cardiac cycle, but not at the overload caused by device expansion (Bosi et al., 2015;Bosi et al., 2016a,b).
Transcatheter aortic valve replacement (TAVR) is an established technique to treat severe aortic valve stenosis in high surgical risk patients (Zajarias and Cribier, 2009). TAVR is an ideal setting to advance the field of patient-specific modelling, as the substrate of the TAVR population, with highly calcified implantation sites, presents fewer variations in terms of mechanical properties compared to other cardiovascular sites or patient groups (Pham et al., 2017). TAVR outcomes depend on appropriate patient assessment (Kalogeras, 2012;Ruparelia and Prendergast, 2015), and complications such as paravalvular leak (PVL) (Azadani et al., 2009;Tamburino et al., 2011) and the onset of conduction abnormalities leading to permanent pacemaker implantation (Binder et al., 2013;Bleiziffer et al., 2010) remain common, therefore warranting a patient-specific computational approach to enhance patient selection (Schoenhagen et al., 2011;Taylor and Figueroa, 2009;Vy et al., 2015). A few patient-specific computational models are already available in the literature for TAVR (Bianchi et al., 2016;Capelli et al., 2012;Gunning et al., 2014;Morganti et al., 2014;Sirois et al., 2011;Sturla et al., 2016;Wang et al., 2015;Wu et al., 2016), but the choice of material parameters for the implantation site remains open (Tseng et al., 2013).
The aim of this work was the development of a computational framework for TAVR simulations, which included the patient-specific anatomical site and a population-specific mechanical response, based on a retrospective clinical study. The implantation site material parameters were adjusted in order to minimise the error between the computational prediction and the clinical results in terms of implanted stent diameter, as measured from post-procedural fluoroscopy images. The resulting computational model has increased accuracy and reliability for prospective clinical applications.
Materials and methods
Pre-procedural clinical images from a selected TAVR population were processed to create patient-specific finite element (FE) models and simulate the intervention. Post-implantation fluoroscopy images were used to tune the material properties of the FE implantation site model and echocardiography images to validate the computational results with clinical outcomes. The FE analyses were performed using Abaqus 6.14/Explicit (Dassault Systèmes Simulia Corp., Providence, RI, USA) under the hypothesis of quasi-static conditions.
Patient population and image analysis
Fourteen patients (age at intervention = 79.3 ± 8.0 years, 9 males; Table 1), who underwent successful TAVR with the Edwards Sapien XT device at the Heart Hospital (London, UK) between October 2013 and November 2014, were retrospectively selected for this study. One patient received the 23 mm device, nine the 26 mm and four the 29 mm. In all patients, the Sapien XT device was implanted in the sub-coronary position, one third below the annulus of the native aortic valve (AV), according to guidelines.
The stent expansion diameter was measured at valve level from the post-implantation fluoroscopy images, which were acquired in a plane parallel to the axis of the stent in a lateral projection. Although a circular cross-section can be assumed for the Sapien device deployed by means of a high-pressure balloon (Tseng et al., 2013), the computational model was first reoriented into the same projection as the fluoroscopy images before measuring the projected distance at the level of the TAVR valve for comparison. PVL was assessed immediately post-TAVR by echocardiography and was present in twelve cases: nine trivial and three mild. The position of the jet was evaluated by dividing the aortic valve cross-section into thirds according to the valve cusp positions (right coronary, left coronary and non-coronary cusp).
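As an illustration of the reorientation-and-projection step, a minimal sketch is given below; the original measurement procedure is not published as code, so the view direction, the node selection, and the largest-extent definition of diameter are our assumptions.

```python
import numpy as np

def projected_diameter(nodes, view_dir):
    """Width of a set of stent node coordinates seen along a fluoroscopy
    view direction, measured in the projection plane.

    nodes    : (N, 3) coordinates of stent nodes at the valve level
    view_dir : unit vector along the X-ray beam for the lateral projection
    """
    view_dir = np.asarray(view_dir, float)
    view_dir /= np.linalg.norm(view_dir)
    # build an orthonormal basis (u, v) for the projection plane
    tmp = np.array([1.0, 0.0, 0.0])
    if abs(view_dir @ tmp) > 0.9:
        tmp = np.array([0.0, 1.0, 0.0])
    u = np.cross(view_dir, tmp); u /= np.linalg.norm(u)
    v = np.cross(view_dir, u)
    pts2d = nodes @ np.column_stack([u, v])
    # diameter taken as the largest in-plane extent, assuming a circular section
    return np.ptp(pts2d, axis=0).max()
```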
TAVR FE model
The aortic roots and leaflets reconstructed from CT were meshed with 4-node general-purpose shell elements with reduced integration (1 mm average size) after a sensitivity analysis (Finotello et al., 2017), whilst the calcific plaques were discretised with 4-node tetrahedral elements (0.5 mm average size). The implantation sites were constrained at their distal and proximal extremities to avoid rigid motion. Tie constraints were applied between the inner aortic surface and the external edge of the leaflets. The same constraints were applied to the calcific deposits and their respective leaflets, or the ascending aortic wall when present.
Literature data from experimental tests of ex-vivo specimens (Billiar and Sacks, 2000;Dunmore-Buyze et al., 2002;Durmaz et al., 2010;Grande et al., 1998;Hamdan et al., 2012;Maleki et al., 2014;Walraevens et al., 2008;Weinberg et al., 2009) (Table 2) were used to set up the initial implantation site material model, considering the characteristics of the TAVR patient population: old age and a highly calcified, stiff implantation site with no visible deformations during the cardiac cycle. Considering that the mechanical response to stent implantation depends not only on the arterial tissue but also on the surrounding structures (Kim et al., 2013), and that the contribution of the two cannot be discerned from in vivo data, a simple linear elastic law was adopted rather than a more realistic and complex description (heterogeneous, non-linear, anisotropic) (Gasser et al., 2006;Holzapfel and Gasser, 2001), for both the arterial wall and the leaflets, with the stiffest range of available properties taken as the most representative for TAVR patients: E artery = 22.6 MPa (Hamdan et al., 2012) and E leaflets = 8.75 MPa (Weinberg et al., 2009). The same considerations led to the choice of tissue thicknesses (t): t artery = 2.8 mm (Dunmore-Buyze et al., 2002) and t leaflets = 2 mm (Grande et al., 1998).
Device geometries were generated from micro-CT scans (Metris X-Tek HMX ST 225 CT, Nikon Metrology, Belgium). The zigzag elements and vertical bars of the stents were meshed using beam elements with a rectangular section profile (0.6 mm radial thickness and 0.38 mm circumferential width), whilst a circumferentially wider rectangular section was assigned to the larger bars (1.15 mm circumferential width). After a sensitivity analysis, the average length of the beam elements was set to 0.7 mm. The stent cobalt-chromium alloy (MP35N) was modelled as a homogeneous, isotropic, elasto-plastic material (Table 2). The biological valve mounted into the TAVR device was neglected in the FE model, as the interaction between the stent and the implantation site was the focus of this study (Bailey et al., 2016). The balloon used to perform balloon valvuloplasty (BAV) just before the TAVR procedure and to expand the Sapien XT device is a non-compliant PET balloon, with a nominal inflation pressure of 5 atm (0.507 MPa). The balloon was designed in the expanded configuration and meshed with 4-node membrane elements (average size 0.5 mm in the longitudinal direction and 0.38 mm in the circumferential direction). PET was described as a homogeneous, isotropic, linear-elastic material (Table 2).
A general contact algorithm was adopted between the different parts of the system, with a hard contact property. A preliminary simulation was carried out to open the central portion of the aortic leaflets and allow insertion of the balloon and device. The balloon and stent models were placed coaxially to the patient-specific implantation site models. The balloon, constrained at its distal and proximal ends in the circumferential and radial directions in order to mimic the bond to the catheter, and in the circumferential and longitudinal directions at the central circumference to avoid rigid motion, was deflated to allow insertion into the patient-specific implantation sites; BAV was then replicated by inflating the balloon to nominal pressure and deflating it again. The stent models were then crimped onto the balloon to the size of the delivery catheters using radial displacements applied to a coaxial cylindrical surface (surface elements, average size 0.5 mm). The stent expansion was simulated by inflating and deflating the balloon as done for BAV.
A previously described Matlab (MathWorks, MA, US) function (Bosi et al., 2015) allowed quantification of the interaction between the device and the implantation site as a surrogate measure for PVL. The post-TAVR FE model was cut along its length every 0.5 mm, and the cross-sectional images were analysed to identify the areas lacking contact. Only continuous gaps along the length of the stent were then considered to indicate potential PVL (Fig. 2). The position of PVL jets from the available echocardiography images was compared to the contact gaps in the corresponding computational cross-section.
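The continuity criterion at the core of this check can be illustrated as follows. The original code is a Matlab function; this Python sketch is a simplified re-implementation that assumes the per-slice contact state has already been binned by angular position, and it ignores channels that shift angularly between slices, which the actual analysis can follow (e.g., the two merging channels described below for patient 2).

```python
import numpy as np

def continuous_gaps(contact, min_width_bins=1):
    """Flag angular sectors with no stent/tissue contact in *every* slice.

    contact : (n_slices, n_angle_bins) boolean array, True where the stent
              apposes the implantation site in that slice/angular bin
    Returns a boolean mask over angular bins marking candidate PVL channels
    (gaps continuous along the whole stent length). Slice extraction and all
    thresholds are omitted; only the continuity criterion is shown.
    """
    gap_everywhere = ~contact.any(axis=0)   # bins open in all slices
    # optionally require a minimum angular width of the channel
    if min_width_bins > 1:
        kernel = np.ones(min_width_bins)
        runs = np.convolve(gap_everywhere.astype(int), kernel, mode="same")
        gap_everywhere &= runs >= min_width_bins
    return gap_everywhere
```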
Statistical analysis of aortic root material parameters
A statistical sensitivity analysis was performed to assess the influence of the unknown material parameters adopted to describe the implantation site model on the simulation results, using a design of experiments (DOE) approach (Design-Expert software 10, Stat-Ease, Inc., Minneapolis, USA). DOE allows estimation of the effects of the variation of one or more factors on single or multiple output responses, and determination of which factors have a significant effect on the response. In this study, a two-level fractional factorial design was adopted to investigate the main and/or interaction effects of six input factors (E artery, t artery, E leaflet, t leaflets, E calcium and Yield calcium), run at two levels each (minimum and maximum values from literature data), on one output response (the stent diameter after balloon deflation). The material and thickness parameters initially adopted for the implantation site FE model were used to set the upper level for the two-level fractional factorial experiment, while the range identified from the literature was used to set the lower level, i.e., E artery = 3 MPa, t artery = 1 mm (Walraevens et al., 2008), E leaflets = 4 MPa (Weinberg et al., 2009), t leaflets = 0.5 mm (Billiar and Sacks, 2000). For the calcific deposits, the minimum values were chosen as E calcium = 100 MPa (Ebenstein et al., 2009) and Yield calcium = 0.1 MPa (Gastaldi et al., 2010).
Patient 13 was selected as representative of the TAVR population, with an average degree of calcification (824 mm³, compared to the population average of 726 ± 503 mm³) and without particularly irregular anatomy; moreover, in the first run of simulations with the initial material properties, patient 13 had a diameter difference between the simulated and actual values of −5.6%, the closest to the average error for the population.
Sixteen simulations were performed, with different combinations of input values, as indicated by DOE, with resolution IV (Saleem and Somá, 2015). A Pareto Chart was used to display the standardised effect (t-value) of each input term, i.e. factor or combination of factors, on the outcome parameter. T-values above the Bonferroni limit identified effects with almost certain influence, whilst those above the t-value limit indicated effects with possible influence. Analysis of Variance (ANOVA) was performed to assess the effects of the factors and factorial interactions on the output response and refine the values of only the significant factors. The results of the statistical analysis led to a new set of material parameters that were implemented in the FE model to re-run the patient-specific simulations. The new computational results were then compared with the diameters measured from fluoroscopy images and with echocardiography clinical outcomes.
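For concreteness, a 16-run, resolution IV design of this kind can be generated and analyzed along the following lines (Python). The generator choice E = ABC, F = BCD is one standard resolution IV option and is our assumption; Design-Expert may use a different but equivalent generating set.

```python
import itertools
import numpy as np

FACTORS = ["E_artery", "t_artery", "E_leaflet", "t_leaflet",
           "E_calcium", "Yield_calcium"]

# 2^(6-2) fractional factorial (16 runs), resolution IV, generators E=ABC, F=BCD
base = np.array(list(itertools.product([-1, 1], repeat=4)))  # columns A..D
A, B, C, D = base.T
design = np.column_stack([A, B, C, D, A * B * C, B * C * D])

def main_effects(design, response):
    """Estimated main effect of each factor on the response
    (mean response at the high level minus mean at the low level)."""
    response = np.asarray(response, float)
    return np.array([response[col == 1].mean() - response[col == -1].mean()
                     for col in design.T])

# response: simulated stent diameter after balloon deflation for each run;
# the coded -1/+1 levels map to the low/high material parameter values above.
# effects = main_effects(design, response)
```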
Results
Patient-specific FE simulations were successfully completed for all 14 cases; Fig. 3 shows an example of the expansion phases of the valvuloplasty balloon and of the stent-balloon system.
In the first set of simulations, with the stiffest material properties and highest thicknesses used to describe the implantation site, the average diameter at the end of balloon deflation was 23.4 ± 1.3 mm. Compared to the fluoroscopy image measurements (average diameter 24.7 ± 1.6 mm, Table 1), there was a mean difference of −5.3 ± 5.7%, with a maximum error of −13.8% recorded for patient 3. The Bland-Altman plots (Fig. 4a) show that most FE stents were under-expanded compared to their clinical counterparts, thus suggesting that the material chosen for the TAVR implantation site was too stiff.
The Pareto chart for patient 13 shows the t-value for each effect (Fig. 5), including factors and interaction of factors, where the Bonferroni limit was 3.8 and t-value limit was 2.2. The most significant factor was the leaflet thickness, followed by the Young's modulus of the arterial wall and the leaflets, the combination of the first two, and the arterial thickness (Table 3). Standard error was 0.15 for every factor. The model F-value was 15.93 implying statistical significance, i.e. 0.02% chance that an F-value this large could occur due to noise.
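The two reference lines of the Pareto chart can be reproduced as below; the Bonferroni limit is commonly taken as the t-critical value with the significance level divided by the number of effects tested, which we assume matches Design-Expert's convention.

```python
from scipy import stats

def pareto_limits(n_effects, df_resid, alpha=0.05):
    """Reference lines for a Pareto chart of standardized effects:
    the ordinary t-value limit and the Bonferroni-corrected limit."""
    t_limit = stats.t.ppf(1 - alpha / 2, df_resid)
    bonferroni = stats.t.ppf(1 - alpha / (2 * n_effects), df_resid)
    return t_limit, bonferroni

# Standardized effects (t-values) above the Bonferroni line indicate almost
# certain influence; those between the two lines indicate possible influence.
```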
Given the DOE results, a new set of refined material parameters was found (Table 4), and the FE analyses for all patients were run again accordingly. The stent diameter percentage difference was 2.53 ± 3.88%, with nine cases of over-expansion and five cases of under-expansion, thus centering the distribution (Fig. 4b). The maximum over-expansion error was 11.2% (patient 11) and the maximum under-expansion error −2.7% (patient 2).
Two examples are reported in the figure to highlight PVL detection: in patient 14, the algorithm did not find any continuous gap along the length of the virtually implanted stent (Fig. 6a); a partial gap is highlighted by the red asterisks in the proximal portion, but it is interrupted in the middle portion of the stent. Indeed, post-TAVR echocardiography did not show PVL. On the contrary, for patient 2 (Fig. 6b), the post-TAVR echocardiography highlighted one trivial jet of PVL in the non-coronary cusp (NCC); the corresponding Matlab graph showed two channels starting from the distal portion of the stent and coming together in one towards the NCC. Overall, the computational framework correctly indicated the presence/absence of PVL in 79% of cases (n = 11), 4 under-expanded and 7 over-expanded. The other three patients (4, 11 and 12) presented a trivial jet at echocardiographic examination, with patient 11 showing the largest error in terms of stent diameter prediction (11.2%). All patients who did not have PVL were correctly identified by the code.
Discussion
In this work, a computational framework for TAVR implantation of the Edwards Sapien XT device was developed and tested in 14 retrospective cases. The implantation site computational model was designed based on the patient-specific anatomy and refined in terms of material properties to replicate the TAVR population mechanical response to device implantation using clinical data.
Patient aortic root and calcified leaflet morphology can nowadays be retrieved from pre-assessment CT. However, information on each patient's mechanical response to device expansion cannot be extracted from in vivo pre-procedural clinical investigations, so literature data from ex-vivo experimental tests are usually adopted in computational models. To further refine the selection of the material properties of the different biological structures (arterial wall, aortic leaflets and calcific deposits) for the specific TAVR population, elderly patients with severe aortic stenosis, we first initialized the FE model with the stiffest and thickest data available from the literature. Then, a statistical approach conventionally used in engineering to optimise product design was adopted for the first time in this context to investigate the main effects of the six unknown material parameters used to describe the implantation site in a selected patient, considered representative of the entire cohort. It would be interesting to run this DOE analysis on many TAVR cases to quantify the differences from the optimized parameters found here, and to simultaneously optimize the material for more than one case, thus hopefully reducing the error from clinical measurements even further. The leaflet thickness was the most significant factor affecting the computational results, followed by the Young's modulus of the artery and the leaflet.
A second set of simulations for all patients, using the refined material parameters for the implantation site, resulted in lower differences between the computational results and the clinical stent diameters, thus demonstrating the outcome of the optimisation process. The population-specific material parameters identified in this study for this cohort of TAVR cases will be further tested in the future on a larger set of patients, to capture potential further variability and to prove reliability also with different devices.
A post-processing Matlab code was used to automatically analyse the computational results in terms of the geometric interaction between the implanted TAVR stent and the deformed implantation site. Based on purely geometric information, it was possible to derive some considerations on the potential development of jets of PVL, on the hypothesis that PVL is caused by suboptimal apposition of the device onto the anatomical site. The algorithm was able to capture the presence/lack of PVL in 79% of patients, thus attesting its specificity. Moreover, the code was able to identify the PVL jet location of origin in 67% of cases. In clinical practice, PVL severity is assessed with measurements from echo colour Doppler (Sinning et al., 2013) on a scale from trivial, through mild and moderate, to severe, depending on the regurgitant jet dimensions and length during the diastolic phase. This quantification is technically challenging and highly operator dependent, as different cross-sectional views of the device might result in highly different degrees of PVL, both in terms of severity and in terms of position. With the purposely developed code, we aimed to provide an objective quantification of PVL, derived purely from geometrical considerations; although this parameter is difficult to quantify clinically, the location was considered correctly identified if the code found the jet in the same third of the aortic valve, since no more precision is achievable from echocardiography images. It has to be underlined that the computational results provide merely static geometrical information about the stent-implantation site interaction, while the PVL jet might move during the diastolic phase, making it even more difficult for the operator to report its exact location. In the three cases in which the code inappropriately identified PVL, there was over-expansion (two), but also one case of under-expansion of the stent model compared to the actual diameter. Therefore, there is no clear association between the ability to predict PVL and correct prediction of the implanted stent diameter.
In terms of limitations of this study, the first consideration concerns the measurements from fluoroscopy images, which are prone to calibration errors and rely on the assumption of cross-sectional circularity. Simultaneous biplane fluoroscopy images could improve the measurement acquisition and would allow 3D reconstruction of the implanted stent (Cosentino et al., 2014). DOE was carried out on a selected patient: this could be repeated for other patients to test the parameter settings of the calcified aortic root/LVOT derived from the patient cohort, and the values of the parameters could be averaged to minimise the errors further. In the specific clinical setting of severe aortic valve stenosis, a population-specific approach to modelling the mechanical response of the implantation site to device deployment was considered acceptable, as small variations are present in this group of patients. In the future, advanced imaging modalities combined with computational modelling may allow for further personalisation of the model.
Additional computational fluid dynamic analysis could help study the local flow conditions and quantify the severity of PVL. In the future, refinements of the Matlab algorithm will improve identification of PVL location and introduce a measure for the degree of regurgitation by analysing gap areas and geometrical complexity. The methodology, if successfully validated, would allow the evaluation of PVL severity without using additional high-computational-cost analyses.
Simplicity and speed of computation have been the main drivers for the model developed here, which is not meant to derive accurate localised stress/strain information in the arterial wall or stent, but is designed to provide fast, clinically meaningful predictive information (e.g., stent diameter and possible onset of PVL) built from routinely acquired clinical diagnostic data. The small discrepancy between the computational results and the clinical measurements achieved with the described model in this specific patient population demonstrates that, despite its simplicity, the computational framework could be used for this purpose.
Conclusions
In this work, we developed a patient-specific computational framework to virtually simulate TAVR procedures. Two main objectives were achieved: the tuning of a set of material/thickness parameters able to describe the implantation site response to TAVR for the TAVR patient population, and the validation of the numerical model over a small cohort of patients. This computational framework could be used on one side to aid the design and test of new TAVR devices in validated implantation sites, and, on the other, to enhance the assessment of patients selected for TAVR.
Ultra-faint dwarfs in a Milky Way context: Introducing the Mint Condition DC Justice League Simulations
We present results from the "Mint" resolution DC Justice League suite of Milky Way-like zoom-in cosmological simulations, which extend our study of nearby galaxies down into the ultra-faint dwarf regime for the first time. The mass resolution of these simulations is the highest ever published for cosmological Milky Way zoom-in simulations run to $z=0$, with initial star (dark matter) particle masses of 994 (17900) M$_\odot$, and a force resolution of 87 pc. We present initial results from these simulations, focusing on both satellite and near-field dwarf galaxies. We find that the simulated dwarfs and ultra-faint dwarfs (UFDs) reproduce the observed structural and dynamical properties of galaxies with $-3>M_V>-19$. We predict the vast majority of nearby galaxies will be observable given the surface brightness limits of the Vera Rubin Observatory's co-added Legacy Survey of Space and Time (LSST). We additionally show that faint dwarfs with velocity dispersions $\lesssim5$ km/s result from severe tidal stripping of the host halo. These simulations allow us to investigate quenching of UFDs in a hydrodynamical Milky Way context for the first time. We find that the majority of the UFDs are quenched prior to interactions with the Milky Way, though some of the quenched UFDs retain their gas until infall. Additionally, these simulations yield some unique dwarfs that are the first of their kind to be simulated, e.g., an HI-rich field UFD, a late-forming UFD that has structural properties similar to Crater 2, as well as a compact dwarf satellite that has no dark matter at $z=0$.
INTRODUCTION
In recent years, many simulations have focused on the dwarf galaxy regime to test our understanding of galaxy formation. Not only are dwarf galaxies the closest galaxies to the Milky Way, but their smaller potential wells make them more sensitive tests of our physical models.
Most dwarf galaxy simulations have focused on galaxies with M_star ≳ 10^{5−6} M_⊙, in the mass range of the Milky Way's "classical dwarf" satellite galaxies. With these simulations, we have greatly improved our understanding of galaxy formation, thanks to advances in resolution, the detailed modeling of relevant physical processes, and a consideration of observational biases. For example, simulations in a ΛCDM universe can now explain the number, distribution, and central densities of classical Milky Way satellites (e.g., Zolotov et al. 2012; Brooks et al. 2013; Brooks & Zolotov 2014; Wetzel et al. 2016; Sawala et al. 2016; Tomozeiu et al. 2016; Santos-Santos et al. 2018; Garrison-Kimmel et al. 2019a). Various simulations explain both the diversity of dwarf galaxy star formation histories in the Local Group as well as average mass-dependent trends (e.g., Benítez-Llambay et al. 2015; Wetzel et al. 2016; Wright et al. 2019; Buck et al. 2019; Digby et al. 2019; Garrison-Kimmel et al. 2019b). Additionally, many simulations reproduce a variety of other scaling relations in this mass range, such as the stellar mass-halo mass, Tully-Fisher, and mass-metallicity relations (e.g., Munshi et al. 2013; Vogelsberger et al. 2013; Shen et al. 2014; Christensen et al. 2016, 2018; Brook et al. 2016; Brooks et al. 2017; El-Badry et al. 2018; Santos-Santos et al. 2018). While many open questions remain, our ability to model galaxies in the classical dwarf regime has dramatically improved in the last decade, and we have successfully explained myriad properties of observed galaxies.
The advent of digital sky surveys has led to the rapid discovery of dozens of new dwarf galaxies around the Milky Way (see Simon 2019, for a recent review), largely in the regime of the ultra-faint dwarfs (UFDs; M_V fainter than −8, and M_star ≲ 10^5 M_⊙). In this subclassical regime, however, our understanding is incomplete, and more work must be done to replicate the successes seen in simulating higher mass dwarfs.
Given the pace of discovery, there is still a large uncertainty in the number and distribution of these faint dwarfs. Different assumptions about survey completeness and the underlying halo distribution lead to estimates differing by nearly an order of magnitude in the predicted number of satellites (Simon & Geha 2007;Tollerud et al. 2008;Hargis et al. 2014;Newton et al. 2018;Jethwa et al. 2018;Drlica-Wagner et al. 2020;Nadler et al. 2020). Predictions for the Milky Way satellite distribution are influenced by uncertainties in the connection between halos and galaxies, such as the relationship between stellar mass and halo mass in small halos (e.g., Garrison-Kimmel et al. 2017;Munshi et al. 2017;Read & Erkal 2019;Rey et al. 2019), or the surface brightnesses-and therefore detectability-of galaxies in low-mass halos (e.g. Bullock et al. 2010;Wheeler et al. 2019). Differing assumptions about which halos can host galaxies can even lead to a "too few satellites" problem (Kim et al. 2018;Graus et al. 2019), in which there are more Milky Way satellites than theoretically expected, reversing the decades-old Missing Satellites Problem (Klypin et al. 1999;Moore et al. 1999).
The star formation histories (SFHs) and quenching mechanisms of UFD galaxies are also uncertain. It has been suggested that ultra-faint dwarf galaxies are fossils of reionization (Bovill & Ricotti 2009), having been quenched via gas heating during reionization (e.g., Bullock et al. 2000;Benson et al. 2002;Somerville 2002). Early quenching is consistent with observations of some UFDs (e.g., Brown et al. 2014;Weisz et al. 2014), but all UFDs with constrained star formation histories are close to the Milky Way or M31, complicating any interpretation. Previous simulations of isolated UFDs (e.g., Fitts et al. 2017;Jeon et al. 2017;Wheeler et al. 2019) are consistent with reionization quenching. However, simulations must be able to simultaneously explain the apparent early quenching of most UFDs, along with the existence of UFDs hosting recent star formation, such as Leo T (Irwin et al. 2007; see also Rey et al. 2020).
Other properties of the newly discovered nearby faint dwarfs are becoming clearer, including their kinematics (e.g., Kleyna et al. 2005; Muñoz et al. 2006; Simon & Geha 2007; Wolf et al. 2010; Koposov et al. 2011; Kirby et al. 2013a), morphology and structure (e.g., Martin et al. 2008; McConnachie 2012; Muñoz et al. 2018), and metallicity and chemical composition (e.g., Simon & Geha 2007; Frebel et al. 2010; Norris et al. 2010; Vargas et al. 2013; Kirby et al. 2013b; Ji et al. 2020). As our knowledge of faint galaxies increases, the emerging view is that below the mass of classical dwarfs, galaxies trend towards increasingly ancient and dark matter-dominated stellar systems. Even UFD galaxies seem to be in many ways a natural extension of more luminous systems to lower mass, with any clear physical division likely to be driven by the details of reionization (Bose et al. 2018; Simon 2019). Nonetheless, even among the faintest dwarfs, there is a great deal of galaxy-to-galaxy diversity, including in kinematics, sizes, and star formation histories, that has proven challenging to reproduce in existing simulations.

Now that dozens of new galaxies have been discovered around the Milky Way, it is crucial to test our galaxy formation models in this fainter regime, and to ensure that we can still match and explain the properties of observed dwarf galaxies. However, while there are a wealth of Milky Way simulations resolving classical dwarf galaxies, there is a paucity of simulations capable of resolving down to the UFD range.
It is important, therefore, to run new simulations capable of resolving the Milky Way's fainter satellites. However, it is computationally expensive to achieve the resolution necessary to resolve down to the UFD range while simultaneously placing galaxies in a cosmological context allowing for gas inflow and outflow as well as tidal interactions with larger galaxies. As alternatives, several groups have undertaken direct simulation of very small dwarf galaxies in non-cosmological contexts (e.g., Read et al. 2016; Corlies et al. 2018; Emerick et al. 2019). Other groups have simulated cosmological regions at high resolution, but have stopped at high redshift (e.g., Wise et al. 2014; Jeon et al. 2015; Safarzadeh & Scannapieco 2017; Macciò et al. 2017), or used the results as initial conditions for later host-satellite simulations. Finally, there have been several simulations of field dwarfs in cosmological environments, achieving analogs to dwarf galaxies far from the Milky Way (e.g., Simpson et al. 2013; Munshi et al. 2017, 2019; Oñorbe et al. 2015; Wheeler et al. 2015, 2019; Jeon et al. 2017; Fitts et al. 2017; Revaz & Jablonka 2018; Agertz et al. 2020a). Cosmological simulations have made significant strides in resolution (e.g., Agertz et al. 2020b; Renaud et al. 2020b,a), but have not achieved the mass resolution required to reliably study the properties of galaxies with M_star ≲ 10^5 M_⊙ in a Milky Way context (e.g., Zolotov et al. 2012; Brooks & Zolotov 2014; Sawala et al. 2016; Wetzel et al. 2016; Simpson et al. 2018; Buck et al. 2019; Garrison-Kimmel et al. 2019a). To test whether our models still match observations given our burgeoning Milky Way census, it will be necessary to achieve higher resolution in a Milky Way context.
To this end, we introduce the DC Justice League suite of Milky Way zoom-in simulations, run at high ("Mint") resolution sufficient to begin probing analogs of the faintest Milky Way satellites. While our studies of spatially resolved galaxies are limited to larger UFDs, these simulations serve as a crucial step forward in our study of the Milky Way environment. We will describe the global properties of galaxies as faint as M_V ∼ −3 and the resolved properties of dwarfs with M_V ≲ −5. We present two simulations run from z = 159 to z = 0, with present-day Milky Way halo masses of 7.5×10^11 M_⊙ and 2.4×10^12 M_⊙, allowing us to bracket the suspected lower and upper limits of the Milky Way's mass, respectively. We use these simulations to show that we can match dwarf galaxy properties simultaneously across 6 orders of magnitude in luminosity, including lower luminosities than ever before studied around a fully cosmological Milky Way simulation. We further take advantage of these new simulations to study the star formation histories and gas properties of UFDs around the Milky Way. We focus in particular on the question of what quenched star formation in UFDs, which in previous simulations could not be studied in the context of the Milky Way. Through case studies, we finally show how much of the variety seen in faint galaxy properties arises naturally in our simulations.
The paper is organized as follows: in Section 2 we describe our simulations. In Section 3 we present the basic properties of the Milky Way-like galaxies. We then discuss the properties of the dwarf galaxies in Section 4. In Section 5 we show that reionization is responsible for quenching the majority of UFD galaxies, even around the Milky Way. We present several case studies of interesting galaxies in Section 6. We discuss our results in Section 7, including limitations of this work. We summarize our results in Section 8.
SIMULATIONS
The simulations used in this work were run using ChaNGa (Menon et al. 2015), a smoothed particle hydrodynamics (SPH) + N-body code. ChaNGa includes the hydrodynamic modules of Gasoline (Wadsley et al. 2004, 2017) but uses the charm++ (Kalé & Krishnan 1993) runtime system for dynamic load balancing and communication to allow scalability up to thousands of cores. ChaNGa also incorporates an improved gravity solver that is intrinsically faster than Gasoline.
The simulations were run using the "zoom-in" technique (e.g., Katz & White 1993; Oñorbe et al. 2014), where smaller regions within large, dark matter-only volumes are resimulated at higher resolution with full hydrodynamics. The zoom-in technique allows for very high resolutions in the regions of interest, while still capturing large-scale gravitational tidal torques. The zoom regions were selected from a 50 Mpc, dark matter-only volume run using the Planck Collaboration et al. (2016) cosmological parameters. The high-resolution regions are largely uncontaminated by low-resolution particles out to 2 R_vir for each host. However, since the zoom regions are non-spherical, we find galaxies out to ∼2.5 R_vir in the present day. Gas particles are split from the dark matter particles according to the cosmic baryon fraction Ω_bar/Ω_m = 0.156. The present-day central halos were chosen to be Milky Way analogs; they are isolated and have virial masses bracketing the range of observationally constrained estimates (≈ 0.5−2.5 × 10^12 M_⊙; e.g., Wilkinson & Evans 1999; Watkins et al. 2010; Kafle et al. 2014; Sohn et al. 2018; Eadie & Jurić 2019). The simulations have a gravitational spline force softening of 87 pc, a minimum hydrodynamical smoothing length of 11 pc, and dark, gas, and (maximum) initial star particle masses of 17900, 3310, and 994 M_⊙, respectively. These constitute the highest mass resolution of any cosmological simulations ever run of Milky Way-like galaxies. At z = 0, the two simulations contain approximately 10^8.3 and 10^8.8 particles; in total, they required approximately 14 million and 120 million core hours, respectively. Despite their large computational expense, the simulations were possible owing to the excellent scaling of ChaNGa.
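As a quick consistency check on the quoted resolution numbers (our own back-of-the-envelope arithmetic, not pipeline code), the gas and dark matter particle masses follow directly from splitting particles by the cosmic baryon fraction:

```python
# Gas particles are split from dark matter particles according to the
# cosmic baryon fraction, so m_gas / (m_gas + m_dark) should equal f_b.
f_b = 0.156                      # Omega_bar / Omega_m quoted in the text
m_dark, m_gas = 17900.0, 3310.0  # Msun

print(m_gas / (m_gas + m_dark))  # ~0.156, matching f_b
print(0.30 * m_gas)              # ~993 Msun: a star particle is born with
                                 # 30% of its parent gas particle's mass
                                 # (Section 2), consistent with the quoted
                                 # maximum initial star mass of 994 Msun
```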
The Milky Way simulation suite presented here serves as a complement to the MARVEL-ous Dwarfs, a suite of four high-resolution zoom-in regions of field dwarf galaxies formed in low-density environments (Munshi et al. in prep). The Milky Way simulations we discuss here are nicknamed the "DC Justice League," named in honor of the female United States Supreme Court justices. While there are four Milky Way zoom-in simulations in the suite, we discuss two in this paper that have been run at the above-described resolution; we term these "Mint" resolution. The two have been nicknamed "Sandra" and "Elena." Lower resolution versions of these simulations (run at 175 pc resolution, dubbed "Near Mint") have been presented elsewhere (Bellovary et al. 2019; Akins et al. 2020; Iyer et al. 2020), but we are introducing these high-resolution simulations here for the first time, with spatial and mass resolutions within a factor of ∼2 of the MARVEL-ous Dwarfs (Munshi et al. in prep).
Star particles represent simple stellar populations with a Kroupa (2001) initial mass function (IMF) and an initial mass of 30% that of their parent gas particle. We use the "blastwave" form of supernova feedback (Stinson et al. 2006), in which mass, metals, and energy from Type II supernovae are deposited among neighboring gas particles. We distribute 1.5×10 51 erg per supernova, then turn off cooling until the end of the snowplow phase (the extra energy above 10 51 erg is designed to mimic the energy injected into the local ISM by all feedback processes coming from young stars, including high energy radiation). The simulations also incorporate feedback from Type Ia supernovae, mass loss in stellar winds, and iron, oxygen, and total metal enrichment (Stinson et al. 2006), a time-dependent, uniform UV background (Haardt & Madau 2012), and metal cooling and diffusion in the interstellar medium (Shen et al. 2010). We discuss the effect of feedback models on our results later in the paper (Section 7).
Star formation in these simulations is based on the local non-equilibrium abundance of molecular hydrogen (H₂; Christensen et al. 2012). The recipe follows the creation and destruction of H₂ both in the gas phase and on dust grains, as well as dissociation via Lyman-Werner radiation. We include both dust-shielding and self-shielding of H₂ from radiation, as well as dust-shielding of HI. The probability p of forming a star particle of mass m_star from a gas particle of mass m_gas is

p = (m_gas/m_star) [1 − exp(−c₀* X_H₂ Δt/t_dyn)],

where X_H₂ is the H₂ abundance, Δt is the star formation timestep, and t_dyn is the local dynamical time. The star formation efficiency parameter, c₀* = 0.1, is calibrated to provide the correct normalization of the Kennicutt-Schmidt relation. This star formation prescription successfully reproduces the low velocity dispersion of star-forming gas, which is critical to forming the kinematically cold young stars seen in the Milky Way's age-velocity relation (Bird et al. 2020).
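A minimal sketch of how such a probabilistic recipe can be applied per gas particle per timestep follows; this is our own illustration of the equation above, not code from ChaNGa, and the argument of the exponential follows our reconstruction of the formula.

```python
import numpy as np

def maybe_form_star(m_gas, X_H2, t_dyn, dt, c0_star=0.1, f_star=0.30, rng=None):
    """Stochastically spawn a star particle from a gas particle (sketch).

    Implements p = (m_gas / m_star) * (1 - exp(-c0_star * X_H2 * dt / t_dyn)),
    with m_star = f_star * m_gas (30% of the parent gas particle).
    Returns the mass of the new star particle, or 0.0 if none forms.
    """
    rng = rng or np.random.default_rng()
    m_star = f_star * m_gas
    p = (m_gas / m_star) * (1.0 - np.exp(-c0_star * X_H2 * dt / t_dyn))
    return m_star if rng.random() < min(p, 1.0) else 0.0
```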
We also model supermassive black hole (SMBH) formation, growth, feedback, and dynamics based on local gas conditions (Tremmel et al. 2015, 2017; Bellovary et al. 2019). SMBHs form in cold (T < 2 × 10⁴ K), primordial (Z < 10⁻⁴ and X_H₂ < 10⁻⁴), and dense (n_H > 1.5 × 10⁴ cm⁻³) gas, with a seed mass of 5 × 10⁴ M_⊙. Black holes grow by accreting gas using a modified Bondi-Hoyle formalism that includes a term for momentum-supported gas, and by merging with other black holes. Black holes are allowed to move freely within their host galaxies, while explicitly modeling unresolved dynamical friction; this freedom can lead to delayed SMBH mergers (Tremmel et al. 2018a,b) and off-center black holes in dwarf galaxies (Bellovary et al. 2019). Akin to supernova blastwave feedback, SMBHs deposit thermal energy in surrounding gas when they accrete gas, and we turn off cooling in the heated gas for the length of the SMBH time step (usually < 10⁴ yr), with a feedback coupling efficiency of 0.02. We assume accretion is Eddington limited, with a radiative efficiency of 0.1.
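The interplay of the Bondi-Hoyle rate and the Eddington cap can be sketched as follows (cgs units). This is a schematic of the standard formulae only; it omits the modified term for momentum-supported gas used in the actual model, and the function name is ours.

```python
import numpy as np

G, c = 6.674e-8, 2.998e10        # cgs
m_p, sigma_T = 1.673e-24, 6.652e-25
eps_r = 0.1                      # radiative efficiency adopted in the text

def bh_accretion_rate(M_bh, rho, c_s, v_bulk):
    """Bondi-Hoyle accretion rate capped at the Eddington rate (schematic)."""
    mdot_bondi = 4 * np.pi * G**2 * M_bh**2 * rho / (c_s**2 + v_bulk**2) ** 1.5
    mdot_edd = 4 * np.pi * G * M_bh * m_p / (eps_r * sigma_T * c)
    return min(mdot_bondi, mdot_edd)
```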
We identify halos using the Amiga Halo Finder (AHF; Gill et al. 2004; Knollmann & Knebe 2009), which identifies a halo as the spherical region within which the density satisfies a redshift-dependent overdensity criterion based on the approximation of Bryan & Norman (1998). We use AHF for all halo properties unless otherwise stated. Galaxies are defined as all stellar content residing within halos, and satellite galaxies are galaxies residing within subhalos. We trace all main progenitors with at least 100 particles at z = 0 back in time, and include in our final sample those galaxies with at least 10 star particles and 1000 dark matter particles prior to mass loss due to interactions with the central halo. This corresponds to a peak dark matter halo mass of M_peak ≥ 10^7.25 M_⊙. For resolving structural properties of the galaxies, we require at least 50 star particles. Table 1 shows the number of galaxies in our sample that meet our resolution criteria, along with the basic properties of the two Milky Way-like halos.
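The sample cuts translate into a simple filter; the sketch below restates them as code for clarity (the function and argument names are ours):

```python
def in_final_sample(n_star, n_dark_peak, m_peak, structural=False):
    """Resolution cuts described in Section 2, restated as a predicate.

    Globally resolved: >= 10 star particles and >= 1000 dark matter
    particles prior to tidal mass loss (M_peak >= 10**7.25 Msun).
    Structurally resolved: additionally >= 50 star particles.
    """
    n_star_min = 50 if structural else 10
    return n_star >= n_star_min and n_dark_peak >= 1000 and m_peak >= 10**7.25
```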
Further analysis was performed using the pynbody analysis code (Pontzen et al. 2013). Galaxy magnitudes and luminosities are calculated by interpolating on a grid of metallicities and ages, using Padova simple stellar population isochrones (Marigo et al. 2008; Girardi et al. 2010). We make no corrections for dust extinction; we expect dust to have little impact in the dwarf galaxy regime focused on in this work.

[Table 1 (tabulated entries not recovered). Note: The name of each simulation, the virial mass (M_vir), virial radius (R_vir), and stellar mass (M_star; defined within 3× the 3D half-mass radius) of its main Milky Way halo, the scale length (R_d), the number of satellite galaxies of the main halo (N_sat) that meet our resolution criteria (globally resolved first, structurally resolved in parentheses; see Section 2), the number of central galaxies beyond the virial radius (N_field) that are globally (structurally) resolved, and the number of present-day Milky Way satellites that fell in as satellites of another dwarf galaxy (N_sat,prior) that are globally (structurally) resolved.]
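The isochrone-grid interpolation for magnitudes described above can be illustrated as follows. The grid values below are placeholders standing in for the Padova tables, and the function is our sketch rather than pynbody's implementation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder SSP grid: V-band mass-to-light ratio versus age and [Fe/H].
# Real values would come from the Padova isochrone tables.
ages = np.array([0.1, 1.0, 5.0, 13.0])   # Gyr
fehs = np.array([-2.0, -1.0, 0.0])       # [Fe/H]
ml_v = np.array([[0.3, 0.4, 0.5],
                 [1.0, 1.2, 1.5],
                 [1.8, 2.2, 2.8],
                 [2.5, 3.0, 3.8]])       # illustrative numbers only

ml_interp = RegularGridInterpolator((ages, fehs), ml_v)

def v_band_luminosity(m_star, age_gyr, feh):
    """L_V (Lsun) of a star particle via interpolation on the SSP grid."""
    return m_star / ml_interp([[age_gyr, feh]])[0]
```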
Simulations run with ChaNGa and Gasoline using the above star formation and feedback models have yielded numerous results in the dwarf galaxy regime, and have explained a variety of observed properties, such as the stellar mass-halo mass relation (Munshi et al. 2013, 2017), the baryonic Tully-Fisher relation (Christensen et al. 2016; Brooks et al. 2017), the mass-metallicity relation (Brooks et al. 2007; Christensen et al. 2018), and the properties of Milky Way satellites (Brooks & Zolotov 2014) and field dwarfs. These models produced the first simulated cored dark matter density profiles and bulgeless disk galaxies (Governato et al. 2010; Brook et al. 2011; Governato et al. 2012). The simulations have also been used to make observable predictions for the star formation histories of nearby dwarf galaxies (Wright et al. 2019) and the merger rates of dwarf galaxy SMBHs (Bellovary et al. 2019).
MILKY WAY-LIKE GALAXIES
While the focus of this work is on the satellite and other nearby dwarf galaxies, we also briefly present the properties of the central, Milky Way-like galaxies. Future work will investigate these galaxies more closely. Figure 1 shows mock face-on and edge-on multi-band images of the galaxies. These images have been generated using the Monte Carlo radiative transfer code SKIRT (Baes et al. 2003, 2011; Camps & Baes 2020), assuming a dust-to-metals ratio of 0.3. The two galaxies have very different morphology. Sandra has flocculent spiral arms and a clear bar structure in the center. Elena, on the other hand, hosts no spiral arms but has a star-forming ring. As late as z ∼ 0.5, Elena had spiral and bar structures. However, during its latest merger (see below), it began to quench and redden, and its morphology transformed to the one seen in the figure. While Elena may not be morphologically a Milky Way analogue, its halo and stellar masses, as well as its relatively quiet assembly history, are thought to be consistent with that of the Milky Way. We will therefore continue to refer to it as a Milky Way-like galaxy.

[Figure 1. Mock UVI images of Sandra (left column) and Elena (right column), for both face-on (top) and edge-on (bottom) orientations. Images were generated using outputs from the Monte Carlo radiative transfer code SKIRT, assuming a dust-to-metals ratio of 0.3 and a maximum dust temperature of 8000 K. Images are 40 kpc across. Elena is shown to a dimmer surface brightness (23 mag arcsec⁻²) than Sandra (21 mag arcsec⁻²) in order to highlight the low surface brightness disk. Sandra shows a strong central bar, flocculent spiral arms, and a dusty disk. Elena shows an apparent ring structure and an extended low surface brightness disk.]
Summary properties of the two Milky Way-like galaxies, including virial masses, virial radii, and number of dwarf satellites, are listed in Table 1.
Throughout its history, Sandra experiences multiple mergers with LMC-mass halos. Its last major merger is with an LMC-mass halo (merger ratio ∼1.5) at z ∼ 2, though the first infall of the galaxy occurs earlier, at z ∼ 3. During this time in the galaxy's history, a clear disk has not yet formed, and many simultaneous mergers occur close in time. Therefore, there is some uncertainty on the exact timing and masses involved. However, the merger is consistent with a Gaia-Enceladus/Sausage-like event (e.g., Belokurov et al. 2018; Helmi et al. 2018). By z ∼ 1.5, a clear disk forms, around which time another LMC-mass halo falls in, completing its merger by z = 1. At z ∼ 0.5, the galaxy experiences another LMC-mass infall, which orbits for several Gyr before merging at z = 0.15. Finally, at the present day, an LMC-mass halo satellite is completing its first pericentric passage, currently at a galactocentric distance of 200 kpc.
Commensurate with its lower mass, Elena experiences mergers with smaller halos than Sandra. At z ∼ 3, it experiences its most major merger (though, similar to Sandra, there are many simultaneous mergers that complicate the picture), with a merger ratio of 4. At z ∼ 1 it experiences a Sequoia-like infall (e.g., Barbá et al. 2019; Myeong et al. 2019) of a ∼10^10 M_⊙ halo. Finally, at z ∼ 0.5 two unassociated halos fall in, one LMC-mass and one SMC-mass, with both eventually merging by z = 0. During their several Gyr of orbit, the two galaxies fly by the Milky Way numerous times and harass it substantially, ultimately leading to the quenching and morphological transition mentioned above. Additionally, about 2 Gyr before the present day, one of these galaxies passes directly through the center of the main galaxy, which may explain its ring structure (see, e.g., Appleton & Struck-Marcell 1996).

Figure 2 shows the star formation histories of the two central galaxies. As expected, Sandra, the more massive galaxy, has a higher star formation rate throughout its history. As noted above, Elena's last merger caused a decline in star formation, visible in the last 2-3 Gyr.
THE DWARF GALAXY POPULATION
[Figure 2. … For each galaxy, the three most major mergers are marked, with the size inversely proportional to the merger ratio (i.e. the largest point is the most major merger). Sandra has a higher star formation rate across most of cosmic time, commensurate with its higher mass. In the last ∼3 Gyr, Elena has had a declining star formation rate, leading to its redder color. During this time, the central galaxy is harassed by the orbiting dwarf that ultimately merges at ∼13 Gyr.]

In this section we focus on the properties of the dwarf galaxies in the simulations. First, we discuss the general attributes of the population, demonstrating consistency with observations across the entire luminosity range. Properties of the galaxies that are presented below are collated in Table 2, the full version of which is available as supplementary material.
Observational Sample
We compare the results of our simulations to several dwarf galaxy catalogs that have been assembled in the literature. In particular, we compare to the updated version of the McConnachie (2012) catalog (though we take the velocity dispersions for Phoenix and Tucana from Kacharov et al. 2017 and Taibi et al. 2020, respectively). We also compare to the Milky Way satellites sample assembled from the literature in Simon (2019). We additionally compare to the homogeneously analyzed outer halo satellites sample of Muñoz et al. (2018). For the latter catalog, we have used their best-fit parameters assuming an exponential density profile in order to better compare to our galaxy morphological fits. In cases where the same galaxy may exist in multiple catalogs, we show values only for the more recent estimate. We exclude observed galaxies that are more than 1.5 Mpc from the Milky Way, in order to keep their environments comparable to our simulations.

In comparing to our simulations, we assume a virial radius of 300 kpc for both the Milky Way and M31. We exclude all observed dwarf galaxies with half-light radii below 50 pc; this is approximately the radius at which size alone cannot distinguish objects as galaxies versus globular clusters (Simon 2019), and all such galaxies are below the resolution limit of the simulations, so we do not in general expect to be able to reproduce them. For clarity in plot comparisons, we exclude observed properties with large uncertainties (e.g., uncertainty in M_V greater than 5); however, if the uncertainty is missing for M_V, we assume it to be 1.

[Figure 3. … (Chiboucas et al. 2013), down to their completeness limits. The SAGA results (Geha et al. 2017) are shown as the grey band, representing the full range of luminosity functions from their survey. Sandra and Elena are largely consistent with luminosity functions from the literature, bracketing the range of observed satellite populations. Commensurate with its larger mass, Sandra hosts many more satellites, and is more similar to M31 and Cen A, while the lower mass Elena is comparable to the sparsely populated M94 system. In the right panel we compare to the Near Mint resolution versions of Sandra and Elena (Sandra NM and Elena NM), which are consistent down to our resolution cutoffs.]
Luminosity Functions
The left panel of Figure 3 shows the satellite luminosity functions of the simulated galaxies. We compare to the sample of Simon (2019) for Milky Way satellites and the updated version of McConnachie (2012) for M31. We also show results from the SAGA survey (Geha et al. 2017) down to the survey completeness limit, where we represent the full range of luminosity functions as a grey band. Finally, we also include, down to their completeness limits, the luminosity functions of M94 (Smercina et al. 2018), M101 (Bennet et al. 2020), Centaurus A (Cen A; Crnojević et al. 2019), and M81 (Chiboucas et al. 2013), where for M81 we have included galaxies within a projected distance of 300 kpc. We find that our simulations fall within the range of observed luminosity functions for this mass range; Elena has fewer satellites than the Milky Way, and is more consistent with M94, while Sandra is more similar to M31, M81, and Cen A.
Given the halo masses of the simulated galaxies, it is unsurprising that Sandra would have significantly more satellites. Elena and Sandra are also in line with results from the SAGA survey, though Elena is among the sparsest systems. Together, Elena and Sandra seem to bracket the observed range of luminosity functions very well.
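Constructing such a cumulative luminosity function from a satellite list is straightforward; the helper below is an illustrative restatement in Python, not the plotting code actually used for Figure 3.

```python
import numpy as np

def satellite_luminosity_function(m_v_satellites, m_v_grid):
    """Cumulative counts N(< M_V): the number of satellites at least as
    bright as each magnitude in m_v_grid (brighter = more negative M_V)."""
    m_v_sorted = np.sort(np.asarray(m_v_satellites))
    return np.searchsorted(m_v_sorted, m_v_grid, side="right")
```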
The right panel of Figure 3 shows the satellite luminosity functions of the Near Mint (2× lower force resolution) versions of each simulation, indicated by "Elena NM" and "Sandra NM." The luminosity functions are consistent between the Near Mint and Mint resolution simulations. The results are even converged down to the faintest galaxies in the Near Mint sample. For the Near Mint simulations, we applied the same resolution criteria (N_star ≥ 10 and N_dark ≥ 1000 at peak halo mass) for inclusion in the sample. It is therefore reassuring that galaxy global properties are converged down to 10 star particles, consistent with Hopkins et al. (2018). Additionally, the Near Mint simulations are in fact able to probe the very brightest UFDs, and while their resolution is lower than that of the simulations presented in this work, they are still comparable to other high resolution studies of the Milky Way (e.g., Sawala et al. 2016; Grand et al. 2017; Garrison-Kimmel et al. 2019a; Buck et al. 2020).
Using the observed satellite distribution and survey coverage of SDSS and DES, along with the radial subhalo distribution from the (dark matter-only) Aquarius simulations (Springel et al. 2008), Newton et al. (2018) predict that there should be ∼40 galaxies brighter than M_V = −4 within 300 kpc of the Milky Way. This indicates that we may be underproducing the faintest galaxies in our simulations, or that adjustments to our halo- and galaxy-finding procedure are necessary. On the other hand, a large fraction of the UFDs discovered in DES are thought to be associated with the Magellanic Clouds (e.g., Deason et al. 2015; Jethwa et al. 2016; Sales et al. 2017; Kallivayalil et al. 2018; Li et al. 2018), the exclusion of which would substantially lower the faint-end luminosity function, alleviating the tension. Since Sandra appears fully consistent with the expectations of Newton et al. (2018), it is likely that Elena's lower mass and lack of an LMC are sufficient to explain any tension. While we only compare to two simulations, the different luminosity functions strongly suggest a dependence of the total satellite population on the host halo mass (see also Carlsten et al. 2020), in contrast to, e.g., Samuel et al. (2020), who find little correlation. We note, however, that our mass range is a factor of two larger than in their work, which may account for the difference.
Stellar Mass-Halo Mass Relations
To explore the galaxy-halo connection, in Figure 4 we show the stellar mass-halo mass (SMHM) relation for all galaxies in our sample, along with recent results from the literature for comparison purposes (Read et al. 2017; Jethwa et al. 2018; Nadler et al. 2020). We note that at low masses, not all halos host galaxy counterparts, and we have chosen to compare to relations that incorporate this fractional occupation. We show both centrals and satellites. The left panel shows stellar mass as a function of present-day virial mass, while the right panel uses M_peak, the peak halo mass (i.e., before stripping), as done in abundance matching techniques.
As was shown in the satellite luminosity functions of Figure 3, Sandra hosts many more satellites and nearby galaxies than Elena, in approximate proportion to the higher mass of the main halo. However, the SMHM relation appears to be consistent between the two runs. As in previous works (e.g., Garrison-Kimmel et al. 2017; Munshi et al. 2017), we find increasing scatter at lower masses, extending all the way to the UFD regime. Nonetheless, above M_peak ∼ 10^10 M_⊙ our results appear to be consistent with the results of Read et al. (2017). At lower masses, we largely overlap with the results of Nadler et al. (2020). In the left-hand panel, satellite galaxies exhibit the largest scatter, where tidal interactions with the main halo can lead to preferential mass loss of the dark matter content of halos. When using the peak halo mass, the scatter is greatly reduced, though it still increases at low masses. For an in-depth analysis of scatter and more regarding the SMHM relation, we refer the reader to Munshi et al. (in prep), which will update the results of Munshi et al. (2017) using the larger Marvel + DC Justice League sample.

[Figure 4. The stellar and halo masses of galaxies in our sample, for both simulations in our suite. Satellite galaxies are shown as filled squares, backsplash galaxies are shown as stars, and field galaxies are shown as empty squares. The left panel shows galaxies' present-day halo masses, while the right panel shows their peak halo masses through time. Both panels show z = 0 stellar masses. We compare to the stellar mass-halo mass relation inferred in Nadler et al. (2020), as well as the halo occupation + scatter model of Jethwa et al. (2018), where the dark (light) bands represent the 68 (95)% confidence intervals. Both Nadler et al. (2020) and Jethwa et al. (2018) derive their relations via a Bayesian analysis of the Milky Way's observed satellites. We also compare to the relation of Read et al. (2017), which uses halo masses inferred from HI rotation curves of isolated field dwarf galaxies; the lines enclose the inner 68% confidence interval, and we use dashed lines to indicate an extrapolation of their relation.]

We note here that a large fraction of galaxies presently in the field have passed within the virial radius of the Milky Way previously. In Figure 5 we show that in the region just beyond the virial radius, these so-called "backsplash" galaxies are the majority of galaxies (as opposed to those that have never had an infall), consistent with previous results (e.g., Teyssier et al. 2012; Buck et al. 2019). Over half of all present-day field galaxies in our sample have had at least one infall within the virial radius of the Milky Way. As we show later in this work, these passages through the Milky Way halo can substantially alter the galaxies' kinematic and structural properties.
Galaxy Sizes
To calculate the structural parameters of the galaxies, we use maximum (log) likelihood estimation to find the best fitting parameters for a 2D elliptical exponential density profile; for more detail, see Martin et al. (2008). (We note that these fits actually find half-density radii rather than half-light radii. As a check, we have fit to images of the V-band luminosity for these systems and found no systematic differences in the parameter estimates.) The density profile has the following functional form:

Σ(r) = Σ₀ exp(−r/r_e),

where r_e is the scale radius (with a half-light radius given by r_h = 1.68 r_e), Σ₀ is the central density, and r is the elliptical radius given by

r = [((X cos θ − Y sin θ)/(1 − ε))² + (X sin θ + Y cos θ)²]^{1/2},

where ε is the ellipticity and θ is the angular offset of the ellipse from the vertical. We additionally simultaneously fit for the centroid (x₀, y₀) of the ellipse, such that X_i = x_i − x₀ and Y_i = y_i − y₀.
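A minimal sketch of this maximum-likelihood fit, written against the normalized form of the profile above, is given below. This is our own illustration, not the authors' code; the optimizer choice, bounds, and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def elliptical_radius(x, y, x0, y0, eps, theta):
    """Elliptical radius of each star, following the profile definition."""
    X, Y = x - x0, y - y0
    a = (X * np.cos(theta) - Y * np.sin(theta)) / (1.0 - eps)
    b = X * np.sin(theta) + Y * np.cos(theta)
    return np.hypot(a, b)

def neg_log_likelihood(params, x, y):
    x0, y0, eps, theta, r_e = params
    r = elliptical_radius(x, y, x0, y0, eps, theta)
    # The normalized profile integrates to 2*pi*r_e**2*(1 - eps), so each
    # star contributes ln p = -r/r_e - ln(2*pi*r_e**2*(1 - eps)).
    return np.sum(r / r_e) + r.size * np.log(2 * np.pi * r_e**2 * (1.0 - eps))

def fit_structure(x, y):
    """Fit (x0, y0, eps, theta, r_e) to arrays of on-sky star positions;
    the half-light radius is then r_h = 1.68 * r_e."""
    x, y = np.asarray(x), np.asarray(y)
    p0 = (np.median(x), np.median(y), 0.1, 0.0, np.std(np.hypot(x, y)))
    res = minimize(neg_log_likelihood, p0, args=(x, y),
                   bounds=[(None, None), (None, None), (0.0, 0.95),
                           (-np.pi, np.pi), (1e-3, None)])
    return res.x
```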
priate for the predominantly ancient populations of the galaxies. For Fitts et al. (2017), whose galaxies span a variety of SFHs, we have assumed a mass-to-light ratio of 1.6. We note that the half-light radii in the other works were not calculated via the same exponential elliptical fitting as in this work. Additionally, the half-light radii of Fitts et al. (2017) are 3D, so we multiply the listed values by 3/4 to approximate the 2D projected half-light radii . Across most of the sample, including in the ultra-faint range, we reproduce the same size-luminosity relation as observed in nearby galaxies, including approximately the same scatter 9 . At a given magnitude, the galaxies span a range of sizes, including some diffuse and some relatively compact galaxies. Our most compact galaxies are generally about as bright and slightly larger than Andromeda XVI or Leo T; reproducing these galaxies Leo T Figure 6. Magnitude versus half-light radius for galaxies with at least 50 star particles, derived using a 2D elliptical exponential fit. Galaxies are oriented as viewed on the sky from the central simulated Milky Way galaxy. We compare to observed dwarf galaxies in the updated catalog of Mc-Connachie (2012) and the sample from Muñoz et al. (2018); see Section 4.1 for more detail. We also compare to the simulated samples of Jeon et al. (2017), Fitts et al. (2017), and Wheeler et al. (2019). The dashed line represents a constant surface brightness of µV = 32 mag arcsec −2 , roughly the limit of co-added Vera Rubin Observatory's LSST. We expect essentially all galaxies near the Milky Way to be observable by LSST. With the exception of the compact elliptical galaxies like M32, the simulated galaxies reproduce the full range of scatter in the size-luminosity plane.
has been a challenge in some previous works (e.g., Revaz & Jablonka 2018;Garrison-Kimmel et al. 2019a). There is one additional galaxy with M V ∼ −9 and r h = 40 pc that is by a wide margin the most compact simulated dwarf. This galaxy seems to be a faint analog of ultracompact dwarf (UCD) galaxies, and understanding its evolution may shed light on the origin of UCD galaxies. Our sample includes just one of these galaxies; it is possible that a larger simulation suite-or a more massive central galaxy-would include more of them, or even brighter ones. In Section 6.3 we discuss this compact galaxy in more detail, including its evolutionary history.
The UCD analog represents a step forward in modeling compact galaxies. However, the lack of compact bright galaxies like M32 in cosmological simulations is a manifestation of the "diversity" problem (Oman et al. 2015), for which there is as yet no accepted solution. It is possible that we lack the ability to resolve such dense galaxies, or that spurious dynamical heating from 2-body interactions systematically increases half-light radii by z = 0 (Revaz & Jablonka 2018; Ludlow et al. 2019b). Nonetheless, with the exception of the compact ellipticals such as M32, we do produce the entire observed range of luminosities at a given size. Prior works, while overlapping with these simulations in the size-luminosity plane, show less scatter.
Importantly, none of our galaxies have mean central surface brightnesses dimmer than 32 mag arcsec −2 . We therefore expect essentially all nearby galaxies to be observable by the Vera Rubin Observatory's co-added LSST, with no galaxies too diffuse to detect in the UFD galaxy range, at least down to the luminosity limit probed here. This prediction stands in contrast to the simulated field galaxies of Wheeler et al. (2019), who predict that ultra-faint dwarfs are fairly diffuse, with the majority having surface brightness much dimmer than the detection limit of LSST. Multiple factors could influence this discrepancy: the work of Wheeler et al.
(2019) focused on isolated dwarf galaxies far from any massive galaxy, and perhaps environment plays a role in galaxy sizes. Additionally, their resolution, with a baryonic particle mass of 30 M_⊙, is higher than in this work, and all the most diffuse galaxies in their work are fainter than we can structurally resolve; thus, we cannot rule out that we could be consistent if we had a higher resolution. We explore possible numerical explanations, including feedback implementations and resolution, in Section 7.1.

Kinematics

Figure 7 shows the line-of-sight velocity dispersions for galaxies in the simulations, separated by environment. We compare to observations from McConnachie (2012) and Simon (2019), as well as previous simulations of field dwarfs (Jeon et al. 2017; Revaz & Jablonka 2018). We also compare to the FIRE-2 simulations of Garrison-Kimmel et al. (2019a), who investigated dwarf galaxies with M_star > 10^5 M_⊙ in a suite of Milky Way-like and Local Group-like simulations, and the NIHAO simulations of Buck et al. (2019). Both of the latter simulation suites have large samples, so we show their results as shaded bands that approximate their full range of values.
The kinematics of the simulated DC Justice League galaxies reproduce those of observed galaxies. For luminosities L_V < 10^6 L_⊙, galaxies show a large scatter, but that scatter is relatively constant down to the faintest galaxies, such that galaxies with L_V ∼ 10^4 L_⊙ and galaxies with L_V ∼ 10^6 L_⊙ appear to span a similar range of masses/kinematics, just as seen in the observations. However, for L_V > 10^6 L_⊙ galaxies have higher velocity dispersions. In our simulations, galaxies with M_star ≳ 10^7 M_⊙ are able to form cored dark matter density profiles. Dark matter cores allow for more substantial tidal stripping that can also lead to smaller velocity dispersion, particularly for those satellites with small orbital pericenters (e.g., Brooks & Zolotov 2014).

[Figure 7. Line-of-sight velocity dispersion of the simulated dwarf galaxies as a function of luminosity for both simulations (Sandra and Elena). We show satellites as filled squares, backsplash galaxies as stars, and field galaxies as empty squares. We compare to observed dwarfs using the compilation of Simon (2019) …; we assume a stellar mass-to-light ratio of 2 for galaxies fainter than 10^6 L_⊙, and a ratio of 1 otherwise. While the prior field simulations tend to have higher velocity dispersion, our simulations reproduce the full range. The dynamically coldest systems (σ_v ≲ 5 km/s) have all been severely tidally stripped; see Figure 8.]
Unlike most of the prior simulations we compare to in Figure 7, we produce galaxies with σ v < 5 km/s, consistent with many observed galaxies. This can largely be explained by the effects of environment: all of the low-σ v galaxies in our simulations are either current satellites or backsplash galaxies. Jeon et al. (2017) and Revaz & Jablonka (2018) simulated only isolated field environments. Correspondingly, their galaxies are generally consistent with the galaxies from our simulations that have the highest dispersions at a given luminosity.
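For reference, the line-of-sight velocity dispersion plotted here can be computed from the star particles as in the sketch below. Whether the published analysis applies luminosity weighting is not stated, so the unweighted version is an assumption, and the function name is ours.

```python
import numpy as np

def sigma_los(star_pos, star_vel, observer_pos, observer_vel):
    """Line-of-sight velocity dispersion of a dwarf's star particles,
    viewed from the simulated Milky Way center (unweighted sketch)."""
    los = np.mean(star_pos, axis=0) - observer_pos
    los /= np.linalg.norm(los)              # unit sight-line vector
    v_los = (star_vel - observer_vel) @ los  # km/s per particle
    return np.std(v_los)
```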
Previous work has consistently shown that severe tidal stripping in the presence of a massive host can lead to lower velocity dispersions (e.g., Peñarrubia et al. 2008;Brooks & Zolotov 2014;Errani et al. 2015;Frings et al. 2017;Fattahi et al. 2018;Buck et al. 2019), explaining the difference between environments. Peñarrubia et al. (2008) found that galaxy structure and kinematics tend to follow evolutionary tracks that depend mainly on how much total mass has been lost, with σ v decreasing monotonically as mass loss increases. In Figure 8 we show the same is true in our simulations. The figure shows the present-day velocity dispersion for all galaxies with L V < 10 6 L as a function of their (total) mass loss from peak. We separate by environment, though σ v appears to only depend on mass loss, not present-day location. Figure 8 directly shows the importance of simulating faint galaxies in the context of the Milky Way environment. All but one of the galaxies with σ v < 5 km/s have lost most of their mass due to tidal stripping, and all galaxies with both L V > 10 4 L and σ v < 5 km/s (all of which have M peak > 10 8.5 M ) have lost at least 90% of their mass. Importantly, even some backsplash galaxies that today are in the field have experienced severe tidal stripping. The figure also shows a mass-dependent trend in velocity dispersion. While tidal stripping leads to lower dispersions in all halos, galaxies in smaller halos are systematically dynamically colder; the coldest field galaxy has σ v = 4.8 km/s, despite having lost less than 4% of its mass.
Tidal effects explain why the field simulations in Figure 7 do not contain low-σ_v galaxies. Buck et al. (2019) are the only other set of simulations that produce galaxies with velocity dispersions below 5 km/s. We note that their simulations were run with a modified version of Gasoline, which uses the same hydrodynamics solver upon which ChaNGa is based, as well as similar feedback recipes. However, their simulations were run at lower resolution and with a density-based star formation scheme. ChaNGa's H₂-based star formation scheme leads to low velocity dispersions at birth (Bird et al. 2020), and may be necessary in order to reproduce the galaxies with L_V < 10^4 L_⊙ and σ_v < 5 km/s, which have generally been less severely tidally stripped than their more luminous counterparts. It is interesting that the Milky Way and Local Group simulations of Garrison-Kimmel et al. (2019a) also do not reproduce the lowest σ_v galaxies, despite capturing the same environmental effects as our simulations, and despite adopting a star formation prescription that should also capture the high densities and low temperatures of gas forming in H₂. They discuss several possible explanations, including insufficient resolution, N-body dynamical heating, or spurious (numerical) subhalo disruption (van den Bosch & Ogiya 2018).

[Figure 8. Velocity dispersion as a function of the fraction of mass remaining from peak halo mass for all galaxies in the simulations with L_V < 10^6 L_⊙. Present-day satellites are marked with filled squares, backsplash galaxies are marked with stars, and field galaxies are marked with empty squares. There is a tight correlation between velocity dispersion and tidal stripping: the dynamically coldest halos at a given M_peak have experienced the most mass loss, regardless of present-day location, while less massive halos tend to be intrinsically dynamically colder. All galaxies with σ_v < 5 km/s and L_V > 10^4 L_⊙ (all of which have M_peak > 10^8.5 M_⊙) have lost at least 90% of their mass.]
Mass-to-Light Ratios
We show in Figure 9 the mass-to-light ratio within the half-light radius as a function of V-band luminosity. Previous studies have shown that enclosed mass is robustly estimated within the observed half-light radius (e.g., Walker et al. 2009; Wolf et al. 2010). For both the observed and simulated galaxies, we therefore use the methodology of Wolf et al. (2010) to calculate the dynamical mass within the half-light radius, or

M_1/2 ≃ 930 (σ_v/km s⁻¹)² (r_h/pc) M_⊙,

where σ_v is the line-of-sight velocity dispersion (as seen from the Milky Way) and r_h is the half-light radius. When calculating the mass-to-light ratios, we find the mass within the "circularized" half-light radius r_h √(1 − ε) (Sanders & Evans 2016), where ε is the ellipticity of the system as seen from the Milky Way. For comparison, we also plot the mass-to-light ratios calculated by directly summing the particle data enclosed within the half-light radius.
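Evaluating the estimator at the circularized radius then reduces to a few lines. In this sketch, dividing by half the total V-band light is our reading of "within the half-light radius" (an assumption), and the function name is ours.

```python
def wolf_mass_to_light(sigma_v_kms, r_h_pc, eps, L_V):
    """M/L within the half-light radius via the Wolf et al. (2010)
    estimator, evaluated at the circularized radius r_h*sqrt(1 - eps)."""
    r_circ = r_h_pc * (1.0 - eps) ** 0.5
    m_half = 930.0 * sigma_v_kms**2 * r_circ  # Msun
    return m_half / (0.5 * L_V)               # half the light lies within r_h
```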
The results in Figure 9 match the observational data, whose masses are also derived using equation 4. The simulations reproduce the general trend, as well as the scatter, with the faintest systems being dominated by dark matter. For the most part, the mass-to-light ratios derived from equation 4 are close to the true values derived from particle data, indicating the general robustness of the Wolf et al. (2010) estimator (see also Campbell et al. 2017; González-Samaniego et al. 2017). The outlier to this trend, with L_V ∼ 10^5.5 L_⊙ and a particle-derived mass-to-light ratio < 10, has a ∼0.5 dex lower ratio when inferred from the velocity dispersion. This galaxy corresponds to the compact system (see Figure 6) with limited dark matter content; it is seemingly an ultra-compact dwarf analog, which we discuss in more detail in Section 6.3. A few of the simulated galaxies with L_V < 10^4.5 L_⊙ are more dark matter-dominated than observed systems in the same luminosity range. These galaxies are mostly field galaxies that have never had an infall; in other words, they have never been substantially tidally stripped. As shown in the previous section, these are the galaxies that are dynamically hottest in this luminosity range. On the other hand, all of the observed galaxies in this range are either satellites of the Milky Way or Andromeda; the observations may therefore be biased toward more heavily stripped halos, and subsequently lower mass-to-light ratios.

[Figure 9. Mass-to-light ratios within the half-light radius for galaxies with at least 50 star particles. We show the values as derived using the Wolf et al. (2010) mass estimator (equation 4), separated into present-day field, satellite, and backsplash populations. For all galaxies, we also show the mass-to-light ratios as derived from the simulation particle data; the two methods are generally consistent with each other, even at higher luminosities where the systems are no longer dispersion-supported. We compare to observed dwarf galaxies, whose values have all been derived using the Wolf et al. (2010) relation. The case where only an upper limit exists on the observations is shown as a downward red arrow. Across the whole range of luminosities, the simulated galaxies match the observed relation. In the faint end, simulated galaxies with higher mass-to-light ratios tend to be field galaxies, unlike the observational data. We note that above ∼10^7 L_⊙, galaxies transition to rotation support, so the inferred mass-to-light ratios should be treated with caution.]
We additionally note that in Figure 9, galaxies with L_V ≳ 10^7 L_⊙ transition from primarily dispersion-supported to rotation-supported galaxies, and so equation 4 is no longer valid. Despite this, for the simulated galaxies the estimator remains consistent with the particle data across the entire range of luminosities, demonstrating its robustness. Nonetheless, the mass-to-light ratios of the brightest observed galaxies in the figure should be treated with caution.
Metallicities
Observations of Local Group galaxies follow a universal relationship between stellar mass (or luminosity) and stellar metallicity, across orders of magnitude in mass and across various morphologies (e.g., Kirby et al. 2013a, 2019). At higher masses, numerous groups are now able to reproduce these trends (e.g., Brooks et al. 2007; Ma et al. 2016; De Rossi et al. 2017; Christensen et al. 2018; Torrey et al. 2019). At lower masses, however, most simulations produce galaxies with stellar metallicities below those observed (e.g., Macciò et al. 2017; Revaz & Jablonka 2018; Wheeler et al. 2019). Part of the challenge stems from the need to ensure inefficient star formation via feedback while also retaining metals in the interstellar medium to be incorporated in subsequent generations of stars. Multiple explanations have been offered to explain the too-low stellar metallicities, including pre-enrichment from Population III (Pop III) stars or varying IMF yields (e.g., Revaz & Jablonka 2018; Wheeler et al. 2019), insufficient time resolution, pre-enrichment from the more massive host galaxy (Wheeler et al. 2019), or too-efficient feedback.
The left panel of Figure 10 shows the luminosity-metallicity relationship for all galaxies in the sample. To maximally separate out the effects of environment, we plot satellite and backsplash galaxies with filled squares while showing galaxies that have never had an infall as empty squares. We compare to observed Milky Way satellite galaxies (Simon 2019; McConnachie 2012), as well as Local Group dIrrs and M31 satellites (Kirby et al. 2013a). We also compare to simulations of field dwarf galaxies from the literature (…).

[Figure 10. … There is broad agreement between the observations and the simulations presented in this work for galaxies with L_V ≳ 10^4 L_⊙. For galaxies below 10^4 L_⊙, simulated metallicities appear to be lower than observations, but substantially more metal rich than in Wheeler et al. (2019). Right: metallicities are calculated from total metals, also applying a floor of log Z/Z_⊙ = −4. Symbols are the same as the left panel, except we no longer compare to prior simulations. Compared to the left panel, agreement is improved even further, especially for the faintest galaxies. This agreement indicates that the simulated UFDs are able to retain metals in the ISM, but may be under-producing iron.]
Galaxies with L_V ≳ 10^4 L_⊙ are consistent with observations across the full luminosity range, and there are no systematic differences across environment in Figure 10, in agreement with the observed data. While the metallicities might be slightly low (but see below), their slope is consistent with observations. In fainter galaxies, with L_V < 10^4 L_⊙, the simulations are less successful at reproducing the observations, with a few of the simulated UFDs having stellar metallicities that are largely unenriched. However, most of the UFDs (even those with as few as 10 star particles) have experienced some level of cumulative chemical enrichment that brings their metallicity above the floor. The more metal-rich UFDs are fully consistent with observed galaxies. This is in contrast to the results of Wheeler et al. (2019), who found no metal enrichment at all in galaxies with M_star < 10^4 M_⊙, despite resolving these galaxies with > 100 star particles. Only at M_star > 10^5 M_⊙ do their galaxies approach the observed relation.
The discrepant results between this work and Wheeler et al. (2019) may reflect the different feedback implementations between our simulations and the FIRE-2 simulations. Work by Agertz et al. (2020a) showed that metallicity is highly sensitive to feedback implementation; we discuss this further in Section 7.1. Wheeler et al. (2019) suggest that a lack of Pop III or environmental pre-enrichment may account for their low simulated metallicities, but contributions from pre-enrichment may be insufficient: while highly uncertain, Pop III yields were likely iron-deficient (e.g., Iwamoto et al. 2005; Ishigaki et al. 2014), and even assuming solar abundance in the yields is unlikely to raise the simulated metallicities to observed values. Pre-enrichment from a more massive host is also unlikely, since as we show in the next section, these galaxies tend to quench long before they approach the Milky Way. This is reflected in Figure 10, which shows that galaxies that have never had an infall in our simulations have comparable metallicities to satellite and backsplash galaxies.
Similarly, Pop III or environmental pre-enrichment likely does not account for the low [Fe/H] of galaxies with L_V ≲ 10^4 L_⊙ in our simulations. Instead, the timing of star formation may be more important. The right panel of Figure 10 shows the luminosity-metallicity relationship again, but instead uses total stellar metallicity rather than [Fe/H]. The galaxies across the entire luminosity range, including the faintest galaxies, are consistent with the observed data. None of the UFDs are at or near the metallicity floor. This suggests the galaxies are successfully retaining metals in the ISM.
If the galaxies are both producing and retaining enough metals to enrich to the observed luminosity-metallicity relationship, then the low [Fe/H] in the faintest galaxies implies that they are simply under-producing iron relative to oxygen. Iron is produced predominantly in Type Ia explosions, which occur in our simulations on ∼Gyr timescales (Raiteri et al. 1996). It is likely, therefore, that star formation is stopping too soon relative to Type Ia delay times. One possibility is that the duration of star formation is too short in the UFDs. Another is that the timescale for Type Ia supernovae is too long, and that we need to include models for "prompt" Type Ia supernovae, occurring on ∼100 Myr timescales (Mannucci et al. 2006; Maoz et al. 2012). Finally, Pop III stars may pre-enrich galaxies to a higher [Fe/H] floor, though this seems unlikely as Pop III yields were likely iron-poor (e.g., Iwamoto et al. 2005).
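To make the timing argument concrete, the toy calculation below compares what fraction of a stellar population's Type Ia supernovae explode within 300 Myr (a plausible UFD star-forming window) under two schematic delay-time distributions: a broad distribution peaking near ∼1.5 Gyr, standing in loosely for a Raiteri et al. (1996)-style model, and a t^−1 power law with prompt events from 100 Myr, in the spirit of Maoz et al. (2012). Both functional forms and all numbers are illustrative assumptions, not the implementations used in the simulations.

```python
import numpy as np

t = np.linspace(0.04, 13.8, 2000)                        # Gyr after a starburst
dtd_slow = np.exp(-(np.log(t) - np.log(1.5))**2 / 0.5)   # broad DTD peaking ~1.5 Gyr (assumed form)
dtd_prompt = np.where(t >= 0.1, t**-1.0, 0.0)            # t^-1 DTD with prompt SNe from 100 Myr

for name, dtd in [("~Gyr-delay DTD", dtd_slow), ("prompt t^-1 DTD", dtd_prompt)]:
    cdf = np.cumsum(dtd) / dtd.sum()                     # fraction of SNe Ia exploded by time t
    frac = np.interp(0.3, t, cdf)
    print(f"{name}: {100 * frac:.0f}% of SNe Ia explode within 300 Myr")
```

Under the prompt distribution roughly a fifth of the Type Ia events fall inside the 300 Myr window, whereas the ∼Gyr-delay distribution yields essentially none; this is why a truncated SFH combined with delayed Type Ia enrichment naturally produces iron-poor but otherwise metal-normal stars.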
5. THE QUENCHING OF THE ULTRA-FAINTS
Of the UFDs with resolved star formation histories and ages, most appear to have formed the bulk of their stars early on (Okamoto et al. 2012;Brown et al. 2014;Weisz et al. 2014;Skillman et al. 2017). Such early quenching is consistent with UFDs being fossils of reionization (Bovill & Ricotti 2009). However, all observed UFDs with constrained SFHs are satellites of the Milky Way or M31 (with the exception of Leo T, which is in the field but may be a backsplash galaxy; see, e.g., Blaña et al. 2020), so it is difficult to rule out quenching due to interactions with the Milky Way. We compare both satellites of the Milky Way and near-field UFD galaxies in the same simulation, and use their orbital histories to show that feedback from reionization and/or supernovae is the dominant quenching mechanism.
The left panel of Figure 11 shows the cumulative (fractional) star formation histories of all galaxies in the sample, color-coded by V-band luminosity. SFHs are calculated from the star particles remaining in the galaxy at z = 0; stars that may have formed in a galaxy but been tidally stripped are not included. The galaxies exhibit a range of SFHs across all luminosities. On average, however, more massive galaxies form the bulk of their mass later than smaller galaxies. Several galaxies display "gaps" in their SFHs where previously quenched galaxies restart their star formation, similar to the phenomenon described in Wright et al. (2019). There also exist several galaxies that have delayed-onset star formation, with the first star formation starting well after the end of reionization. It is unknown whether any observed galaxies have such late star formation; these will be the subject of future work. Generally, however, most galaxies begin their star formation before z ∼ 6, and form the majority of their mass by z ≈ 2.
The right panel of Figure 11 includes only galaxies that are in the UFD range, along with the star formation histories of observed UFDs derived from color-magnitude diagrams (Weisz et al. 2014; Brown et al. 2014). For clarity, we plot only galaxies whose star formation lasts at least 100 Myr (footnote 11); the galaxies with < 100 Myr SFHs predominantly form as single-age populations within the first 500 Myr after the Big Bang (z ≳ 10). The simulated UFDs are color-coded by their environment (either satellite of the Milky Way or near-field galaxy). All of the observed UFDs with star formation histories, on the other hand, are satellites of either the Milky Way or Andromeda, with the exception of Leo T. In this luminosity range, most of the simulated galaxies quench by t ∼ 3 Gyr, regardless of whether or not the UFDs are satellites (one of the quenched UFDs restarts its star formation again at later times, which we discuss further in Section 6.1). The lack of environmental dependence suggests that quenching is caused by reionization and/or supernova feedback, rather than environmental effects.
Compared to the observations in the right panel of Figure 11, the simulated UFDs appear to quench faster, with less extended star formation. However, within the total uncertainties of Weisz et al. (2014), several of the observed UFDs are consistent with forming all of their stars before z = 3, as in the simulations. Additionally, the inferred star formation later than z ∼ 3 may be so slight that it would be difficult for the simulations to capture it given the resolution of the star particles. Finally, while there is a slight tension between the simulated SFHs and those of Weisz et al. (2014), our results are consistent with the SFHs of the 6 UFDs studied in Brown et al. (2014) (footnote 12). They found that all the UFDs in their sample formed 80% of their stars by z ∼ 6 and 100% by z ∼ 3, as in these simulations.
There is also one late-forming UFD in the right panel of Figure 11; this is a near-field galaxy just beyond the virial radius of Elena with a V-band magnitude of −7.9. Unlike the other UFDs in the sample, it began star formation well after reionization and undergoes a different evolution, which we discuss as a case study in Section 6.2.

Footnote 11: 36% of the UFDs have star formation lasting less than 100 Myr. It is unclear whether any real galaxies have star formation lasting less than 100 Myr; though many UFDs are consistent with exactly single-age populations (Brown et al. 2014), the uncertainty in stellar ages is well above this timescale at ∼1 Gyr.

Footnote 12: Three of the UFDs studied in Brown et al. (2014) also have star formation histories from Weisz et al. (2014). Two of the three galaxy star formation histories are consistent between the two studies, but the SFH for Canes Venatici II (CVn II) is discrepant for as-yet unknown reasons.

Figure 11. Cumulative star formation histories of galaxies in our sample. The left panel shows all galaxies, colored by their V-band luminosity. The dwarf galaxies display a wide array of SFHs, with a general trend that more massive galaxies form their mass later. The right panel shows only UFD galaxies, colored by their present-day environment (satellite or field). For clarity, in the right panel we do not plot simulated UFDs with star formation lasting less than 100 Myr. We also show observed UFD star formation histories derived from color-magnitude diagrams. For Weisz et al. (2014) we show their best-fit SFH as orange lines and their uncertainty as orange bands, while for Brown et al. (2014) we show as grey bands the full statistical uncertainty range of their cumulative SFHs as derived from their 2-burst model. Our UFDs have generally quick star formation, with most of them quenching by z ∼ 3. The star formation histories are consistent with those from Brown et al. (2014), but somewhat inconsistent with those of Weisz et al. (2014), who find later star formation in some UFDs. However, our results are largely consistent within their uncertainties. We note that both Brown et al. (2014) and Weisz et al. (2014) use isochrones older than the age of the Universe, and the latter sets the cumulative SFH to 0 at log(t) = 10.15 Gyr; we have made no correction for this, which is why their SFHs appear to start in many cases at t < 0.
To isolate the role of environment in UFD quenching, Figure 12 focuses on two processes pertaining to dwarf galaxies: star formation quenching and gas loss. The top panel shows the quenching time (here defined as τ_90, the time when a galaxy reached 90% of its final stellar mass) of all quenched galaxies as a function of infall time to 2 R_vir (footnote 13), colored by peak halo mass. If halos had multiple infalls, the time of their first infall is used. Halos which have never approached within 2 R_vir are assigned an infall time of 14 Gyr. We also mark the beginning and end of reionization as implemented in our simulations (z = 15 to 6). Figure 12 shows two different galaxy populations: galaxies that quenched uniformly early regardless of infall time, and galaxies whose quenching correlates with infall. The populations are approximately separable by mass, and the division between these galaxies occurs at M_peak ∼ 10^9.3 M_⊙. This division coincides with that of UFD galaxies, which have M_peak ≲ 10^9.5 M_⊙. Though we do not show it here, we have verified that the two populations of Figure 12 are just as clearly separated for infalls to 1 R_vir as for 2 R_vir. Additionally, for small halos hosting UFD galaxies, quenching was generally earlier than infall to 3 R_vir, let alone 1 R_vir. Combined with the general lack of connection between infall time and quenching time, the early cessation of star formation indicates that reionization and/or supernova feedback was responsible for quenching the majority of the UFDs. Larger halos, whose quenching is tied to infall, stop forming stars as a result of environmental effects. They are studied in more detail in Akins et al. (2020).

Interestingly, the processes responsible for quenching are not necessarily the same processes that remove gas from the galaxy. The bottom panel of Figure 12 shows the HI mass at infall (to 2 R_vir) for the same galaxies as in the top panel; any galaxy without gas or with HI mass < 10^3 M_⊙ is shown at the bottom of the panel. Galaxies that are star-forming at infall are shown as squares, while galaxies quenched at infall are shown as triangles.

Footnote 13: Previous work (e.g., Behroozi et al. 2014; Fillingham et al. 2018) has found that environmental effects from the Milky Way extend out to ∼2 R_vir, so we compare to infall at this radius. We have additionally confirmed that the results of Figure 12 hold true for infall radii between 1 and 3 R_vir.

Figure 12. Top: the quenching time (τ_90) of all quenched galaxies as a function of infall time to 2 R_vir, colored by peak halo mass. There are two different populations in the figure: one population, characterized by M_peak ≲ 10^9.3 M_⊙ (corresponding to UFD galaxies), is quenched uniformly early, regardless of infall time. More massive galaxies' quenching times are correlated with infall time. Bottom: the HI mass at infall (to 2 R_vir) for the same galaxies as in the top panel. Galaxies that are star-forming at infall are shown with squares, while galaxies that are quenched at infall are shown as triangles. Galaxies with less than 10^3 M_⊙ in HI at infall are shown at the bottom of the figure. Combined with the top panel, the figure shows three populations of galaxies: galaxies that have lost their gas and quenched prior to infall, galaxies that have quenched but retained their gas prior to infall, and galaxies that are quenched after infall.
The figure shows that a large number of galaxies that are already quenched at infall have retained substantial amounts of cold gas (we have confirmed the same holds true for infalls to 1 R_vir). The division for these galaxies occurs at M_peak ∼ 10^9 M_⊙. In other words, while the large majority of UFDs quench early, well before any interaction with the Milky Way, UFDs residing in more massive halos (10^9.0 ≤ M_peak ≤ 10^9.5 M_⊙) retain their gas until they interact with the Milky Way. The processes responsible for quenching (supernova feedback and reionization) do not fully heat or remove cold gas prior to infall. Yet, in the present day, as we discuss in Section 6.1, most of the UFDs that contained gas at infall no longer do.
In summary, we see a transition in the way reionization acts on halos as we increase our mass scale. UFDs in halos with M_peak ≲ 10^9.0 M_⊙ are quenched uniformly early; the vast majority of these halos also lose their gas quickly. Galaxies in halos with 10^9.0 ≤ M_peak ≤ 10^9.5 M_⊙ represent a transition range, in which the galaxy is quenched early but can retain some halo gas for many Gyr, until infall. At M_peak ≥ 10^9.5 M_⊙, galaxies are quenched environmentally, if at all; these galaxies also retain gas until the present day, as we discuss below.
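As a sketch of how the quenching time used throughout this section can be extracted in practice, the function below computes τ_90 from the formation times and masses of a galaxy's z = 0 star particles and compares it against a first-infall time. Only the 90%-of-final-stellar-mass convention comes from the definition above; the array contents and the halo bookkeeping around it are hypothetical.

```python
import numpy as np

def tau_90(t_form_gyr, m_form_msun):
    """Time (Gyr) at which the cumulative SFH reaches 90% of the final mass."""
    order = np.argsort(t_form_gyr)
    t = np.asarray(t_form_gyr, dtype=float)[order]
    m = np.asarray(m_form_msun, dtype=float)[order]
    csfh = np.cumsum(m) / m.sum()           # cumulative fractional SFH
    return t[np.searchsorted(csfh, 0.90)]   # first time the CSFH reaches 0.90

# Toy usage: star particles of one quenched dwarf (all values made up).
t_form = np.array([0.4, 0.6, 0.9, 1.2, 2.8])   # Gyr
m_form = np.array([1e3, 2e3, 1e3, 5e2, 5e2])   # Msun
t_infall = 9.5                                  # Gyr; set to 14 Gyr if never within 2 Rvir
t_q = tau_90(t_form, m_form)
print(f"tau_90 = {t_q:.1f} Gyr -> quenched {'before' if t_q < t_infall else 'after'} infall")
```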
6. CASE STUDIES
Here we discuss several dwarf galaxies that stand out, either for their unique properties or for their evolutionary histories. Taken together, they demonstrate how interactions in a Milky Way environment contribute to the diversity observed in faint dwarf galaxy properties.
6.1. Gas-rich UFDs

Figure 13 shows the HI fractions of all galaxies in the sample as a function of their distance from the Milky Way. Triangles at the bottom indicate galaxies devoid of HI. Points are sized by galaxy V-band luminosity; on the right are several points for comparison. Finally, the points are colored by specific star formation rate (sSFR; defined as SFR/M_star), with quenched galaxies as unfilled points. Star formation rates are calculated as in Tremmel et al. (2019): we calculate the 25 Myr SFR, except in cases where two or fewer star particles form. To minimize numerical noise in these cases we use the average SFR over 250 Myr.
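A minimal sketch of that SFR convention, assuming the inputs are the formation times and masses of a galaxy's star particles; the function and variable names are ours, not those of the actual analysis pipeline.

```python
import numpy as np

def sfr_msun_per_yr(t_form_gyr, m_form_msun, t_now_gyr):
    """SFR over the last 25 Myr; if two or fewer star particles formed in
    that window, fall back to a 250 Myr average to suppress shot noise."""
    t_form = np.asarray(t_form_gyr)
    m_form = np.asarray(m_form_msun)
    for window_gyr in (0.025, 0.250):
        recent = t_form > (t_now_gyr - window_gyr)
        if recent.sum() > 2 or window_gyr == 0.250:
            return m_form[recent].sum() / (window_gyr * 1e9)  # Msun / yr

# sSFR = sfr_msun_per_yr(...) / m_star_total, as defined in the text.
```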
Most of the brighter galaxies are actively star forming, and even the few that are quenched (or nearly so) retain some HI. While the lowest sSFR galaxies are all found within 1 R_vir, the higher sSFR galaxies can be found across a range of galactocentric distances. As seen in Figure 11, all UFDs are quenched. However, not all the UFDs are devoid of HI. Two UFDs near 2 R_vir (about 600 kpc) have nonzero HI masses; they are shown in solid circles in the figure. While unusual among UFDs, they are not as rare when considering slightly more massive galaxies; among galaxies with −10 < M_V < −8, there are three galaxies with HI, shown in dashed circles. Most of them are concentrated near 2 R_vir, but one of them is located within the virial radius. The more HI-rich UFD, with an HI mass of 3.5 × 10^5 M_⊙, should be detectable by surveys such as ALFALFA (Giovanelli et al. 2010) or through targeted observations. The more massive HI-rich dwarf galaxies may likewise be detectable through targeted searches, including the nearest one, which has a distance of 200 kpc and HI mass of 4 × 10^4 M_⊙.

Figure 13. HI fraction as a function of galactocentric distance for all galaxies in the sample. Squares represent star-forming galaxies, colored by specific star formation rate, while triangles represent quenched galaxies. Galaxies devoid of HI are placed at the bottom of the plot. Squares and triangles are sized according to the galaxy's V-band luminosity, with references to guide the eye on the right-hand side of the plot. While most faint galaxies are quenched and have no gas or HI, a few have retained their gas. The solid circles highlight two UFDs that have non-zero HI masses, located at ∼2 R_vir, while the dashed circles show slightly brighter galaxies (−10 < M_V < −8) with HI.
The standard view of UFD galaxies is not only that they quench during or shortly after the epoch of reionization, but that they are devoid of gas. Previous searches for HI in UFDs have yielded upper limits with no detections (e.g., Grcevich & Putman 2009; Spekkens et al. 2014; Westmeier et al. 2015; Crnojević et al. 2016). Leo P (M_V = −9.27) is among the fainter known galaxies hosting HI, but at a distance of 1.62 Mpc (Giovanelli et al. 2013; Rhode et al. 2013; McQuinn et al. 2015) it is far more isolated than most known UFDs. Currently, Leo T (whose luminosity of M_V = −8.0 is on the edge of the UFD definition; see, e.g., Simon 2019) is the faintest known galaxy hosting HI (Irwin et al. 2007; Ryan-Weber et al. 2008); at 420 kpc from the Milky Way, it is also among the more distant of the known galaxies at such low luminosity.

Figure 14. Evolutionary history of an HI-rich UFD. The top panel shows dark matter, total gas, and HI mass of the main progenitor through time. The middle panel shows its galactocentric distance (interpolated between time steps with a cubic spline) as well as the virial radius of the Milky Way-like galaxy Sandra. The bottom panel shows the cumulative star formation history. The purple band shows z = 15 to 6, the epoch of reionization. Despite quenching early, the galaxy retained all of its gas until falling into ∼2 R_vir. At this point, it began losing gas but continued gaining HI, as well as restarting star formation. Nonetheless, it retains appreciable HI content at z = 0.

Figure 14 shows the evolutionary history of the most HI-rich UFD galaxy in Figure 13 (halo 24 in Sandra). The top panel shows the dark matter, total gas, and HI mass of the main progenitor. The middle panel shows the galactocentric distance of the halo as well as the virial radius of the Milky Way-like main halo. For visualization purposes, the galactocentric distance is interpolated between time steps on a cubic spline, though as shown in Richings et al. (2020) this can lead to underestimates in pericentric distances. The bottom panel shows the cumulative SFH. The purple band indicates the duration of the epoch of reionization. Figure 14 shows that the HI-rich UFD had a largely uneventful history; star formation began shortly after the epoch of reionization, and it formed the bulk of its stars around 1.5-2.5 Gyr, at which time it also lost most of its cold gas. It continued accreting dark matter throughout its lifetime, showing no obvious signs of interaction with either the Milky Way or other dwarf galaxies. However, during its long approach to the Milky Way, it began accumulating HI again, while also losing gas overall. When it neared 2 R_vir, it restarted star formation, similarly to the reignited galaxies of Wright et al. (2019). The increase in HI mass, decrease in total gas mass, and renewed star formation can all be explained by ram pressure on the infalling galaxy as it approaches the halo of the Milky Way-like host; ram pressure compresses the galaxy's gas, increasing gas densities and promoting star formation (e.g., Fujita & Nagashima 1999; Bekki & Couch 2003; Du et al. 2019). We classify it now as quenched (it last formed stars 400 Myr ago), but it may be more accurately described as forming stars at a rate below our resolution, as the formation of star particles at such low SFR is subject to shot noise.
Recently, Janesh et al. (2019) found 5 candidate UFD galaxies in imaging follow-up to ultra-compact high-velocity clouds discovered in the ALFALFA HI survey (Giovanelli et al. 2005). Of the candidates, several have distances of ∼2-3 R_vir, HI masses of ∼10^5-10^6 M_⊙, and estimated magnitudes of −4 to −7. Given their similar properties, the very faint yet gas-rich galaxies of Figure 13 may serve as simulated counterparts to these recently discovered candidate galaxies, and can offer insight into their origin.
6.2. Late-forming UFD
Another outlier among the UFDs is the late-forming UFD of Figure 11 (halo 409 in Elena). It not only began star formation later than 2 Gyr, it then continued forming stars for over 7 Gyr, albeit with some periods of quenching during that time. Both its late onset and long duration make it unusual among UFDs. Figure 15 shows this galaxy's evolutionary history, akin to Figure 14. We additionally mark with a dashed vertical line the (approximate) time of pericenter during the halo's orbit. Interestingly, this halo began forming stars near apocenter, and continued forming stars (though with long pauses) until pericenter, at which point it quickly lost all of its gas and over 90% of its dark matter. Similarly dramatic tidal stripping occurring near pericenter is common (e.g., Klimentowski et al. 2009; Peñarrubia et al. 2010), particularly on highly eccentric orbits with close approaches, and in gas-rich dwarf galaxies where ram pressure stripping lowers the central density of the halo (e.g., Kazantzidis et al. 2017). We note that this galaxy is a backsplash galaxy; due to its eccentric orbit, it is currently in the near-field despite having a past pericentric passage closer than 50 kpc.
While Section 5 showed that the bulk of the UFDs are quenched early on by reionization and/or feedback, this galaxy demonstrates that even these seemingly simple systems can exhibit a variety of histories. While most of the observed UFD star formation histories show early quenching (see Figure 11), galaxies such as this one may explain the later star formation observed in a couple of the Weisz et al. (2014) UFDs. This kind of UFD is quite rare in our sample, however, so additional CMD-derived star formation histories are needed to better constrain how rare such dwarfs are in the observed Universe.
In addition to its unique star formation history, this galaxy's kinematics and structure are worth noting. With a half-light radius of 600 pc, M_V = −7.9, and a line-of-sight velocity dispersion of 3.2 km/s, this galaxy is in many ways an analog of Crater 2 (M_V = −8.2; Torrealba et al. 2016), which is unusually large (r_h ∼ 1 kpc) and cold (σ_v = 2.7 km/s). For halo 409, the same severe tidal stripping that quenched its star formation is also likely responsible for its low velocity dispersion; before tidal stripping, it had a velocity dispersion of 6.5 km/s, which, while low, would not be rare. In Figure 8, this galaxy is one of the most severely tidally stripped, and also has one of the lowest velocity dispersions of any galaxy in the simulations.
Prior work using the APOSTLE simulations addressed the formation of cold, large galaxies such as Crater 2 (Torrealba et al. 2018), and similarly predicted that severe mass loss could explain their structure. However, as they could not directly probe such faint galaxies, they instead tied their derived stellar mass-halo mass relation with the tidal stripping evolutionary tracks of Errani et al. (2015) to infer progenitor properties from present-day dwarf galaxies. While our halo 409 exhibits a similar total mass loss to their predictions (∼99%), they also predict similar tidal stripping in the stellar component. Halo 409, on the other hand, lost less than 10% of its stars. This preferential stripping of the dark matter component may explain a lack of (so-far) observed tidal debris in the vicinity of Crater 2. There are two additional Crater 2 analogs in the simulations, which similarly underwent severe tidal stripping: halo 1467 in Sandra has M_V = −8.3, r_h = 1.3 kpc, and σ_v = 2.6 km/s, and halo 2026 in Sandra has M_V = −7.8, r_h = 1.05 kpc, and σ_v = 2.5 km/s. These are among the most diffuse galaxies in our sample, with central surface brightnesses of ∼30 mag arcsec^−2. Unlike the above-discussed halo 409 in Elena, however, they have been stripped of the majority of their stars, and so would potentially have observable tidal debris in their vicinity.
6.3. Compact Dwarf
Below, we discuss a compact galaxy that forms in these simulations. Cosmological simulations, while successful in reproducing a wide array of dwarf galaxies, have had trouble simulating compact dwarf galaxies (e.g., Jeon et al. 2017; Fitts et al. 2017; Revaz & Jablonka 2018; Garrison-Kimmel et al. 2019a). In particular, none of these previous cosmological simulations have reproduced ultra-compact dwarf galaxies (UCDs; Hilker et al. 1999; Drinkwater et al. 2000; Phillipps et al. 2001), a population of galaxies with M_star ∼ 10^6-10^8 M_⊙ and r_h ∼ 10-100 pc, nor have they produced compact elliptical (cE) galaxies (M_star ∼ 10^8-10^10 M_⊙ and r_h ∼ 100-700 pc), of which M32 is the prototype.
In Figure 6, there is a clear outlier in the size-luminosity plane. While hosting a typical V-band magnitude of −9.2, it is unusually compact, with a half-light radius of 40 pc. This size and luminosity places it firmly within the faint end of the UCD population (Brodie et al. 2011). As we discuss below, this galaxy (halo 1179 in Sandra) is the remnant of a severely tidally stripped dwarf galaxy. Figure 16 shows the galaxy's evolutionary history; the top panel shows the dark matter, gas, and stellar mass of the halo as a function of time, the middle panel shows the orbital history of the galaxy, and the bottom panel shows the archaeological cumulative star formation history, i.e. the star formation history as inferred from just the remnant stellar population. We also mark the times of most severe gas loss and dark matter loss. Interestingly, while the galaxy loses its gas at second pericenter, it loses its dark matter at the following apocenter. It may be that the halo is tidally shocked during its pericentric passage, resulting in heating and expansion that lead to greater susceptibility to tidal mass loss. Alternatively, ram pressure stripping at pericenter may have left the halo more susceptible to stripping (e.g., Kazantzidis et al. 2017). Ultimately, the tidal stripping was incredibly severe, with the galaxy losing all of its gas and over 99.99% of its dark matter over the course of several Gyr. Currently, it would be observationally consistent with being devoid of dark matter.
The bottom panel of Figure 16 shows that the stars that constitute the central cluster formed within a very short period of time at ∼4 Gyr, at the time of the galaxy's first pericenter. These stars formed as a single, compact cluster, with approximately the same half-light radius as their present-day descendent. We find that the surviving stars all formed at the extreme high-pressure tail allowed by our star formation model (see, e.g., Munshi et al. 2014), which may explain their quick formation and initial compact nature. The summary of the formation scenario for this galaxy, then, is that it is the remnant of a large star cluster that formed in a typical, dark matter-dominated dwarf galaxy, that was then stripped of all dark matter, leaving only the compact, dense cluster as a dark matter-free galaxy. Among the many UCD formation scenarios proposed, halo 1179's formation is most consistent with being the nuclear remnant of a tidally "threshed" dwarf galaxy (e.g., Bassino et al. 1994; Bekki et al. 2001).
However, this galaxy is at the edge of our resolution; in fact, with a gravitational softening length of 87 pc, the half-light radius is below our force resolution. It is therefore in some ways surprising that the galaxy is dynamically stable. This simulation allows for a minimum hydrodynamical smoothing length of 11 pc, making the gas clump that formed this cluster hydrodynamically resolved at the time of formation. The structure of the stars has evolved little since formation, despite being below the force resolution. However, given its sub-resolution size, we are cautious that this galaxy may be influenced by unidentified numerical issues. Additionally, it is possible that the dark matter halo was stripped too efficiently due to artificial numerical disruption (van den Bosch & Ogiya 2018).
Finally, we note that this galaxy was identified using AHF, which is tuned to find cosmological overdensities (see Section 2) and is therefore biased towards the prevailing dark matter-dominated galaxies. The only reason this galaxy was identified at all is its extremely high baryonic density. It is therefore likely that other dark matter-free and/or compact galaxies of slightly lower density are not being identified. Future work will return to these topics using alternate methods for identifying galaxies.

Figure 16. Evolutionary history of a compact dwarf galaxy. The top panel shows dark matter, HI, and stellar mass of the main progenitor through time. The middle panel shows its galactocentric distance (interpolated between time steps with a cubic spline) as well as the virial radius of the Milky Way-like host. The bottom panel shows the archaeological cumulative star formation history, i.e. the star formation history as inferred from the unstripped stellar population remaining at z = 0. We mark using vertical dashed lines the time of greatest gas and dark matter loss (defined by steepest logarithmic slope), which correspond to the second pericenter and apocenter after infall, respectively. The archaeological star formation history differs from the true stellar mass through time because most of the stellar material has been stripped by the present day.
7. DISCUSSION
We have presented a new set of cosmological hydrodynamic simulations of Milky Way-like galaxies, capable of resolving satellite and near-field galaxies down to M_V ∼ −4. These simulations simultaneously produce realistic galaxies from the UFD regime to Milky Way mass, with the same feedback and star formation recipes for all galaxies.
7.1. Structural Properties and Scaling Relations
To date, several simulation groups have simulated dwarf galaxies in the same luminosity ranges as in this work, at comparable or higher resolution. However, none yet have done so in the environment of the Milky Way, which requires substantially more computational investment. Nonetheless, many of the prior simulations, like Simpson et al. (2013), Oñorbe et al. (2015), Fitts et al. (2017), Jeon et al. (2017), Revaz & Jablonka (2018), and Agertz et al. (2020a), yield results consistent with many of our scaling relations, albeit each doing so in a much narrower luminosity range. Much of this consistency likely results from the high dynamical mass-to-light ratios of these low-mass dwarfs, which lead to the dark matter halos setting the structural properties of the galaxy (e.g., Agertz et al. 2020a).
While our galaxy sizes and metallicities are consistent with those of other groups, our results are in some tension with those of Wheeler et al. (2019). Their faintest galaxies are more diffuse (Figure 6) and less chemically enriched (Figure 10) than those in our simulations. There are several possible explanations for these discrepancies. Recently, Agertz et al. (2020a) demonstrated that the mass-metallicity relation is highly sensitive to feedback strength. Explosive feedback can shut down star formation quickly and expel enriched gas, leaving stellar metallicities well below the observed relation. In their tests, the strongest feedback resulted in essentially primordial abundances. The feedback implementation in the FIRE-2 simulations is quite different from those implemented in ChaNGa. While we have included only thermal energy from supernovae as feedback, the FIRE-2 feedback model incorporates more feedback channels, including both energy and momentum injection from supernovae, radiation heating, and radiation pressure. Recently, Iyer et al. (2020) showed that dwarf galaxy SFHs are burstier in FIRE-2 than in ChaNGa, which may be a reflection of the different feedback implementations.
Most of the UFDs in Wheeler et al. (2019) are lower surface brightness than we find for our UFDs, but they are also at fainter luminosities than we are able to explore. Thus, it is not clear if there is a genuine discrepancy between our size results and theirs, but we note that all of the Wheeler et al. (2019) UFDs at these fainter magnitudes are larger than have been observed, despite having a gravitational force softening of only 14 pc. Thus, we speculate on how the different feedback strengths might impact sizes. At higher masses, feedback has been shown to heat the stellar component (Agertz & Kravtsov 2016; El-Badry et al. 2016; Chan et al. 2018). Perhaps explosive outflows could also lead to large sizes in UFDs. However, the FIRE-2 simulations produce realistic sizes in higher-mass dwarfs (at lower resolutions; Oñorbe et al. 2015; Fitts et al. 2017), indicating that if the feedback implementation is affecting galaxy sizes, it would be a resolution-dependent phenomenon. Alternatively, Revaz et al. (2016) found that 2-body relaxation in their simulated galaxies led to a lack of compact dwarfs. Ludlow et al. (2020) recently demonstrated that gravitational softening lengths that are too small can exacerbate this issue and lead to greater galaxy sizes than with larger softening lengths.
Finally, we note that none of the FIRE-2 galaxies in Garrison-Kimmel et al. (2019a) had velocity dispersions below 5 km/s, despite capturing the same environmental processes that in our simulations lead to dispersions as low as ∼2 km/s. It is possible that dynamical heating from 2-body interactions plays a role. Alternatively, stars may be born too kinematically hot, rendering it difficult to lower the velocity dispersion below 5 km/s even with tidal stripping. In fact, Sanderson et al. (2020) recently showed that, despite forming in dense, self-shielding gas, the youngest stars in the FIRE-2 Milky Way simulations have (total) velocity dispersions 20 km/s higher than those observed in the Milky Way (see their Figure 2). Likewise, El-Badry et al. (2016) and Yu et al. (2020) showed that some stars in FIRE-2 are born in feedback-driven superbubbles with large initial radial velocities.
7.2. Quenching
Our results indicating that most UFDs were likely quenched by reionization (and feedback) are in line with previous cosmological simulations of field dwarf galaxies (e.g., Simpson et al. 2013; Munshi et al. 2013, 2017, 2019; Wheeler et al. 2015, 2019; Jeon et al. 2017; Revaz & Jablonka 2018; Rey et al. 2019, 2020). We note that Rey et al. (2020) also find that above M_peak ∼ 10^9 M_⊙ dwarf galaxies quenched by reionization can remain gas-rich. While in our simulations these galaxies generally lose their gas later as they fall into the Milky Way, Rey et al. (2020) find that in the field these galaxies can continue to accrete gas, and even reignite their star formation. This presents a possible second mechanism for restarting star formation, in addition to the ram pressure-induced star formation discussed in Section 6.1 or Wright et al. (2019).
In this work, we have not separated out the contributions of feedback and reionization in UFD quenching at high redshift. Many prior works have relied on the timing, as we have here, to infer that reionization is primarily responsible for quenching. To separate the effects of the two processes, Jeon et al. (2017) instead resimulated a field UFD with energetic supernova feedback turned off. They found that the UFD did not quench in the latter run, implying that while reionization is important in quenching, feedback is also a necessary contributor.
On the other hand, Katz et al. (2020) used high-redshift radiation hydrodynamic simulations to study early galaxies, and found that in the absence of reionization, galaxies would not quench. Even in halos that do not form stars at all, outside-in reionization causes a net outflow of gas, which does not occur in their simulation without reionization. At higher masses, however, supernova feedback begins to contribute to the outflow rate from halos. Unfortunately, it appears that the importance of reionization versus supernova feedback may be dependent on the specific feedback implementation, so we cannot assume results from their simulations would hold true in ours. We leave separating the effects of the two processes to future work.
Observationally, reionization quenching is consistent with previous works that compared infall times from dark matter-only simulations with CMD-derived star formation histories (e.g., Rocha et al. 2012; Weisz et al. 2015; Rodriguez Wimberly et al. 2019; Fillingham et al. 2019). Recent work deriving UFD orbits using Gaia proper motions also shows that many UFDs likely had later infalls than quenching times (e.g., Fritz et al. 2018; Simon 2018). Recently, Miyoshi & Chiba (2020) directly compared the integrated orbital histories of several UFDs with the peaks in their inferred star formation histories. Unlike earlier works that use a static potential, they explicitly modeled the growing mass and radius of the Milky Way (compare, e.g., Figure 15 to their Figure 1). They also found that star formation in UFDs occurs well before infall. However, they did find evidence that one UFD (CVn I) had a second burst of star formation at infall, suggesting that some ultra-faint dwarf galaxies retained gas until infall, as we found in this work. This example emphasizes the need for more observed UFD star formation histories in order to quantify how much variety there is, if any, in UFD star formation.
7.3. Caveats
While these simulations represent a step forward in the modeling of galaxies, particularly dwarf and ultra-faint dwarf galaxies, there remain limitations that this work, as with all simulations, still faces.
7.3.1. Reionization Model
In these simulations, we adopted the uniform background UV photoionization and photoheating rate of Haardt & Madau (2012). This model has been shown to spuriously heat the IGM too early (Oñorbe et al. 2017). Since we focus in this work on faint galaxies that are often quenched during reionization, correcting for a later reionization model may have a particularly large effect on the SFHs presented in Section 5. More recent UV background models (e.g., Puchwein et al. 2019) have corrected for this discrepancy, but the simulations presented in this work were begun before their release. On the other hand, because most UFDs infall to their parent halo much later than z = 6, we do not expect a later reionization model to alter our conclusions about the source of quenching. Additionally, the overdense Milky Way environment may be better represented by an earlier reionization (Li et al. 2014). Garrison-Kimmel et al. (2019b) found that using a later reionization model in their simulations shifts the majority of the star formation to even earlier times, as more stars are allowed to form in the pre-reionization era; if the same were true in our simulations, it would not change any of our main results. However, the specific mass at which galaxies transition from reionization and feedback quenching to environmental quenching may change, and galaxies in the UFD range could potentially form more stars before reionization ends, shifting them to higher masses. Future work will explore the impact of changing the reionization model in ChaNGa.
With the exception of a model for tracking Lyman-Werner radiation (Christensen et al. 2012), there is also no radiative transfer (RT) in these simulations. Ideally, reionization would be simulated self-consistently with RT rather than imposed as a uniform background. Radiative transfer, however, is computationally expensive, and simulations relying exclusively on RT without a cosmic UV background have largely stopped at high redshift (e.g., Wise et al. 2014; Gnedin 2014; O'Shea et al. 2015; Pawlik et al. 2017; Rosdahl et al. 2018), both due to the expense of RT and the need to resolve cosmologically representative volumes.
7.3.2. Resolution
A variety of recent works have shown that low-mass galaxies tend to approach a minimum size, which grows with time even for quiescent galaxies (e.g., Furlong et al. 2017; Revaz & Jablonka 2018; Pillepich et al. 2019). A likely contributor to this behavior is numerical: the 2-body relaxation of unequal-mass particles, such that the more massive (dark matter) particles sink to the bottom of the potential well and the less massive (star) particles slowly diffuse outward (e.g., Binney & Knebe 2002; Ludlow et al. 2019b). When approaching a simulation's resolution limits and studying poorly resolved galaxies, this can set a floor on the size of a galaxy and prevent, for example, the modeling of ultra-compact dwarf galaxies (e.g., Garrison-Kimmel et al. 2019a).
We expect spurious dynamical heating of the stellar component to be more severe for larger mass ratios (Ludlow et al. 2019b). Since star particles in our simulations are smaller than our gas particles, which themselves are initially smaller than the dark matter particles by the ratio Ω_bar/Ω_DM, it is possible that 2-body interactions are even more impactful in our simulations than in some others. However, as evidenced in Figure 6 and others, we are indeed effectively resolving fairly compact galaxies. Likewise, we see no evidence of the size floor that can be introduced by 2-body effects (Ludlow et al. 2019b).
Using collisionless simulations, Ludlow et al. (2019a) argue that halos are resolved above a convergence radius r_conv ≈ 0.055 × l, where l is the mean inter-particle spacing, and l = L/N^{1/3}, where L is the simulation box size and N is the number of particles. Ludlow et al. (2019b) find that spurious growth due to 2-body interactions is confined largely to radii below r_conv. The dependence on softening length is weak, so long as the softening length is smaller than the convergence radius. Convergence, therefore, is based almost exclusively on particle number. The high-resolution region starts with equivalent resolution to a 6144^3 grid, which would yield a convergence radius of r_conv ≈ 450 pc. If including baryons in the mean inter-particle spacing, the convergence radius reduces to 300 pc. Even so, Figure 6 shows that several galaxies have half-light radii smaller than this value, though caution should be used in interpreting these particular galaxies.
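As a consistency check on those numbers (the box size is not restated here, so the L ≈ 50 Mpc below is our assumption, chosen because it reproduces the quoted value):

\[
r_{\mathrm{conv}} \approx 0.055\,\frac{L}{N^{1/3}} \approx 0.055 \times \frac{50\ \mathrm{Mpc}}{6144} \approx 0.055 \times 8.1\ \mathrm{kpc} \approx 450\ \mathrm{pc}.
\]

Counting the baryonic particles as well raises the effective N, shrinks l, and lowers r_conv to the quoted ∼300 pc.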
7.3.3. Feedback Models
In this work, star particles were treated as simple stellar populations, in which the deposited supernova energy is calculated by integrating the IMF to calculate the number of exploding stars. As simulations increase in resolution, however, this methodology becomes insufficient, as such small stellar populations may contain only a few stars that explode as supernovae. It therefore becomes necessary to stochastically sample from the IMF. Applebaum et al. (2020) showed that while using a stochastic IMF has no effect on more massive galaxies (see, however, Su et al. 2018), for galaxies in the UFD range stellar feedback becomes more effective and their stellar masses are reduced. However, there appears to be no change in the metallicity of the galaxies.
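To illustrate what stochastic IMF sampling means in practice, the sketch below draws an integer number of core-collapse SNe for a star particle from a Poisson distribution whose mean is the IMF-integrated expectation, instead of depositing the fractional expected energy directly. The IMF slope, normalization range, and progenitor mass limits are generic assumptions, not the exact values used by ChaNGa or Applebaum et al. (2020).

```python
import numpy as np

rng = np.random.default_rng(42)

def expected_ccsne(m_particle_msun, m_lo=8.0, m_hi=40.0):
    """Expected number of core-collapse SN progenitors (8-40 Msun here, an
    assumption) for a star particle, using a single power-law IMF slope
    dN/dm ~ m^-2.3 normalized by mass over 0.08-100 Msun (also assumed)."""
    alpha = -2.3

    def int_m(a, b, p):  # integral of m^p dm from a to b
        return (b**(p + 1) - a**(p + 1)) / (p + 1)

    # IMF coefficient A such that A * integral(m * m^alpha) = particle mass
    coeff = m_particle_msun / int_m(0.08, 100.0, alpha + 1)
    return coeff * int_m(m_lo, m_hi, alpha)  # expected count in [m_lo, m_hi]

m_particle = 1000.0                     # Msun, toy star particle
n_expected = expected_ccsne(m_particle)
n_sampled = rng.poisson(n_expected)     # stochastic integer realization
print(f"expected {n_expected:.1f} SNe, sampled {n_sampled}")
```

With these assumed parameters a 1000 M_⊙ particle expects only a handful of SNe, so the Poisson scatter between realizations is large, which is precisely why the stochastic treatment matters for UFD-mass galaxies while leaving massive galaxies essentially unchanged.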
The simulations in this work were not run with a stochastic IMF, and so it is possible that with the more realistic feedback model the UFDs would be generally less massive. Additionally, it is in principle possible that the burstier feedback could lead to structural changes (e.g., by producing stronger repeated gas outflows). However, at the low masses of the UFDs, it is likely that galaxy morphology is set primarily by the dark matter halo properties and assembly history (Rey et al. 2019;Agertz et al. 2020a). Additionally, no conclusions regarding the quenching of the ultra-faint dwarf galaxies would change, since any change in feedback from a stochastic IMF would lead toward even earlier quenching times.
Future work will additionally explore the impacts of different feedback models, including a superbubble model of supernova feedback that includes thermal conduction and models the subgrid multiphase ISM (Keller et al. 2014). Preliminary work has shown that dwarf galaxy properties remain nearly identical between the blastwave and superbubble feedback implementations, but that stellar masses in Milky Way-mass galaxies are suppressed with superbubble feedback relative to blastwave by a factor of 2 to 3 (Keller et al. 2015).
8. SUMMARY
We have introduced a new suite of cosmological hydrodynamic zoom-in simulations, the DC Justice League simulations, run at "Mint" (87 pc) resolution with the ChaNGa N-body + SPH code, and focusing on Milky Way-like environments. This suite has the highest published mass resolution for cosmological Milky Way-like simulations run to z = 0, and pushes the boundaries of resolution forward to move beyond the classical dwarf regime and begin the study of fainter dwarfs like those rapidly being discovered by digital surveys. With these simulations, we study a sample of 86 galaxies with M_V ≲ −3, out to a distance of 2.5 R_vir, including satellite and near-field galaxies.
We first compared our new galaxies to observations, ensuring that they are realistic and representative, and showed that our galaxy formation models continue to explain observations down into the UFD range. We found that the two simulations presented here, Sandra and Elena, whose galaxies span approximately 6 dex in luminosity (excluding the central Milky Way), reproduce the observations for a variety of scaling relations. In particular, with the exception of compact ellipticals like M32, these galaxies span the full range of luminosities for a given size, down to ∼200 pc in half-light radius (Figure 6). The galaxies also span the full range of observed kinematics of dispersion-supported systems (Figure 7), with tidal stripping responsible for the dynamically coldest galaxies. Given their central surface brightnesses, we predict all nearby galaxies will be observable by the Vera Rubin Observatory's co-added LSST.
We found our metallicities are generally consistent with observations for all galaxies with L_V ≳ 10^4 L_⊙ (Figure 10). The faintest galaxies are under-enriched in Fe compared to observed dwarfs, but the discrepancy disappears if total metallicity is considered. This result suggests that the galaxies are forming and retaining metals but may be Fe-poor due to either SFHs that are truncated before enrichment by SNe Ia, or the lack of a model for "prompt" SNe Ia in our simulations. Future work will investigate the discrepancy further.
We took advantage of the high resolution of the simulations to investigate the star formation and quenching of UFDs (M_V ≳ −8) in Section 5. We found that their SFHs are largely consistent with the limited number of available CMD-derived SFHs (Brown et al. 2014; Weisz et al. 2014). Additionally, while the large majority of UFDs quench uniformly early and long before infall, many of the quenched UFDs still retain their gas until later interactions with the Milky Way.
In Section 6, we also highlighted dwarf galaxies that are the first of their kind to be simulated around a Milky Way-mass galaxy. One of them is an HI-rich UFD, which is atypical in our simulations and unseen in observations of UFDs near the Milky Way. We find that while quenched early on, this galaxy retained its gas until its first infall towards the Milky Way, at which point it also briefly restarted forming stars. We also highlighted a late-forming UFD that is structurally similar to Crater 2, which both started forming stars late and maintained ongoing star formation for many Gyr before quenching during a close pericentric passage. Finally, we examined a compact, dark matter-free dwarf galaxy in our simulations, which formed as the remains of a tidally threshed galaxy. This galaxy may serve as an analog to the observed ultra-compact dwarf galaxies. These rare and unusual galaxies emphasize the need for additional observations (such as CMD-derived star formation histories or targeted HI observations) that can quantify the full diversity of faint dwarf galaxies.
The simulations we have introduced in this work demonstrate that a unified set of physics can simultaneously explain galaxy formation across many orders of magnitude, as well as naturally reproduce the variety seen in observations. The simulations show that one of the primary drivers of the variety seen in nearby galaxies is interaction with the Milky Way galaxy. These simulations fill a gap in the available literature, extending the study of dwarf galaxies around the Milky Way below the classical dwarf regime, which was previously accessible only in field environments very different from those of the majority of the observations.
"year": 2021,
"sha1": "b555222576e3602f25c94b24187f26b153cd9728",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2008.11207",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "cfd1f876f4854964eabb7fdf2ca535f3767f7ab6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Pars plana vitrectomy in uveitis in the era of microincision vitreous surgery
Pars plana vitrectomy (PPV) in uveitis is performed for various diagnostic and therapeutic indications. With the advent of microincision vitreous surgery (MIVS), the use of PPV in uveitis has increased across a wider spectrum of indications, owing to shorter surgical time, less patient discomfort, less conjunctival scarring, and a decreased rate of complications as compared to standard 20G vitrectomy. Because of faster post-operative recovery in terms of visual improvement and reduction of inflammation, and a reduced duration of systemic corticosteroids, MIVS has gained popularity in uveitis as an adjunct to standard-of-care medical therapy. The safety and efficacy of MIVS are related to the emerging vitrectomy techniques with better and newer cutters, illuminating probes, and accessory instruments. Because of the instrumentation and fluidics of MIVS, PPV is emerging as a safe and useful alternative for diagnostic challenges in uveitis, aiding earlier diagnosis and better outcomes of inflammatory disease, even in the presence of severe and active inflammation, which was once considered a relative contraindication to vitreous surgery. However, for surgical interventions for therapeutic indications and complications of uveitis, it is advisable to achieve optimum control of inflammation for best results. Increasing reports of the use of MIVS in uveitis have led to its wider acceptance among clinicians practicing uveitis.
Uveitis encompasses a wide spectrum of intraocular inflammation, which may be exclusively limited to the eye or may occur secondary to an underlying systemic disease. The visual outcome in uveitis is variable, depending upon several factors. While some forms are self-limiting, severe forms of uveitis carry potential visual morbidity. The visual damage becomes irreversible if the disease is wrongly diagnosed, or if the treatment is delayed or inadequate. While the clinical phenotype plays the most important role in the work-up of uveitis, followed by ocular imaging, baseline laboratory investigations (immunological, serological, radiological) are often indicated to corroborate the clinical findings. These investigations may be required more extensively in cases with atypical presentations or poor response to conventional treatment, and may involve intraocular sampling (of aqueous or vitreous humor) to rule out an intraocular infection or malignancy.
Sampling of aqueous humor by anterior chamber paracentesis is indicated in infections predominantly involving the anterior segment (such as viral, fungal, tubercular, or toxoplasmic uveitis) or to study the intraocular immune reactions in various infectious and non-infectious uveitides. [1-5] It is a quick, minimally invasive surgical procedure that can be performed in the outpatient setting, and it has the advantage of being repeatable on subsequent visits. However, it provides only a small amount (about 0.1-0.15 mL) of intraocular fluid, which is its major limitation, restricting testing to only one or two assays. Moreover, in eyes with predominantly posterior uveitis or significant vitreous involvement (and minimal anterior chamber inflammation), aqueous sampling has a limited role and contributes only occasionally. [6-8] Vitrectomy, in contrast, enables a large volume of vitreous fluid to be obtained. Vitrectomy in uveitis may be indicated for both diagnostic and therapeutic purposes, to diagnose and treat several sight-threatening inflammations of the eye. [9-11]
Method of Literature Search
The PubMed and Ovid electronic databases were searched to identify potential studies for this review. The following keywords and Medical Subject Headings (MeSH) were used: "Uveitis," "Microincision Vitrectomy," and "MIVS." Detailed search criteria were: "Micro incision vitrectomy surgery" AND "Uveitis"; "Micro incision vitreous surgery" AND "Uveitis"; "MIVS" AND "Uveitis"; "Small gauge vitrectomy" AND "Uveitis"; "23G PPV" AND "Uveitis"; "25G PPV" AND "Uveitis"; "27G PPV" AND "Uveitis"; "Diagnostic PPV" AND "Uveitis"; and "Therapeutic PPV" AND "Uveitis". Filters: Humans. References of the relevant studies that were identified were also reviewed to identify other potentially related articles. Articles with non-human subjects or including cadaveric data, and articles that were not in English, were excluded. As this was a literature review and patient charts were not reviewed, there was no need for Institutional Review Board approval.
A. Pars plana vitrectomy in uveitis: Historical perspectives
The beneficial role of pars plana vitrectomy (PPV) in uveitis was established almost four decades ago, when improved visual outcomes were reported following PPV and lensectomy in uveitis. The authors postulated a therapeutic effect of removal of the vitreous gel alone. [12] Since then, PPV has been increasingly used for managing complications of uveitis with favorable outcomes. [13-18] In addition to its therapeutic effect, the benefits of conventional 20-gauge (G) PPV were later established in terms of providing intraocular samples for diagnostic testing in clinically challenging cases. [9,10] However, the invasive nature and potential adverse effects of 20G PPV (such as the risk of surgically induced intraoperative complications and postoperative exacerbation of intraocular inflammation) restricted its use as a primary intervention to very severe cases of uveitis, such as those with high suspicion of intraocular malignancy, or those with no or poor view of the retina. [19,20] Subsequently, following reports of small series indicating its usefulness, in the form of decreased inflammatory activity and fewer flare-ups of uveitis after vitreous surgery, along with visual gain as an additional benefit, the role of vitrectomy broadened. [9-11] In recent years, the advent of microincision vitreous surgery (MIVS) further addressed this concern, owing to its advantages over 20G PPV. For optimal post-operative results, quiescence of inflammation in the eye is desirable for at least three months before any elective surgery in uveitis. But PPV is often indicated, particularly for diagnostic purposes, in eyes with active disease, for which MIVS has emerged as safe and efficacious. [21-25] Its therapeutic indications also span a wide spectrum of complicated uveitis cases.
B. Microincision vitreous surgery: Historical perspectives
Since the first PPV in 1971, the three-port (vitreous cutting, infusion, and illumination) 20G vitrectomy remained the standard technique for PPV for more than two decades, until the introduction of MIVS. Following the introduction of smaller 25G instrumentation for pediatric eyes and a 23G vitrectomy probe for primary use in vitreous and retinal biopsies, [26,27] the widespread use of the 25G system began only after Fujii et al. introduced the transconjunctival 25G (0.5 mm) trocar/cannula-based instrumentation in 2002. [28] As an alternative, Eckardt developed 23G (0.7 mm) vitrectomy instrumentation in 2005. [29] MIVS further evolved with the introduction of smaller 27G (0.4 mm) instruments by Oshima in 2010. [30] The quest towards smaller instrumentation is based on the premise that smaller gauge instruments would increase the safety of vitreoretinal surgery, reduce post-operative inflammation and discomfort, and shorten the recovery time. [31] Studies have shown reduced complication rates following MIVS as opposed to 20G vitrectomy. [32,33] Thus, surgeons today routinely perform MIVS even in complex scenarios of vitreoretinal disease (such as giant retinal tears, advanced proliferative vitreoretinopathy, diabetic tractional detachment, retinopathy of prematurity, etc.) and complications of uveitis (cataract with uveitis, dense vitritis, vitreous hemorrhage, tractional or rhegmatogenous retinal detachment, and subretinal biopsy). [25,34-38] Phacofragmentation is also now possible using 23G instrumentation. Smaller ports have made it easier and safer to perform a four-port PPV, enabling the use of a chandelier in bimanual surgeries. [39] Apart from the reduction in size of the incision, there have been changes in the direction of entry, from perpendicular incisions to oblique incisions to biplanar incisions. [40,41] Valved cannulas have been developed which help to maintain a more constant intraocular pressure (IOP) throughout surgery and reduce turbulence. [42] The introduction of dual pneumatic cutters has helped achieve cutting rates of around 8000-10000 cuts per minute. Higher cut rates significantly reduce traction on the vitreous, a factor of considerable significance when dealing with uveitic eyes with active intraocular inflammation. [43] MIVS has reduced the chances of complications such as iatrogenic retinal tears. [32] There were some initial concerns about an increased rate of endophthalmitis following sutureless incisions; however, recent studies have allayed these concerns. [44] Despite a learning curve, the advantages of MIVS have outweighed the pitfalls.
C. Microincision vitreous surgery: Applications in uveitis
Prior to the era of MIVS, a 22G needle was used for performing vitreous aspiration biopsy through the limbus or pars plana to yield a large volume of vitreous, but this was associated with a high risk of retinal tear or detachment due to vitreous traction. [45] This has been overcome by the automated cutters of the vitrectomy systems, which allow controlled vitreous removal that is much less traumatic, with restoration of the ocular volume by the fluid. [46] The increased vitreoretinal adhesions in the presence of intraocular inflammation or infection predispose the eye to iatrogenic complications. Both the inflamed retina and ciliary body are avoided by the placement of cannulas in the pars plana during MIVS. Further, it facilitates smaller surgical incisions, a decreased surgical time, better control of IOP, greater maneuverability of the surgical instruments, and a good yield of the vitreous sample. For these reasons, MIVS has found wide use in uveitis, both for diagnostic as well as therapeutic purposes. While the three-port MIVS remains the standard approach for vitrectomy, a single 23G port can be safely made for obtaining an undiluted vitreous sample. [46] A fine needle aspiration cytology of retinochoroidal lesions can be performed using a two-port MIVS. [47] A chandelier light is used for illumination through one of the ports and a 24/23G needle can be introduced through the other port for obtaining a sample. The vitreous thus obtained can be subjected to cytology, interleukin assays, polymerase chain reaction (PCR) for various pathogens and even culture. The standard three-port MIVS is preferred in uveitis to clear the media opacities, reduce the load of inflammatory mediators in the vitreous cavity, obtain a retinal biopsy and increase the yield of vitreous sample for analysis. [25] Some of these specific situations include vitreo-retinal lymphoma [ Fig. 1], [48][49][50] intermediate uveitis, [51,52] amyloidosis, [53,54] sarcoidosis, [55] acute retinal necrosis, [56][57] endophthalmitis [endogenous [ Fig. 2], or exogenous], [58] intravitreal/subretinal cysticercosis [ Fig. 3], [59][60] and chronic endogenous/autoimmune uveitis. [23] Sequelae requiring MIVS once the active uveitis is over include vasculitic vitreous hemorrhage, tractional/secondary rhegmatogenous detachments, epiretinal membranes, cystoid macular edema (CME), macular hole, etc. [21,22,[61][62][63][64][65] When combined with anterior segment surgeries, such as phacoemulsification and intraocular lens implantation, or trabeculectomy, MIVS has been reported to be safe and feasible in eyes with posterior uveitis for removal of cataract and pathologic vitreous, producing visual gain without any obvious complications. [66][67][68][69]
Diagnostic MIVS in Uveitis
These include indications where sampling of vitreous is required and critical for testing, such as cases with the following features: [25] a. a strong suspicion of intraocular malignancy; [9,11,[48][49][50] b. a strong suspicion of intraocular infection where clinical clues are non-contributory; [10,16] c. intermediate, posterior or panuveitis of unknown etiology, where conventional clinical signs and laboratory tests have failed to determine the diagnosis; [13,[17][18][19]51,52] d. poor or no response to conventional treatment (antibiotic/corticosteroid/immunosuppressive agents); [19,20,23] e. dense or severe vitritis with poor or no view of the retina; [35,36,53] f. uveitis with atypical clinical features or phenotype; [19][20][21] g. acute, sight-threatening uveitis with negative laboratory investigations, to prevent irreversible visual loss; [13][14][15][16]20] h. acute or chronic endophthalmitis (exogenous or endogenous). [16,23]
Vitreous sampling
In the era of the Endophthalmitis Vitrectomy Study, a single 20G sclerotomy was described by Doft and Donnelly in 1991 for performing vitrectomy-assisted vitreous biopsy, as an alternative to needle aspiration biopsy. [70] Under direct visualization, the vitreous is collected by manual aspiration through the automated vitreous cutter. This technique yields a small vitreous sample and is not the preferred method in the MIVS era, also owing to the lack of wide-angle viewing. Collection of an undiluted vitreous sample by a three-port vitrectomy involves a risk of hypotony and choroidal hemorrhage, as the infusion is kept off to avoid dilution. To address this issue and to maintain IOP, an innovative use of perfluorocarbon liquid infusion has been described. [71] The cost of the perfluorocarbon and the need for freezing the sample for perfluorocarbon removal were the major limitations, although it provided a large amount of undiluted vitreous. A much safer and preferred method is to use continuous air infusion, which provides up to 1.5 mL of undiluted vitreous sample without any safety compromise. [72,73] However, air injection during the early stage of vitrectomy invariably leads to the fish-egg phenomenon and can compromise visibility during surgery, especially while working with phakic and pseudophakic eyes. Hence, these procedures should be conducted by experienced or trained vitreo-retinal surgeons.
While some surgeons perform an automated aspiration with the machine, many prefer manual aspiration using a syringe connected to the aspiration tube for better control. As the continuous air infusion maintains the physiologic intraocular pressure, this gives the surgeon good control during the procedure. [72] A higher duty cycle with a low cut rate maximizes the vitreous yield, as the cutter remains open for a longer time, allowing a larger bite of the vitreous. [74,75] However, it may increase the risk of iatrogenic retinal break/s due to increased traction in an already inflamed eye, globe collapse and other associated complications. It is therefore preferred to use a high cut rate and lower suction.
Once the desired amount of undiluted vitreous sample is obtained (usually about 0.5 mL, and up to about 1 mL), the fluid infusion is turned on. [73] This is followed by diluted vitreous collection and completion of vitrectomy as per the pre-operative plan. Eyes with a posterior vitreous detachment (PVD) already present fare better in terms of iatrogenic retinal breaks, as compared to those where PVD is induced during PPV. [74] The undiluted vitreous is preferred for cytology for optimal results. [76,77]
Microbiological tests
Vitreous samples have limited positivity rates on smear (66% Gram-stain positivity) and culture (44.45-66.7%) in endophthalmitis. [15,16,78] Smears provide a rapid diagnosis of an infective etiology and help in initiating specific therapy. Cultures should be declared negative only after 4-6 weeks.
Molecular tests
The PCR-based molecular diagnostics provide a rapid diagnosis and have been extremely useful in the diagnosis of viral retinitis, toxoplasmic chorioretinitis, tubercular uveitis, Propionibacterium spp., fungal/bacterial endophthalmitis, etc. [24,25,57,61,[79][80][81]
Cytopathology
A cut rate of 600 cuts per minute is helpful for obtaining a good vitreous specimen for cytological analysis for intraocular lymphoma or other malignancies. [82] The availability of an ocular pathologist is critical so that these samples can be analyzed quickly to avoid cellular degeneration. Cellular characterization by cytology is also helpful in diagnosing non-malignant conditions. [83]
Flow cytometry and immunohistochemistry
Identification of cell surface markers by fluorescence-activated cell sorters (FACS) provides additional information about cellular constituents in vitreous specimens. [84] Immunohistochemical staining for cell markers provides phenotypic characterization, and supports the cytological diagnosis of lymphoma (B cells) or non-infectious uveitis (T cells). [84]
Antibody determination
Detection of intravitreal antibodies with quantitative determination is helpful in infectious uveitis (viral, Toxoplasma gondii, etc.).
Cytokine analysis
It provides adjunctive information, especially in intraocular lymphoma. An IL10:IL6 ratio of more than 1 is considered highly suggestive of intraocular lymphoma. [82] Cytokines in the vitreous are potential biomarkers of various ocular diseases.
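For readers tabulating cytokine results, the criterion above reduces to a simple ratio test. The following snippet is purely illustrative (the function name and sample values are ours, not from any cited study) and is not a validated clinical tool:

```python
def il10_il6_suggests_lymphoma(il10_pg_ml: float, il6_pg_ml: float) -> bool:
    """True when the vitreous IL-10:IL-6 ratio exceeds 1, the threshold
    described above as highly suggestive of intraocular lymphoma."""
    return il6_pg_ml > 0 and (il10_pg_ml / il6_pg_ml) > 1.0

print(il10_il6_suggests_lymphoma(250.0, 40.0))   # True: ratio ~6.3
```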
Chorioretinal biopsy
The paucity of data on chorioretinal biopsies (CRB) reflects the rarity of its use. This is due to the complex nature of the surgical procedure, with risk of serious complications (like vitreous hemorrhage, suprachoroidal hemorrhage, and retinal detachment), and the fact that a diagnosis is often possible with less invasive techniques. [85,86] Nonetheless, in select situations a CRB may be necessary, namely: [85,86] 1. To exclude intraocular neoplasm (masquerade syndrome, e.g., intraocular lymphoma, choroidal metastasis) 2. Progressive sight-threatening retinal or choroidal lesions unresponsive to therapy 3. To identify the causative organism/neoplasm in an immunocompromised patient with uveitis 4. Sight-threatening involvement of the second eye despite treatment 5. Negative vitreous analysis after multiple diagnostic biopsies/vitrectomies.
20G PPV has been the preferred approach for performing CRB for many years, with very few studies on CRB using the MIVS platform. [86][87][88][89][90] Use of 27G PPV for CRB has been shown to yield positive diagnostic results in about 89% of cases if the lesion size was larger than 0.8 mm. [87,89] Recently, intra-operative optical coherence tomography has been shown to potentially improve the diagnostic yield of CRB by providing real-time information on the biopsy site and depth. It also helps to examine the margins of the biopsy site at the end of surgery, thereby ensuring complete retinal attachment. [88]
Therapeutic MIVS in Uveitis
Clearing of media (vitreous) opacities and improvement in visual acuity are the main goals of therapeutic vitrectomy in uveitis. A significant improvement has been reported following MIVS in terms of vitreous haze (as early as the next postoperative day), [25] and in posterior as well as anterior segment inflammation in sarcoidosis (at one week and one month, respectively). [22] Visual benefits following MIVS have been reported by the majority of studies as early as the next postoperative day, and at all subsequent visits. [9,10,12,19,21,22,91] Multiple factors (debulking of inflamed and opacified vitreous, use of concomitant corticosteroids for uveitis, reduction of CME and combined cataract removal) play a role in the visual gain after MIVS.
As removal of vitreous (using any gauge) does reduce inflammation, a recent study has reported clinical resolution (as well as angiographic evidence) of focal posterior segment lesions in eyes undergoing MIVS. [91] Although the precise mechanism is not known, the decrease in inflammation may be attributed to removal of infectious antigens and inflammatory mediators (cytokines/chemokines) by vitreous debulking. In intermediate uveitis, PPV has been proposed as a valuable alternative to medical therapy. [52] While the majority of studies have reported benefits in terms of resolution of CME, development of a new episode of CME after MIVS has been reported in chronic endogenous/autoimmune uveitis. [23] The need for systemic corticosteroids/immunosuppressive therapy in uveitis has decreased following MIVS, avoiding the secondary complications arising from these drugs. [12,25,52] Oahalou et al. reported that preoperative immunosuppressive therapy could be stopped in 44% of patients following PPV. [19] Preoperative oral corticosteroids could be tapered to a low dose or altogether stopped in 67.8% of eyes. [25] In eyes with recalcitrant intermediate uveitis, long-term resolution of inflammation was seen in 82% of eyes undergoing PPV as compared to 43% of eyes receiving immunomodulatory therapy, which ultimately required PPV. [52] Combining MIVS with lensectomy in eyes with cyclitic membranes, such as those in pediatric uveitis or chronic uveitis [ Fig. 4], further suppresses the immune activity in the vitreous cavity, possibly by clearing the inflammatory debris through the trabecular meshwork. [92] The potential benefits of MIVS in uveitis, against a low risk of major complications related to surgery, have encouraged surgeons to perform an early vitrectomy as a prophylactic measure in a number of conditions. While the earlier reports showed mixed efficacy, Huang et al. reported a reduced rate of retinal detachments in eyes with acute retinal necrosis that underwent an "early MIVS within 30 days" (25%) versus those with "no early vitrectomy" (59%). [93] On the other hand, Liu et al. reported that prophylactic PPV did not improve visual outcome or reduce the rates of recurrent retinal detachments in eyes with acute retinal necrosis. [94] Eyes with long-standing vitreoretinal or choroidal inflammation develop irreversible structural damage in the form of fundus scarring or foveal atrophy. An early intervention by MIVS may reduce the extent of this damage by reducing the severity of inflammation. Further, the adjunctive use of intravitreal or sub-Tenon steroid injections, with their potential complications, may be limited by an early vitrectomy.
When compared with 20G PPV, MIVS offers an added advantage in glaucomatous eyes by preserving the filtration blebs of a previous surgery or by reducing the conjunctival scar formation for a future possible filtering surgery. [54,95] The large sclerotomy incision of 20G PPV produces scleral and conjunctival scarring. An improved fluidic system in MIVS reduces the rate of intraoperative bleeding, and is particularly helpful in eyes with fibrovascular proliferations.
MIVS in Pediatric Uveitis
The use of PPV in pediatric patients with uveitis is limited and is often considered the last therapeutic option (following conventional corticosteroids and immunosuppressants) due to the high rates of complications and the need for general anesthesia. Giuliari et al. compared the safety and efficacy of 20G PPV (done in 68% of study eyes) with 25G PPV (in 32% of study eyes) in chronic pediatric uveitis. [96] Two eyes in the 20G PPV group developed intra-operative retinal tears. None of the eyes in the 25G PPV group developed intra- or post-operative complications, and none required additional sutures to close sclerotomies. They concluded that PPV is safe and effective in chronic pediatric uveitis, and the profile of complications is comparable to that in the adult population. [96] Indications for PPV in pediatric uveitis include: 1. Intermediate uveitis-recalcitrant CME, dense vitreous opacities, epiretinal membrane, vitreous hemorrhage, to reduce the dose of systemic immunosuppressive therapy; [96,97] 2. Uveitic hypotony; [98] 3. Severe uveitic cataract with associated complications like small pupil, hypotony, etc.; [99,100] 4. Ocular toxocariasis (OT); [101] 5. Endophthalmitis-traumatic, endogenous. [96] In patients with intermediate uveitis, MIVS has been shown to be beneficial for chronic resistant inflammation, CME, dense vitreous hemorrhage, tractional/rhegmatogenous retinal detachment, epiretinal membranes, and for reducing the dose or number of systemic immunosuppressive agents. [97] A relatively early PPV is recommended in pediatric uveitis with CME not responding to systemic immunosuppressive therapy. [97] Hypotony is seen in about 10% of patients with juvenile idiopathic arthritis-related uveitis and needs lensectomy, vitrectomy, and cyclitic membrane removal, with/without long-term 5000-centistoke silicone oil tamponade. [98] In severe ocular complications of juvenile idiopathic arthritis, an extensive PPV (25G) with cataract extraction can produce a significant improvement in visual acuity. [99,100] In ocular toxocariasis, 23G or 25G PPV improved the visual outcome, although with a guarded prognosis. [101]
Complications/Limitations
Because of the sutureless nature of MIVS, postoperative complications such as wound leak, hypotony, endophthalmitis, choroidal detachment, and choroidal hemorrhage have been a major concern. [102] An overall complication rate of 54% has been reported for 20G PPV in uveitis (hypotony 2%, vitreous hemorrhage 2%, retinal detachment 2%, epiretinal membrane 7%, and cataract 51%). [19] In the early postoperative period of MIVS, transient hypotony is common and most cases recover spontaneously. [25,103] Complications related to hypotony, secondary to sclerotomy leak, are largely due to faulty surgical techniques. However, extreme hypotony may occur, needing intensive steroids or re-suturing of the scleral ports. [21] It may infrequently cause hemorrhagic choroidal detachment, a devastating complication that requires a repeat PPV for suprachoroidal drainage. [25] To avoid this complication, at the conclusion of surgery one must ensure that the sclerotomies are not leaking. If needed, it is advisable to suture the sclerotomies, especially in the pediatric age group.
Conclusion
Besides being the standard of care in vitreoretinal (non-inflammatory) pathologies requiring PPV, MIVS has gained popularity in uveitis due to shorter surgical time, less patient discomfort, faster post-operative recovery in terms of visual improvement and reduction of inflammation, and reduced duration of systemic corticosteroids. The emerging vitrectomy techniques of MIVS (better and newer cutters, illuminating probes, and accessory instruments) have enabled safer surgeries and widened the indications for vitrectomy in uveitis, both for diagnostic and therapeutic purposes. As compared to the pre-MIVS era, the use of PPV in uveitis has increased manifold. The instrumentation and fluidics of MIVS have largely driven the favorable outcomes of vitrectomy in uveitis, making PPV a safe and useful alternative aiding in earlier diagnosis and better outcomes of inflammatory disease. The increasing reports of the use of MIVS in uveitis have led to its wider acceptance among clinicians practicing uveitis.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
"year": 2020,
"sha1": "34b51a5e3cc64f044c87b093524c76aaaa85066c",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ijo.ijo_1625_20",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f4f436dec8ec5f34b56c92aad3dcceaa15032fb9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Viscous Dark Energy in $f(T)$ Gravity
We study bulk viscosity with dust matter in generalized teleparallel gravity. We consider different dark energy models in this scenario, along with a time-dependent viscous model, to construct the viscous equation of state parameter for these dark energy models. We discuss the graphical representation of this parameter to investigate the viscosity effects on the accelerating expansion of the universe. We find that the behavior of the universe depends upon the viscous coefficients, showing a transition from the decelerating to the accelerating phase. This leads to the crossing of the phantom divide line, and the universe becomes phantom dominated for specific ranges of these coefficients.
Introduction
Dark energy (DE) seems to play the role of an agent that drives the present acceleration of the universe with the help of a large negative pressure. An effective viscous pressure can also contribute to the dynamical history of an expanding universe [1]-[4]. It is found [4] that viscosity effects are viable at low redshifts, yielding, for suitable viscosity coefficients, the negative pressure required for the cosmic expansion. In general, the universe inherits dissipative processes [5], whereas a perfect fluid is an ideal fluid with zero viscosity.
Nevertheless, a perfect fluid is mostly used to model the idealized distribution of matter in the universe. This fluid in equilibrium generates no entropy and no frictional heat because its dynamics is reversible and without dissipation. The dissipative processes mostly include bulk and shear viscosities. The bulk viscosity is associated with an isotropic universe, whereas the shear viscosity works with anisotropy of the universe. The CMBR observations indicate an isotropic universe, so the bulk viscosity is retained while the shear viscosity is neglected [6]. Long before the direct observational evidence through the SN Ia data, the indication of a viscosity-dominated late epoch of accelerating expansion of the universe had already been mentioned [7].
The origin of the bulk viscosity in a physical system lies in its deviations from local thermodynamic equilibrium. Thus the existence of bulk viscosity may give rise to the accelerating expansion of the universe, due to the collection of those states which are not in thermal equilibrium for a small fraction of time [8]. These states are the consequence of fluid expansion (or contraction). The system does not have enough time to restore its equilibrium position; hence an effective pressure takes part in restoring the system to its thermal equilibrium. The measure of this effective pressure is the bulk viscosity, which vanishes when the system restores its equilibrium [9]-[12]. So, it is natural to assume the existence of a bulk viscous coefficient in a more realistic description of the accelerated universe today.
Physically, the bulk viscosity is considered as an internal friction due to the different cooling rates in an expanding gas. Its dissipation reduces the effective pressure in an expanding fluid by converting kinetic energy of the particles into heat. Thus, it is natural to think of the bulk viscous pressure as one of the possible mechanisms that can accelerate the universe today. However, this idea needs a viable mechanism for the origin of the bulk viscosity, although there are many proposed best-fit models.
Many models have been suggested to discuss the vague nature of DE. During the last decade, the holographic dark energy (HDE), new agegraphic dark energy (NADE), their entropy corrected versions and their correspondence with other DE models have received a lot of attention. The HDE model is based on the holographic principle, which states that the number of degrees of freedom in a bounded system should be finite and has a relationship with the area of its boundary [13]. Moreover, in order to reconcile the validity of an effective local quantum field theory, Cohen et al. [14] provided a relationship between the ultraviolet (UV) and the infrared (IR) cutoffs on the basis of the limit set by the formation of a black hole. This is given by [15,16]

$$\rho_\Lambda = 3c^2 M_p^2 L^{-2}, \qquad (1)$$

where the constant $3c^2$ is used for convenience, $M_p^2 = (8\pi G)^{-1}$ is the reduced Planck mass and L is the IR cutoff. This model has been tested against different kinds of astronomical observations [17]-[20]. Also, it has been discussed widely in various frameworks, such as general relativity, modified theories of gravity and extra dimensional theories [21]-[29].
The NADE model was developed in view of the Heisenberg uncertainty principle together with general relativity. This model exhibits that DE originates from the spacetime and matter field fluctuations in the universe. In this model, the length measure is taken as the conformal time instead of the age of the universe, and its energy density is $\rho_\Lambda = 3n^2/(\kappa^2\eta^2)$, where η is the conformal time. The causality problem occurs in the usual HDE model, while it is avoided here. Many people have explored the viability of this model through different observations [17]-[20,30].
Another proposal to discuss the accelerating universe is the modified gravity theories [31]. The f(T) gravity is the generalization of teleparallel gravity, obtained by replacing the torsion scalar T with a differentiable function f(T), with action given by

$$S = \frac{1}{2\kappa^2}\int d^4x\, e\, f(T), \qquad (2)$$

where κ is the coupling constant and $e = \sqrt{-g}$. This leads to second order field equations, formed by using the Weitzenböck connection, which has no curvature but only torsion. The equation of state (EoS) parameter, ω = p/ρ, is used to explore the cosmic expansion. Bengochea and Ferraro [32] tested a power-law f(T) model for the accelerated expansion of the universe. They performed observational viability tests and concluded that this model exhibits radiation, matter and DE dominated phases. Incorporating an exponential model along with the power-law model, Linder [33] investigated the expansion of the universe in this theory. He observed that the power-law model depends upon its parameter, while the exponential model acts like the cosmological constant at high redshift. Bamba et al. [34] discussed the EoS parameter for exponential, logarithmic as well as combinations of these f(T) models, and they concluded that the crossing of the phantom divide line is observed in the combined model only. Karami and Abdolmaleki [35] constructed this parameter for HDE, NADE and their entropy corrected models in the framework of f(T) gravity. They found that the universe lies in the phantom or quintessence phase for the first two models, whereas phantom crossing is achieved in the entropy corrected models. Sharif and Rani [36] described the graphical representation of k-essence in this modified gravity with the help of the EoS parameter. Some other authors [37,38] explored the expansion of the universe with different techniques in f(T) gravity. Also, the effects of viscous fluid in modified gravity theories [39]-[41] have been analyzed to display accelerating expansion.
In this paper, we construct the viscous EoS parameter for different viable DE models in the framework of f(T) gravity with pressureless matter. For this purpose, we consider a time-dependent viscous model, with its constant viscous reduction, to explore the DE era in a general fluid. The graphical behavior indicates the acceleration of the universe for suitable viscous coefficients. The scheme of the paper is as follows: section 2 provides the basic formalism and a discussion of the field equations of f(T) gravity. In section 3, the viscous EoS parameter is constructed for different DE models. Also, we discuss the graphical behavior of this parameter for these models. The last section summarizes the results.
The Field Equations
The f(T) theory of gravity (as the generalization of teleparallel gravity) is uniquely determined by the tetrad field $h^i_{\ \mu}(x)$ [42]. It is an orthonormal set of four-vector fields defined on a Lorentzian manifold. The metric and tetrad fields are related as

$$g_{\mu\nu} = \eta_{ij}\, h^i_{\ \mu} h^j_{\ \nu}, \qquad (3)$$

where $\eta_{ij} = \mathrm{diag}(1, -1, -1, -1)$ is the Minkowski metric for the tangent space. Here we use Greek alphabets (µ, ν, ρ, ... = 0, 1, 2, 3) to denote spacetime components, while Latin alphabets (i, j, k, ... = 0, 1, 2, 3) are used to describe components of the tangent space. The non-trivial tetrad field $h_i$, yielding non-zero torsion, can be written as

$$h_i = h_i^{\ \mu}\partial_\mu, \qquad (4)$$

satisfying the following properties

$$h^i_{\ \mu} h_i^{\ \nu} = \delta^\nu_\mu, \qquad h^i_{\ \mu} h_j^{\ \mu} = \delta^i_j. \qquad (5)$$

The variation of Eq. (2) with respect to the tetrad field leads to the following field equations [37,44]

$$\left[e^{-1}\partial_\mu\!\left(e\, h_i^{\ \rho} S_\rho^{\ \mu\nu}\right) - h_i^{\ \lambda} T^\rho_{\ \mu\lambda} S_\rho^{\ \nu\mu}\right] f_T + h_i^{\ \rho} S_\rho^{\ \mu\nu}(\partial_\mu T) f_{TT} + \frac{1}{4} h_i^{\ \nu} f = \frac{\kappa^2}{2} h_i^{\ \rho}\,\mathcal{T}_\rho^{\ \nu}, \qquad (6)$$

where $f_T = df/dT$ and $f_{TT} = d^2f/dT^2$. The torsion scalar is defined as

$$T = S_\rho^{\ \mu\nu} T^\rho_{\ \mu\nu}, \qquad (7)$$

where $S_\rho^{\ \mu\nu}$ and the torsion tensor $T^\rho_{\ \mu\nu}$ are given as follows

$$T^\rho_{\ \mu\nu} = h_i^{\ \rho}\left(\partial_\mu h^i_{\ \nu} - \partial_\nu h^i_{\ \mu}\right), \qquad (8)$$

$$S_\rho^{\ \mu\nu} = \frac{1}{2}\left(K^{\mu\nu}_{\ \ \rho} + \delta^\mu_\rho T^{\theta\nu}_{\ \ \ \theta} - \delta^\nu_\rho T^{\theta\mu}_{\ \ \ \theta}\right), \qquad K^{\mu\nu}_{\ \ \rho} = -\frac{1}{2}\left(T^{\mu\nu}_{\ \ \rho} - T^{\nu\mu}_{\ \ \rho} - T_\rho^{\ \mu\nu}\right), \qquad (9)$$

which are antisymmetric. The energy-momentum tensor for a perfect fluid is

$$\mathcal{T}_{\mu\nu} = (\rho + p)u_\mu u_\nu - p\, g_{\mu\nu}, \qquad (10)$$

where $u_\nu$ is the four-velocity in comoving coordinates, and ρ and p denote the total energy density and pressure of the fluid inside the universe. The flat, homogeneous and isotropic FRW universe is described by

$$ds^2 = dt^2 - a^2(t)\left(dx^2 + dy^2 + dz^2\right), \qquad (11)$$

where a(t) is the scale factor such that a(t) = 1/(1 + z) in terms of the redshift z. The corresponding tetrad components are [34]-[38]

$$h^i_{\ \mu} = \mathrm{diag}(1, a, a, a), \qquad (12)$$

which obviously satisfies Eq. (5). Using Eqs. (7) and (12), the torsion scalar turns out, in terms of the Hubble parameter H, to be

$$T = -6H^2, \qquad H = \frac{\dot{a}}{a}. \qquad (13)$$

The corresponding modified Friedmann equations become

$$12H^2 f_T + f = 2\kappa^2\rho, \qquad (14)$$

$$48H^2\dot{H} f_{TT} - (12H^2 + 4\dot{H}) f_T - f = 2\kappa^2 p. \qquad (15)$$

For a realistic model, we take a viscosity term which introduces an effective pressure in the energy-momentum tensor [45], i.e.,

$$\bar{p} = p - 3\xi(t)H, \qquad (16)$$

where ξ is the time-dependent bulk viscosity function. To avoid the violation of the second law of thermodynamics, ξ(t) > 0. The field equations (14) and (15) may be rewritten as

$$H^2 = \frac{\kappa^2}{3}(\rho_m + \rho_T), \qquad (17)$$

$$\dot{H} = -\frac{\kappa^2}{2}(\rho_m + \bar{p}_m + \rho_T + p_T). \qquad (18)$$

We assume here pressureless (dust) matter, i.e., $p_m = 0$, and the expressions for the torsion contributions $\rho_T$, $p_T$ and the effective pressure become

$$\rho_T = \frac{1}{2\kappa^2}\left(2T f_T - f - T\right), \qquad (19)$$

$$p_T = -\frac{1}{2\kappa^2}\left[-8\dot{H}T f_{TT} + (2T - 4\dot{H})f_T - f + 4\dot{H} - T\right], \qquad p_{eff} = p_T - 3\xi(t)H. \qquad (20)$$

It is noted that if we insert f(T) = T in Eq. (17) for the non-viscous case, we arrive at the usual Friedmann equations of general relativity. The corresponding viscous EoS parameter becomes

$$\omega_{eff} = \frac{p_T - 3\xi(t)H}{\rho_m + \rho_T}. \qquad (21)$$

The phantom and quintessence regions are mostly described with the help of a constant EoS parameter: −1 < ω < −1/3 corresponds to the quintessence era, whereas the phantom era is referred to ω < −1, and the phantom divide line is given by ω = −1. If we consider a torsion dominated universe, then Eq. (14) reduces to

$$\rho_T = \frac{3H^2}{\kappa^2}. \qquad (22)$$

Inserting the above value in the energy conservation equation for torsion, it follows that ($\omega_{eff} \rightarrow \omega_T$)

$$\omega_T = -1 - \frac{2\dot{H}}{3H^2} + \frac{\kappa^2\xi(t)}{H}. \qquad (23)$$

The EoS parameter $\omega_T$ describes a vacuum, phantom dominated or quintessence dominated universe for $\dot{H} = \frac{3}{2}H\kappa^2\xi(t)$, $\dot{H} > \frac{3}{2}H\kappa^2\xi(t)$ or $\dot{H} < \frac{3}{2}H\kappa^2\xi(t)$, respectively, in the viscous case. For the non-viscous case (ξ(t) = 0), these conditions reduce to $\dot{H} = 0$, $\dot{H} > 0$ or $\dot{H} < 0$.
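As a quick consistency check of the torsion contributions above, Eqs. (19)-(21) can be evaluated symbolically for an arbitrary f(T). The following sketch is our illustration (not the authors' code); the helper name torsion_quantities is ours, κ is kept symbolic, and dust matter is assumed throughout:

```python
import sympy as sp

T, Hdot, xi, kappa = sp.symbols('T Hdot xi kappa', real=True)
H2 = -T / 6                                   # torsion scalar of flat FRW: T = -6 H^2

def torsion_quantities(f):
    """Torsion energy density, pressure and viscous EoS parameter,
    following Eqs. (19)-(21) with dust matter (p_m = 0)."""
    fT, fTT = sp.diff(f, T), sp.diff(f, T, 2)
    rho_T = (2 * T * fT - f - T) / (2 * kappa**2)                        # Eq. (19)
    p_T = -(-8 * Hdot * T * fTT + (2 * T - 4 * Hdot) * fT
            - f + 4 * Hdot - T) / (2 * kappa**2)                         # Eq. (20)
    p_eff = p_T - 3 * xi * sp.sqrt(H2)                                   # Eq. (16)
    w_eff = sp.simplify(p_eff * kappa**2 / (3 * H2))  # rho_m + rho_T = 3H^2/kappa^2
    return rho_T, p_T, w_eff

# Consistency check: f(T) = T must give vanishing torsion contributions (GR limit).
rho_T, p_T, _ = torsion_quantities(T)
print(sp.simplify(rho_T), sp.simplify(p_T))   # prints: 0 0
```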
Viscous Fluid and Dark Energy Models
Viscous models have interesting implications for the evolution of the expanding universe. Here we consider a simple time-dependent bulk viscous model as follows [46,47]

$$\xi(t) = \xi_0 + \xi_1\frac{\dot{a}}{a} + \xi_2\frac{\ddot{a}}{\dot{a}}, \qquad (24)$$

where ξ₀, ξ₁ and ξ₂ are positive coefficients. The cosmological evolution can be explored for different values of these coefficients [47]-[49]. This bulk viscosity model is motivated by the terms involved, i.e., the viscosity is related to the velocity and the acceleration, which enter the phenomenon of scalar expansion in fluid dynamics. The viscous model having a constant ξ₀ and the velocity term ȧ is discussed in [46]; thus a linear combination of these two with the acceleration term ä may give more physical results. In general, the existence of viscosity coefficients in a fluid is due to the thermodynamic irreversibility of the motion. If the deviation from reversibility is small, the momentum transfer between various parts of the fluid can be taken to be linearly dependent on the velocity derivatives. This case corresponds to the constant viscous model. When the viscosity is proportional to the Hubble parameter, the momentum transfer involves second order quantities in the deviation from reversibility, leading to more physical results. Proper choices of the coefficients may lead to the crossing of the phantom divide line.
To determine the evolution of the effective EoS parameter incorporating the f(T) and viscous models, we assume the Hubble parameter in the form [50,51]

$$H = \frac{h}{(t_s - t)^\gamma}. \qquad (25)$$

Here h and $t_s$ are positive constants, the constant γ is either positive or negative, and t < $t_s$ is guaranteed for the accelerated expansion of the universe due to the violation of the strong energy condition (ρ + 3p ≥ 0). For γ = 1, it leads to the scale factor $a(t) = a_0(t_s - t)^{-h}$, which ends the universe with a finite future-time Big Rip singularity. Using Eq. (25) with γ = 1, the torsion scalar becomes $T = -6h^2/(t_s - t)^2$, with $t_s - t = (z + 1)^{1/h}$. Also, taking this value of H, the energy conservation equation $\dot{\rho}_m + 3H\rho_m = 0$ for dust matter yields the solution

$$\rho_m = \rho_{m0}(t_s - t)^{3h}, \qquad (26)$$

where $\rho_{m0}$ is an arbitrary constant.
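For the background of Eq. (25) with γ = 1, the viscous model (24) takes a simple closed form, since ȧ/a = h/(t_s − t) and ä/ȧ = (h + 1)/(t_s − t). A minimal numerical sketch (our illustration; h and t_s are arbitrary, while ξ₀ = 0.005 and ξ₁ = ξ₂ = 0.1 are the values used below in the discussion of Figure 1):

```python
import numpy as np

def xi_of_t(t, xi0, xi1, xi2, h=1.0, ts=10.0):
    # For a(t) = a0 (ts - t)^(-h): adot/a = h/(ts - t), addot/adot = (h + 1)/(ts - t)
    return xi0 + xi1 * h / (ts - t) + xi2 * (h + 1.0) / (ts - t)

t = np.linspace(0.0, 9.0, 4)        # keep t < ts, i.e. before the Big Rip
print(xi_of_t(t, xi0=0.005, xi1=0.1, xi2=0.1))
```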
In the following, we discuss three DE models, taking into account the viscosity.
The First Model
First, we consider the following DE model [35,52]

$$f(T) = \alpha T + \beta\sqrt{-T}, \qquad (27)$$

where α and β are constants. For β = 0, this model leads to teleparallel gravity. It is interesting to note that the model (27) is the result of the correspondence between the energy densities of f(T) and the HDE model. In the flat FRW universe, the IR cutoff L in Eq. (1) becomes the future event horizon $R_h = a\int_t^\infty dt'/a(t')$, resulting in the HDE energy density. Using Eq. (25) in the correspondence, α takes the form

$$\alpha = 1 - \frac{c^2(1+h)^2}{h^2}, \qquad (28)$$

and β is an integration constant. Replacing the f(T) and viscous models in Eq. (21), we obtain the viscous EoS parameter for this model (Eq. (29)). The graphical behavior of the time-dependent viscous EoS parameter with respect to redshift is shown in Figure 1. We draw this parameter taking arbitrary values of the coefficients (ξ₀, ξ₁, ξ₂) of the viscous model; α depends upon the constant c, which is 0.818 for the flat model [35]. Also, we fix the redshift range from 0 to 5 to discuss the behavior of the universe at low redshifts. The left graph in Figure 1 shows the evolution of the universe initially from the matter dominated era for higher values of z, converging to the quintessence era at z = 1.6 for ξ₀ = 0.005 and (ξ₁, ξ₂) = 0.1. The phantom divide line is crossed by ω_eff as z approaches zero. By decreasing ξ₁ and ξ₂ from 0.1, the universe remains in the phantom dominated era (shown in the right graph).
For the constant viscous case, we take ξ₁ = 0 = ξ₂ in Eq. (24); the constant viscous EoS parameter then follows from Eq. (29) with ξ(t) = ξ₀ (Eq. (30)). Figure 2 represents the same behavior as indicated by the time-dependent viscous EoS parameter. However, the phantom crossing for the constant viscous coefficient occurs at ξ₀ = 0.82, and the universe shows phantom behavior for ξ₀ < 0.82 (right graph).
The Second Model
Assuming the exponential f(T) model [53,54]

$$f(T) = Te^{bT}, \qquad (31)$$

the viscous EoS parameter again follows from Eq. (21) (Eq. (32)). Figure 3 represents the graphical behavior of the time-dependent viscous ω_eff versus z. In the left graph, the plot shows the evolution of the universe from the matter to the DE phase for higher values of redshift, approximately for z > 2.37. At z = 2.37, for the particular values ξ₀ = 0.005 and (ξ₁, ξ₂) = 0.2, the EoS parameter indicates the quintessence era and approaches −1 as z → 0. As we decrease the values of ξ₁, ξ₂, ω_eff represents the phantom era of the universe. For the constant viscous EoS parameter, we take (ξ₁, ξ₂) = 0 in Eq. (32), which yields Eq. (33). Its plot versus z, shown in Figure 4, displays the same behavior as the time-dependent case. Approximately, the universe meets the quintessence era at z < 2.1 and converges towards ω_eff = −1 as z approaches zero (left graph). In the right graph, the evolution of the EoS parameter represents the phantom era of the universe for z ≤ 0.5 upon decreasing the value of ξ₀, i.e., for ξ₀ < 1.6.
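Since the exponential model (31) is given explicitly, the corresponding ω_eff can be generated symbolically from Eqs. (16) and (19)-(21). A self-contained sketch (ours, not the authors' code; κ and b are kept symbolic):

```python
import sympy as sp

T, Hdot, xi, kappa, b = sp.symbols('T Hdot xi kappa b', real=True)
H2 = -T / 6                                            # from T = -6 H^2
f = T * sp.exp(b * T)                                  # exponential model (31)
fT, fTT = sp.diff(f, T), sp.diff(f, T, 2)
p_T = -(-8 * Hdot * T * fTT + (2 * T - 4 * Hdot) * fT
        - f + 4 * Hdot - T) / (2 * kappa**2)           # Eq. (20)
p_eff = p_T - 3 * xi * sp.sqrt(H2)                     # Eq. (16) with p_m = 0
w_eff = sp.simplify(p_eff * kappa**2 / (3 * H2))       # Eq. (21): rho_m + rho_T = 3H^2/kappa^2
print(w_eff)
```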
The Third Model
Finally, we take a model which includes linear and nonlinear terms of the torsion scalar (Eq. (34)), where ǫ and γ are constants. Similar to the first model (27), this model comes through the correspondence between the energy densities of f(T) and the NADE model, which fixes one of the constants (Eq. (35)); here n = 2.716 for the flat universe. Replacing Eq. (34) in (21), the viscous EoS parameter for this model follows (Eq. (36)). The graphical behavior of the time-dependent viscous ω_eff is given in Figure 5. Initially, it shows the deceleration phase (ω_eff > −1/3) of the universe for higher values of z. As we decrease the value of the redshift down to 0.4, it meets the quintessence region for the particular values ξ₀ = 0.05 and (ξ₁, ξ₂) = 4.2, and the crossing of the phantom divide line takes place as z tends to zero. The right graph indicates that the universe remains in this era for (ξ₁, ξ₂) < 4.2. The constant viscous model for this case follows by setting ξ₁ = 0 = ξ₂ (Eq. (37)). Figure 6 shows its plot versus redshift. It provides the crossing of the phantom divide line for the rather high value ξ₀ = 32, whereas ξ₀ ≤ 32 corresponds to the phantom region for decreasing z in the accelerating expansion of the universe.
Outlook
Viscous models have been discussed in the cosmological evolution of the universe, in contrast to the ideal perfect fluid. The shear viscosity term vanishes when a completely isotropic universe is assumed, and only the bulk viscosity contributes, providing the negative pressure required for the accelerating universe. In this paper, we have considered viscosity, taking dust matter, in the framework of f(T) gravity. We have taken three different viable DE models and a time-dependent viscous model to construct the viscous EoS parameter for these models. The graphical representation is also developed by considering arbitrary values of the coefficients in the viscous model for a specific expression of the Hubble parameter. The results, and the comparison with the non-viscous case, are given as follows.
All three models in the viscous fluid indicate the evolution of the universe from the matter dominated phase to the quintessence era, and then convergence to the phantom era of the DE dominated phase for decreasing z. The universe becomes phantom for particular values of the viscous coefficients. The constant viscous cases also exhibit phantom behavior. The non-viscous case ξ = 0 shows a universe which always stays in the phantom region for h > 0 or in the quintessence region for h > 1 [35]. However, the third model results in the phantom phase of the universe for higher values of the viscous coefficients as compared to the first and second f(T) models. In each case, the time-dependent model shows the phantom crossing for small values of the viscous coefficients, while the constant viscous case needs higher values for the crossing.
The combination of torsion and viscosity influences the accelerating expansion of the universe in such a way that it strictly depends upon the viscous coefficients of the model. We have to fix the ranges of these coefficients in order to get the desired results. We conclude that the viscosity model leads to different behaviors of the accelerating universe in the DE era under the effects of the viscous fluid. On the other hand, viscosity may result in the crossing of the phantom divide line and a phantom dominated universe [6,39,55], as shown in Figures 1 and 6. In the non-viscous case [35], the universe remains in the phantom and quintessence eras for the relevant scale factors. Beyond the ideal situation, we remark that the DE era of the universe in a real fluid may be observed and hence the accelerating expansion of the universe is achieved.
"year": 2014,
"sha1": "9c26e07e14a69aab99063202ed717f16782ae76b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1405.5232",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9c26e07e14a69aab99063202ed717f16782ae76b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
The Density Distribution in Turbulent Bi-stable Flows
We numerically study the volume density probability distribution function (n-PDF) and the column density probability distribution function (Σ-PDF) resulting from thermally bistable turbulent flows. We analyze three-dimensional hydrodynamic models in periodic boxes of 100 pc on a side, where turbulence is driven in Fourier space at a wavenumber corresponding to 50 pc. At low densities (n ≤ 0.6 cm⁻³) the n-PDF is well described by a lognormal distribution for average local Mach numbers ranging from ∼0.2 to ∼5.5. As a consequence of the non-linear development of thermal instability (TI), the logarithmic variance of the distribution for the diffuse gas increases with M faster than in the well known isothermal case. The average local Mach number for the dense gas (n ≥ 7.1 cm⁻³) goes from ∼1.1 to ∼16.9, and the shape of the high density zone of the n-PDF changes from a power-law at low Mach numbers to a lognormal at high M values. In the latter case the width of the distribution is smaller than in the isothermal case and grows more slowly with M. At high column densities the Σ-PDF is well described by a lognormal for all the Mach numbers we consider and, due to the presence of TI, the width of the distribution is systematically larger than in the isothermal case but follows a qualitatively similar behavior as M increases. Although a relationship between the width of the distribution and M can be found for each one of the cases mentioned above, these relations are different from those of the isothermal case.
Introduction
The density distribution, and particularly the density Probability Distribution Function (PDF), has become a crucial ingredient in theories of molecular cloud and core formation and evolution and in star formation theories (e.g. Krumholz & McKee 2005; Elmegreen 2002, 2008; Hennebelle & Chabrier 2008; Elmegreen 2011; Padoan & Nordlund 2011; Zamora et al. 2012).
The volume density PDF in turbulent compressible flows has been widely studied for the isothermal case. In particular, numerical experiments of driven and decaying turbulence (Vázquez-Semadeni 1994; Padoan et al. 1997; Stone et al. 1998; Klessen 2000; Ostriker et al. 2001; Boldyrev et al. 2002; Beetz et al. 2008; Federrath et al. 2008) have shown that this distribution has a lognormal shape in a large number of situations. From the theoretical point of view, the development of a lognormal distribution for isothermal flows has been explained as a consequence of the multiplicative central limit theorem, assuming that individual density perturbations are independent and random (Vázquez-Semadeni 1994; Passot & Vázquez-Semadeni 1998; Nordlund & Padoan 1999). Furthermore, Passot & Vázquez-Semadeni (1998) showed that the density PDF develops a power-law tail at high (low) densities for values of the polytropic index γ smaller (larger) than 1. Also for the isothermal case, numerical simulations at a fixed rms Mach number M show an empirical relationship between the width of the density PDF, as measured by its variance or its standard deviation σ, and M (e.g. Padoan et al. 1997; Passot & Vázquez-Semadeni 1998). For the distribution of $\ln(n/\bar{n})$ this relationship reads

$$\sigma_s^2 = \ln(1 + b^2\mathcal{M}^2), \qquad (1)$$

where b is a constant of the order of unity whose value is not clearly established. In the literature it goes from 0.26 (Kritsuk et al. 2007) to 1 (Passot & Vázquez-Semadeni 1998) and seems to vary depending on the relative degree of compressible and solenoidal modes of the forcing (Federrath et al. 2008, 2010). Recently, Price et al. (2011) found that for M up to 20, b = 1/3 is a good fit to numerical simulations with solenoidal forcing, in agreement with other recent numerical results (Federrath et al. 2008, 2010; Lemaster & Stone 2008).
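As a quick numerical illustration of equation (1), the sketch below (ours) evaluates σ_s over a range of Mach numbers for the b values quoted above:

```python
import numpy as np

def sigma_s(M, b):
    # isothermal relation (1): sigma_s^2 = ln(1 + b^2 M^2)
    return np.sqrt(np.log(1.0 + b**2 * M**2))

M = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
for b in (0.26, 1.0 / 3.0, 0.5, 1.0):
    print(f"b = {b:.2f}:", np.round(sigma_s(M, b), 2))
```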
On the other hand, the condensation of diffuse gas produced by the isobaric mode of thermal instability (TI; Field 1965) triggered in colliding flows has been recognized as playing an important role for molecular cloud formation (e.g. Hennebelle & Pérault 1999; Vázquez-Semadeni et al. 2006; Heitsch & Hartmann 2007), especially in early phases. However, the density PDF resulting from turbulent bi-stable flows, which preserves the bi-modal nature that is the signature of the development of TI, and the behavior of the distribution for each phase, have not been systematically studied.
The density PDF resulting from non-isothermal simulations has nevertheless been reported in some papers. Using two-dimensional simulations of turbulent thermally bistable flows, Gazol et al. (2005) have shown that the effective polytropic index of the gas increases with M, and that for Mach numbers between 0.5 and 1.25 (with respect to the gas at $10^4$ K) it remains < 1, with specific values depending also on the forcing scale, going from ≈ 0.2 to ≈ 0.4 for large scale forcing. They also found that the bimodal nature of the distribution becomes less pronounced as M increases. The same behavior has been reported from three-dimensional simulations (Seifried et al. 2011). Audit & Hennebelle (2010) compare the density distribution resulting from a bi-stable simulation with the one resulting when a single polytropic equation of state with γ = 0.7 is used. In the latter case they find that the low density part of the PDF is well described by a lognormal distribution, while for the higher densities the PDF is a power-law with an exponent of about −1.5, which is consistent with the distribution expected for a very compressible gas with γ < 1 predicted by Passot & Vázquez-Semadeni (1998). For the cooling run they find that the density distribution of the cold gas, whose effective polytropic index is close to 0.7, is well fitted by a lognormal for values larger than ∼300 cm⁻³. They attribute the difference in behavior to possible resolution effects and raise the question of whether or not the choice of a lognormal distribution in models of molecular clouds is adequate.
The density distribution resulting from more complex systems, including a variety of physical ingredients, has been studied by a large number of authors. Here we just mention some examples that illustrate the activity in this direction. In the context of galactic discs, the effect of varying the polytropic index has been numerically studied by Li et al. (2003), who find that for self-gravitating gas the density distribution shows an imperfect lognormal shape whose width decreases as γ increases. Wada & Norman (2007) use three-dimensional hydrodynamic simulations of a globally stable, inhomogeneous cooling ISM in galactic disks and report density PDFs which are well fitted by a single lognormal function (with larger dispersions for more gas-rich systems) over a wide density range. However, Robertson & Kravtsov (2008) include a detailed model of cooling and heating for temperatures < $10^4$ K and account for the equilibrium abundance of H₂. They conclude that each thermal phase in their model galaxies has its own lognormal density distribution, implying that using a single lognormal PDF to build a model of global star formation in galaxies is likely an oversimplification. Density distributions with well defined peaks have also been obtained from models of a vertically stratified interstellar medium dominated by supernovae (e.g. de Avillez & Breitschwerdt 2005; Joung & Mac Low 2006).
Observationally, PDFs of the average volume density in diffuse interstellar gas ($n_{HI} < 1$ cm⁻³ for the atomic gas) have been reported by Berkhuijsen & Fletcher (2008). For the atomic gas they used 375 lines of sight from the sample given by Diplas & Savage (1994), and they estimate the volume density using the column density and the distance to each star. They find that the PDF tends to be a lognormal but cannot be well fitted by a single curve. In particular, different widths and means are obtained depending on the position (in the disk or away from it) and on the type of sampled gas. Specifically, they find that a mixture of cool and warm gas along the LOSs causes an increase in the dispersion.
An associated tool is the column density PDF (Σ-PDF), which has been numerically studied mainly in the context of isothermal turbulent flows. Vázquez-Semadeni & García (2001) studied the relationship between a lognormal volume density PDF (n-PDF) and the resulting column density PDF, finding that when the number of decorrelated density structures along the line of sight is small enough, then the Σ-PDF is representative of the n-PDF and also has a lognormal shape. As the number of decorrelated density structures increases, the column density PDF slowly transits to a Gaussian distribution, passing through an intermediate stage where the distribution shows an exponential decay. As the relationship between the n-PDF and the Σ-PDF depends on the applicability of the Central Limit Theorem, the authors argue that for non-isothermal flows having a volume density PDF with a well defined variance, this theorem should apply and the Σ-PDF should converge to a Gaussian for lines of sight with a large enough number of decorrelated density structures. More recently, Burkhart & Lazarian (2012) used solenoidally driven isothermal MHD simulations to investigate the presence of an empirical relationship between the variance of the column density distribution and the sonic Mach number. They found a relationship with the same form as equation (3), namely

$$\sigma_{\ln(\Sigma/\Sigma_0)}^2 = A_\Sigma\ln(1 + b_\Sigma^2\mathcal{M}^2), \qquad (2)$$

where the scaling parameter A_Σ = 0.11 and b_Σ = 1/3, which is also close to the value reported for the 3D density distribution in the isothermal case.
On the other hand, Ballesteros-Paredes et al. (2011) numerically studied the evolution of the Σ-PDF resulting from colliding self-gravitating flows in the presence of TI, which are intended to model the formation process of molecular clouds. They found that a very narrow lognormal regime appears when the cloud is being assembled. However, as the global gravitational contraction occurs, the initial density fluctuations are enhanced, resulting first in a wider lognormal Σ-PDF, and later in a power-law Σ-PDF. These results suggest an explanation for the observational fact that clouds without star formation seem to possess a lognormal distribution, while clouds with active star formation develop a power-law tail at high column densities (e.g. Kainulainen et al. 2009).
In this work we quantitatively study the behavior of the density and column density distributions resulting from thermally bistable turbulent flows. For this purpose we analyze simple three-dimensional numerical experiments which include only a cooling function appropriate for the diffuse neutral interstellar gas and turbulent forcing. The paper is organized as follows. The model we use is described in §2. In §3 we present the analysis of the behavior of both the volume density PDF and the column density PDF. These results are discussed in §4. Finally, in §5 we present our conclusions.
The model
We use the same model as in Gazol & Kim (2010), where a MUSCL-type scheme (Monotone Upstream-centered Scheme for Conservation Laws) with HLL Riemann solvers (Harten, Lax, & van Leer 1983; Toro 1999) is employed to solve the hydrodynamic equations in three dimensions within a cubic computational domain with a physical scale of 100 pc on a side. The turbulence is randomly driven in Fourier space at large scales corresponding to 1 ≤ k ≤ 2, where k is the magnitude of the wave vector k. We use a purely solenoidal forcing for two reasons: first, it is the most common kind of forcing used in the isothermal case; second, also in the isothermal case, it is the kind of forcing for which the density PDF is best described by a lognormal distribution (Federrath et al. 2008). For further details concerning the turbulent forcing see Gazol & Kim (2010), and for a discussion on the effects of forcing see §4.3. In the real ISM, turbulent motions are driven by spatially localized sources; as the forcing in Fourier space is applied at all times in all the grid points, this kind of driving is not very realistic. However, we chose this driving method because it allows us to compare our results with previous work done for the isothermal case which includes Fourier space forcing. As an additional advantage, with this method we can study the density distribution for the dense gas as well as the density distribution for the diffuse gas (see §3). We utilize the radiative cooling function presented by Sánchez-Salcedo et al. (2002), which is based on the standard P vs. ρ curve of Wolfire et al. (1995).
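A minimal sketch of this type of driving (our illustration, not the code used for the simulations): random modes are generated in Fourier space in the band 1 ≤ |k| ≤ 2 and their compressive part is removed by a Helmholtz projection, leaving a purely solenoidal force field.

```python
import numpy as np

def solenoidal_force(n=64, kmin=1.0, kmax=2.0, seed=0):
    rng = np.random.default_rng(seed)
    k1 = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    band = (np.sqrt(k2) >= kmin) & (np.sqrt(k2) <= kmax)   # drive large scales only

    # random complex amplitudes for each velocity component, restricted to the band
    fk = (rng.normal(size=(3, n, n, n)) + 1j * rng.normal(size=(3, n, n, n))) * band

    # Helmholtz projection: subtract the component parallel to k (compressive part)
    kdotf = (kx * fk[0] + ky * fk[1] + kz * fk[2]) / np.where(k2 > 0, k2, 1.0)
    fk[0] -= kdotf * kx
    fk[1] -= kdotf * ky
    fk[2] -= kdotf * kz

    return np.real(np.fft.ifftn(fk, axes=(1, 2, 3)))

f = solenoidal_force()
div = (np.gradient(f[0], axis=0) + np.gradient(f[1], axis=1)
       + np.gradient(f[2], axis=2))
print("rms force:", f.std(), " rms divergence (should be ~0):", div.std())
```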
For all the simulations we present in the next section, the resolution is $512^3$, the boundary conditions are periodic, the gas is initially at rest, and the initial density and temperature are uniform, with $n_0 = 1$ cm⁻³ and $T_0 = 2399$ K, which correspond to thermally unstable gas.
Results
In this section we analyze the density distribution resulting from a set of seven simulations with different Mach numbers. The values of M for these simulations are 0.28, 0.73, 1.86, 2.60, 3.29, 5.50, and 6.23, where M is computed as the mean value of the local Mach number. In what follows we call this mean value M, and we use M as a generic abbreviation for Mach number in situations that are independent of the specific way used to compute it, or in situations where Mach numbers calculated in different ways are included.
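The sketch below illustrates how these averages can be computed from simulation cubes (our code, not the one used for the paper; the adiabatic index and mean molecular weight are our assumptions for atomic gas, and the density thresholds follow the values quoted in the abstract):

```python
import numpy as np

K_B = 1.380649e-16           # erg K^-1
M_H = 1.6726e-24             # g
GAMMA, MU = 5.0 / 3.0, 1.27  # assumed adiabatic index and mean molecular weight

def local_mach(vx, vy, vz, T):
    """Local Mach number |v|/c_s for v in km/s and T in K."""
    cs = np.sqrt(GAMMA * K_B * T / (MU * M_H)) / 1.0e5   # km/s
    return np.sqrt(vx**2 + vy**2 + vz**2) / cs

def conditional_means(M, n):
    """Global mean plus the diffuse- and dense-gas averages used in the text."""
    return M.mean(), M[n <= 0.6].mean(), M[n > 7.1].mean()

# tiny synthetic demo
rng = np.random.default_rng(1)
v = rng.normal(0.0, 5.0, size=(3, 32, 32, 32))
T = np.full((32, 32, 32), 2399.0)
n = rng.lognormal(0.0, 1.0, size=(32, 32, 32))
print(conditional_means(local_mach(*v, T), n))
```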
The Volume Density Distribution
Volume density histograms resulting from these simulations are displayed in Figure 1, along with lognormal fits to the high and the low density parts of the distribution. As expected from previous works (e.g. Gazol et al. 2005; Seifried et al. 2011), when the Mach number increases the PDF becomes wider and its bimodal nature, which is a consequence of TI development, becomes less pronounced. These two facts have been explained as a consequence of the decrease in the local ratio η between the turbulent crossing time and the cooling time produced by the increase of M (Gazol et al. 2005). This behavior has been confirmed by Seifried et al. (2011), who, using test particles, measured the time spent by a particle in the unstable regime as well as the frequency with which a test particle is perturbed, finding that both quantities increase with M (i.e. both quantities increase as η decreases). Concerning the shape of each part of the distribution, there are four things to note by simple inspection. First, the low density zone of the PDF can be relatively well described by a lognormal (see dotted lines in Figure 1). Second, the width of this lognormal increases with M. Third, for the smallest values of M we consider, the zone of the distribution corresponding to high densities is not well described by a lognormal (see dashed lines in Figure 1). Fourth, when a lognormal is an appropriate fit to the high density part, its width does not seem to systematically increase with the value of M as rapidly as in the diffuse case.
In Figure 2 we show the width σ_s of the lognormal fits (note that σ_s is the width of the logarithmic density distribution $\ln(n/\bar{n})$) that we obtain for the gas at low densities, as a function of the local Mach number M_w, which is computed as the average value of the local Mach number at points with densities within the range where the fit has been computed. The dotted lines in this figure are plots of the relationship between σ_s and M for isothermal gas (see eq. (1)) for two different values of the parameter b. Although for low M_w values σ_s is close to the b = 1/2 isothermal case, for our simulations it grows faster with M_w. In fact, the red line represents a fit of the form

$$\sigma_s^2 = A\ln(1 + b^2\mathcal{M}_w^2), \qquad (3)$$

where A = 2.25 and b = 0.33. This value of b is close to typical values found for isothermal turbulence with solenoidal forcing.
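The fit itself can be reproduced with a standard least-squares routine; in the sketch below (ours), the data arrays are placeholders standing in for the measured widths:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigma_model(M, A, b):
    # sigma_s for the fit sigma_s^2 = A * ln(1 + b^2 M^2)
    return np.sqrt(A * np.log(1.0 + b**2 * M**2))

# placeholder (M_w, sigma_s) pairs lying on the quoted best fit
Mw = np.array([0.2, 0.7, 1.8, 2.6, 3.3, 5.5])
sigma = sigma_model(Mw, 2.25, 0.33)
popt, _ = curve_fit(sigma_model, Mw, sigma, p0=(1.0, 0.5))
print("A = %.2f, b = %.2f" % tuple(popt))      # recovers (2.25, 0.33)
```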
For the dense gas, the behavior of σ_s with M_c (computed as the average value of the local Mach number at points with n > 7.1 cm⁻³; this density corresponds to the thermal equilibrium value below which the cooling function implies that the gas is thermally unstable) is qualitatively different from equation (3) (see Fig. 3). In this case, the distribution is narrower than in the isothermal case and its width grows more slowly with M_c than in the isothermal case. In fact, due to the large difference between the widths we obtain and the ones implied by equation (3), we do not include both cases in the figure. For instance, for M = 10 in the isothermal case, σ_s = 1.58 is expected if b = 1/3.
As mentioned in §1, previous works have found that the shape of the density PDF in non-isothermal turbulent flows depends on the value of the effective polytropic index γ (Passot & Vázquez-Semadeni 1998). For our simulations we first measured this index as the least-squares slope of the whole log P vs. log n distribution, and we plotted it as a function of M (Fig. 4, solid line). As expected from Gazol et al. (2005), γ increases with M and remains < 1. As can be seen from the log P/k vs. log n distributions displayed in Figure 5, for low values of M, TI can develop almost without disturbance, and the dense as well as the diffuse gas are predominantly in thermal equilibrium, with some of the gas transiting isobarically between the stable branches. On the other hand, at high M the mean pressure in low density gas decreases below the thermal equilibrium value, while the mean pressure in high density gas increases above its thermal equilibrium value, producing a net positive slope of the whole log P/k vs. log n distribution. If the value of γ obtained in this way were a suitable parameter to infer the shape of the density PDF, we would expect a power law at high density for all our simulations. Also from Figure 5 it is clear that for low values of M almost all the dense gas (n > 7.1 cm⁻³) is in thermal equilibrium, implying that its thermodynamic state is approximately well described by a polytropic relation with γ = 0.53, which is the power corresponding to the thermal equilibrium curve and which, according to the Passot & Vázquez-Semadeni (1998) theory, should exhibit a power-law PDF. The dotted line in Figure 4 corresponds to the slope of the log P/k vs. log n distribution computed for densities larger than 7.1 cm⁻³ and plotted as a function of the average local Mach number for dense gas, M_c. This slope decreases with M_c, from ∼0.53 for M = 0.28 (M_c = 1.07) to ∼0 for M = 6.23 (M_c = 16.91). This decrease is due to the increased dispersion at the lower density values of the high density gas, and implies that the polytropic description of the dense gas becomes less appropriate for large M values. For the diffuse gas, the slope of the distribution (dashed line), estimated for each simulation using density values within the range for which the fit to the corresponding PDF has been computed, increases from its thermal equilibrium value, namely 0.73, for low M_w simulations, to approximately 1 for very supersonic simulations, implying that the approximately lognormal shape we obtain is consistent with the predictions of Passot & Vázquez-Semadeni (1998).
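Measuring the effective polytropic index in this way amounts to a linear least-squares fit in log-log space; a minimal sketch (ours, with synthetic data placed on a γ = 0.53 polytrope):

```python
import numpy as np

def effective_gamma(n, P, mask=None):
    """Least-squares slope of log P versus log n, i.e. the effective
    polytropic index; an optional mask selects a density range."""
    if mask is None:
        mask = np.ones(n.shape, dtype=bool)
    slope, _ = np.polyfit(np.log10(n[mask]), np.log10(P[mask]), 1)
    return slope

# synthetic demo: gas on a gamma = 0.53 polytrope with small scatter
rng = np.random.default_rng(3)
n = rng.lognormal(0.0, 1.0, 10000)
P = 3000.0 * n**0.53 * rng.lognormal(0.0, 0.05, 10000)
print(effective_gamma(n, P))                      # ~0.53
print(effective_gamma(n, P, mask=(n > 7.1)))      # dense gas only
```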
The Column Density Distribution
As far as the authors know, the column density distribution resulting from turbulent thermally unstable simulations has not been systematically studied. It is thus interesting to study the Σ-PDF associated with the n-PDFs discussed in the previous section.
In Figure 6 we show the column density distribution resulting from our set of simulations. The high-density part of the distribution is well described by a lognormal (dotted lines) for all the values of M that we consider, even for the two smallest values (0.28 and 0.73), for which the volume density PDF approaches a power-law behavior. For these two values the bimodal nature of the n-PDF is preserved in the Σ-PDF. For larger values of M the distribution becomes single-peaked and the lognormal also fits a substantial fraction of the low-density part. The widths of the lognormals, plotted as a function of the Mach number in Figure 7 (solid line), are systematically larger than the corresponding values found by Burkhart & Lazarian (2012) for the isothermal case (dotted line), but they follow a qualitatively similar behavior with M. In fact this behavior can be fit by a function of the form (2) with A_Σ = 0.084 and b_Σ = 12.5 (dashed red line), yielding a fit error of 2.17%. This set of parameters is, however, not unique. As an example we also display the curve for A_Σ = 0.081 and b_Σ = 14.29 (dashed blue line), which has an error of 2.22%. Considering that the errors in fitting the lognormals are larger than the difference between the errors of these two fits, we can consider them equivalent. Note that in Figure 6 the Mach number is the rms value at the mean temperature, M_rms. We choose this value because the average local Mach number is dominated by the warm gas (T > 6100 K), whose volume fraction ranges from ∼30% for very turbulent simulations to ∼80% for the smallest value of M, which is very large compared with that of the cold gas (T < 310 K), whose fraction goes from ∼2% to 8%.
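A sketch of the two-parameter fit described above. It assumes equation (2) has the isothermal-like form σ² = A ln(1 + b²M²), consistent with the isothermal parameters A_Σ = 0.11 and b_Σ = 1/3 quoted later; the data arrays are hypothetical placeholders, and since the paper's error metric is not specified, a mean relative error is used here purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed functional form of equation (2): sigma^2 = A * ln(1 + b^2 * M^2)
def sigma_model(M, A, b):
    return np.sqrt(A * np.log(1.0 + (b * M) ** 2))

# rms Mach numbers and measured lognormal widths, one per simulation
# (hypothetical values standing in for the data behind Figure 7)
M_rms = np.array([0.28, 0.73, 1.9, 4.5, 6.2])
sigma_obs = np.array([0.9, 1.3, 1.8, 2.1, 2.3])

(A_fit, b_fit), _cov = curve_fit(sigma_model, M_rms, sigma_obs, p0=[0.08, 12.0])
err = 100 * np.mean(np.abs(sigma_model(M_rms, A_fit, b_fit) - sigma_obs) / sigma_obs)
print(f"A = {A_fit:.3f}, b = {b_fit:.2f}, mean relative error = {err:.2f}%")
```

As the text notes, fits of comparable quality can be obtained with rather different (A_Σ, b_Σ) pairs, so the best-fit parameters should be read as one representative solution rather than a unique one.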
Diffuse Gas
The approximate lognormal shape of the diffuse gas density PDF is consistent with observations reported by Berkhuijsen & Fletcher (2008) for galactic low density neutral gas. However, the fact that the lognormal fit fails at very low densities is also consistent with predictions by Passot & Vázquez-Semadeni (1998) for polytropic flows with γ < 1, according to which the PDF is expected to decrease faster than a lognormal at low densities.
The behavior of σ_s with M, which grows faster than in the isothermal case, shows the effects of the presence of TI. In fact, as discussed in Vázquez-Semadeni et al. (2003), increasing M has two effects on the development of TI. First, as the turbulent crossing time decreases it becomes shorter than the growth time for linear TI, which is the time for the gas to form condensations; this increases the fraction of unstable gas (e.g. Gazol et al. 2001) and produces a larger drift from thermal equilibrium. On the other hand, velocity fluctuations generate adiabatic density perturbations which are linearly unstable only when the cooling time is shorter than the dynamical time (the turbulent crossing time in the supersonic case), i.e. only at large scales. However, in the nonlinear regime reached as M increases, the local increase of density in forcing-generated compressions can locally decrease the cooling time, so that even initially small-scale fluctuations can become thermally unstable. The consequence of this nonlinear development of TI is an enhancement of the density contrast that can lead the width of the density PDF to grow faster with M than in the isothermal case.
The Dense Gas
The results we obtained for the high-density gas reconcile the theoretical prediction of Passot & Vázquez-Semadeni (1998) with previous numerical results for thermally bistable flows (Audit & Hennebelle 2010). According to the former, for a cooling function adapted to describe the dense atomic gas, with an effective polytropic index γ < 1 in thermal equilibrium, the density PDF is expected to develop a power law at high densities. On the other hand, Audit & Hennebelle (2010) report that the dense-gas density PDF resulting from thermally bistable colliding flows seems to be better described by a lognormal than by a power law. From the results presented in §3.1 it is clear that the dense-gas PDF can behave as a power law or as a lognormal depending on the value of M, and more specifically on the amount of dense gas out of thermal equilibrium. The transition between these behaviors could be due to the fact that the larger the amount of gas out of thermal equilibrium, the less adequate a polytropic equation of state becomes as a thermodynamic description of the gas. In fact, Passot & Vázquez-Semadeni (1998) suggest that the development of a power law for γ ≠ 1 is the consequence of density jumps that depend on the local density. In our simulations, the dense gas drifting away from thermal equilibrium is in fact a consequence of the density fluctuations being determined by velocity fluctuations. A remarkable difference between the power laws we obtain at high densities for low-M simulations and the predictions of Passot & Vázquez-Semadeni (1998) is the logarithmic slope. We get slopes of −4.43 and −2.41 for M = 0.28 and M = 0.73, respectively. This implies that the power laws resulting from our simulations are steeper than the power law they predict for γ = 0.5, which is expected to have a logarithmic slope of −1.2. Note, however, that this value corresponds to the large-M limit.
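For reference, a hypothetical sketch of how such logarithmic slopes can be measured from simulation data: a linear fit to the binned PDF in log-log space over a chosen high-density interval. The helper and its inputs are illustrative, not the authors' code; slopes like the −4.43 and −2.41 quoted above would come from a fit of this kind.

```python
import numpy as np

def pdf_tail_slope(n, n_lo, n_hi, bins=60):
    """Logarithmic slope of the volume density PDF over a chosen
    high-density interval [n_lo, n_hi].
    Hypothetical helper: n is the flat array of cell densities."""
    hist, edges = np.histogram(np.log10(n), bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = (hist > 0) & (centers > np.log10(n_lo)) & (centers < np.log10(n_hi))
    slope, _ = np.polyfit(centers[keep], np.log10(hist[keep]), 1)
    return slope
```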
The development of a lognormal at high densities in high-M simulations suggests that in some cases the use of this kind of distribution as an initial condition for the molecular cloud formation process is an adequate choice. Nevertheless, in our simulations the σ_s–M relationship is very different from equation (3), implying that at early times during the cloud formation process the density distribution cannot be related to the dynamical state of the gas through the isothermal version of this equation. In particular, for a given Mach number we find a narrower distribution. It is important to note that the presence of self-gravity, which is not included in our models, could have noticeable effects on the distribution of dense gas even in this early phase. This problem will be addressed in future work.
The Column Density
The dense branch of the column density PDF is well described by a lognormal regardless of the value of M. This is partially consistent with recent observational and numerical works suggesting that in clouds where the effects of gravity are not dominant in determining the cloud structure, the column density PDF has a lognormal shape (Kainulainen et al. 2011; Ballesteros-Paredes et al. 2011). In those works, however, turbulence is invoked as the main agent shaping the column density PDF. From our results it is possible to suggest that even when low levels of turbulence are present, the cooling properties of the gas can produce a lognormal column density PDF. In fact, Heitsch et al. (2008) investigate the expected timescales of the dynamical and thermal instabilities leading to the rapid fragmentation of gas swept up by large-scale flows and compare them with global gravitational collapse timescales. They identify parameter regimes in gas density, temperature, and spatial scale within which a given instability dominates, finding that the thermally dominated parameter regime has a remarkably large extent because, outside the strictly thermally unstable regions, cooling can still be the dominant agent leading to fragmentation in the presence of an external (in this case ram) pressure.
On the other hand, we find that the width of the column density distribution is much larger than in the isothermal case studied by other authors for MHD flows (Kowal, Lazarian & Beresnyak 2007; Burkhart & Lazarian 2012). This is a natural consequence of the large density dynamic range produced by the presence of thermal instability. The resulting σ_ln(Σ/Σ0)–M relationship has to accommodate the fact that the density contrast is large even for low Mach numbers, requiring a very rapid growth that implies parameter values completely different from the ones reported in the isothermal case, namely A_Σ = 0.11 and b_Σ = 1/3.
Numerical Issues
Several numerical factors can affect the results obtained in the present work.
Resolution
We have performed some higher-resolution simulations in order to assess the effects on the density PDF. For similar Mach numbers we find that the main consequence of increasing the resolution to 1024³ seems to be a deviation of the high-density tail in moderate-Mach simulations. In particular, for M ∼ 1.9, the distribution falls faster than a lognormal at high n. Unfortunately we do not have enough snapshots to quantify this effect and to measure σ in the high-resolution simulations. Although our PDFs may not be fully converged, and higher-resolution simulations could potentially lead to results quantitatively different from those presented in previous sections, the facts that the diffuse-gas PDF is well described by a lognormal and that the dense-gas PDF is well described by this kind of distribution only for large enough values of M do not seem to change with resolution.
Model
Even though our model allows both the study of the diffuse gas distribution and a direct comparison with isothermal numerical experiments reported by other authors, some of our choices can potentially affect the results. The periodic boundary conditions maintain a fixed amount of gas in the box, and this can artificially regulate the gas segregation.
The particular choice of the cooling function can also affect the gas segregation. This function could change for physical reasons such as abundance variations, heating rate variations, or additional cooling processes. Finally, additional physics such as the presence of self-gravity and magnetic fields could also modify the quantitative behavior of the density distribution. Federrath et al. (2008) and Federrath et al. (2010) found that for isothermal gas the width of the PDF depends not only on the rms Mach number but also on the relative degree of compressible and solenoidal modes in the turbulence forcing, with b = 1/3 appropriate for purely solenoidal and b = 1 for purely compressive forcing. For the thermally bistable case the effects of forcing could be more complex due to the interplay between TI and turbulence. Although simulations with different kinds of forcing have been presented by Seifried et al. (2011), the energy transfer between the solenoidal and the compressive modes in turbulent bistable flows has not been addressed. As shown in Gazol & Kim (2010) for purely solenoidal forcing, the presence of TI does significantly affect the density as well as the velocity power spectrum. In fact, it is well known that the development of TI produces turbulent motions with typical velocities of the order of tenths of km s−1 (Kritsuk & Norman 2002; Piontek & Ostriker 2004) but, as stated earlier, the presence of turbulence in turn modifies the development of TI. A detailed analysis of the effects of forcing, comparing also the resulting velocity power spectrum and the density-weighted velocity power spectrum for the compressive as well as the solenoidal modes, would be needed to address this problem, but it is beyond the scope of the present work.
Summary and Conclusions
In the present work we have presented numerical experiments showing that:
1. At low densities the volume density PDF resulting from turbulent thermally bistable flows can be well described by a lognormal distribution whose width increases with the Mach number.
2. A relationship between the width of the distribution and the average local Mach number can be found; however, this relationship is not the same as in the isothermal case. The value of the parameter b appearing in the isothermal relation is, for our simulations, surprisingly close to the one obtained for purely solenoidal forcing in isothermal gas, but the nonlinear development of TI, producing a more efficient rarefaction of the gas, causes a faster growth of the distribution width as M is increased. The consequence of this enhanced growth for the mathematical form of the σ_s–M relation is the presence of a scale factor distinct from 1.
3. At high densities the form of the volume density PDF depends on the value of the Mach number. For simulations with transonic or weakly supersonic average velocities in dense gas the distribution is a power-law, while in the presence of highly supersonic velocities the distribution becomes lognormal.
4. For simulations which develop a high-density distribution with a lognormal shape, the width of the distribution is smaller than in the isothermal case and grows more slowly with the Mach number.
5. At high densities the column density PDF resulting from our simulations can be described by a lognormal for all the Mach numbers we consider. As M increases, the density range over which the lognormal fit is adequate expands, and the lognormal becomes wider.
6. The width of the column density distribution resulting from our simulations is systematically larger than the width obtained in the isothermal case. A relationship between the width of the column density distribution and the rms Mach number at the mean temperature can be found. This relationship has the same form as the one reported in the literature for the isothermal case, but the parameters resulting from our fit are very different.
From these results it is clear that, when studying the diffuse and/or the dense atomic interstellar gas in order to relate the density structure, and in particular the width of its PDFs, to the dynamical state of the gas as characterized by the Mach number, the use of results obtained from isothermal turbulent flows is not an adequate choice. Specific relations between σ_s (or σ_ln(Σ/Σ0)) and M for thermally bistable flows should be taken into account in any observational or theoretical work using the density (column density) PDF as a measure of the Mach number. The relationships obtained in the present work could be affected by the inclusion of additional physics such as self-gravity, magnetic fields, or variations in the cooling function due to variations of the heating rate and the gas abundances.
The work of A. G. was partially supported by UNAM-DGAPA grant IN106511. Some of the numerical simulations were performed at the cluster Platform 4000 (KanBalam) at DGSCA, UNAM.
[Figure caption fragment: ...86 (right). In each panel the black curve corresponds to the thermal equilibrium implied by the cooling function, and contours are placed at 10% (violet), 30% (blue), 50% (green), 70% (orange), and 90% (red) of the logarithm of the maximum value of the two-dimensional histogram.] | 2013-01-18T00:15:27.000Z | 2013-01-18T00:00:00.000 | {
"year": 2013,
"sha1": "a42a2593dee6ae3fee635627e731c223c3b19320",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1301.4280",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a42a2593dee6ae3fee635627e731c223c3b19320",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
8037027 | pes2o/s2orc | v3-fos-license | Controlling the stoichiometry and strand polarity of a tetramolecular G-quadruplex structure by using a DNA origami frame
Guanine-rich oligonucleotides often show a strong tendency to form supramolecular architecture, the so-called G-quadruplex structure. Because of the biological significance, it is now considered to be one of the most important conformations of DNA. Here, we describe the direct visualization and single-molecule analysis of the formation of a tetramolecular G-quadruplex in KCl solution. The conformational changes were carried out by incorporating two duplex DNAs, with G–G mismatch repeats in the middle, inside a DNA origami frame and monitoring the topology change of the strands. In the absence of KCl, incorporated duplexes had no interaction and laid parallel to each other. Addition of KCl induced the formation of a G-quadruplex structure by stably binding the duplexes to each other in the middle. Such a quadruplex formation allowed the DNA synapsis without disturbing the duplex regions of the participating sequences, and resulted in an X-shaped structure that was monitored by atomic force microscopy. Further, the G-quadruplex formation in KCl solution and its disruption in KCl-free buffer were analyzed in real-time. The orientation of the G-quadruplex is often difficult to control and investigate using traditional biochemical methods. However, our method using DNA origami could successfully control the strand orientations, topology and stoichiometry of the G-quadruplex.
INTRODUCTION
Nucleic acids can adopt structures other than the canonical B-form duplex stabilized by Watson-Crick base pairing (1)(2)(3)(4). Among the noncanonical secondary structures (5)(6)(7)(8)(9)(10), the G-quadruplex is an attractive conformation of nucleic acids, and structural studies of G-quadruplex motifs are ongoing tasks in the field of chemical biology of nucleic acids (5,(10)(11)(12). G-quadruplex is a supramolecular architecture, which represents an unusual DNA secondary structure that can be formed by certain G-rich sequences. These structures comprise highly stable planar rings of four guanines stabilized through Hoogsteen hydrogen bonds leading to the formation of G-quartets (13). The stacking of the formed quartets contributes substantially to the stability of the quadruplex structure. The minimum requirement for the formation of a quadruplex is four separate tracts of at least three consecutive guanines. The topology of the G-quadruplex structures is polymorphic and reflects a range of alternate strand directionalities, loop connectivities and syn/anti-distribution of guanine bases around G-tetrads. This polymorphism and the stability of the quadruplex structures depend on the nucleic acid sequence, stacking between different G-tetrads, temperature, solvent, electrostatic interactions mediated by salts, and salt composition (14). For instance, it has been proposed that the quadruplex structure is regulated by sodium-potassium exchange (15). In addition to the strand polarity (parallel, antiparallel or mixed), glycosidic torsion angle (syn or anti) and the orientation of the loops (lateral, diagonal or both) (16), the G-quadruplex structures can also be differentiated by considering the stoichiometry of the strands into one (exclusively intramolecular structure) (5), two [for example, (3+1) type structure] (17), three (as we have described recently) (18) or four strands (19) (described in this report).
Although G-quadruplex structures were first observed with guanosine mononucleotides and synthetic G-rich oligomers (13), they were later found in natural sequences such as human chromosomal telomeres comprising the tandem repeats of the TTAGGG sequence with a single-stranded 3′-overhang of 100-200 nucleotides (20). Apart from the telomeric repeat sequences, quadruplex-forming G-rich regions have been identified in the human genome with enrichment in promoter regions of several proto-oncogenes such as c-myc, c-kit and K-ras (12,(21)(22)(23). There is experimental evidence for G-quadruplex formation in the genomic DNA of several other organisms (24)(25)(26). The formation of the quadruplex can affect a wide range of biological activities including genome stability, cell growth, gene regulation, transcription, translation, DNA replication and DNA repair (27). Thus, G-rich regions are of great interest as therapeutic targets, for example in developing anticancer agents (28)(29)(30)(31).
We have recently reported the real-time analyses on a (3+1) type G-quadruplex structure (17) and a theoretical analysis of the folding pathways of human telomeric type-1 and type-2 G-quadruplex structures (5). In this report, we present the direct and real-time analysis on the salt-induced formation of a tetramolecular G-quadruplex structure and its disruption in a salt-free condition. The materials prepared by the scaffolded DNA origami method (32)(33)(34)(35)(36) are shown to be novel substrates for the analysis of single-molecule reactions and functions (37)(38)(39)(40)(41)(42)(43)(44)(45). Thus, we have performed our analyses within a nanovessel (46,47) constructed using the origami method. Cations such as K+ and Na+ are known to stabilize the G-quadruplex structures via binding at the center of the G-quadruplex, generally sandwiched between two consecutive quartets (48). Because of the high intracellular concentration of K+, we have used K+ ions to mimic the cell-like condition and to execute the conformational switching. The conformational switching was monitored using high-speed atomic force microscopy (HS-AFM) (49-54) by following the topology change of the strands.
The important achievement in this study over the previous reports (17,19) is the demonstration of the capabilities of DNA origami structures for the control over the stoichiometry and strand polarity of the G-quadruplex structures. A number of indirect techniques have been used to study quadruplex structures. Among them, circular dichroism, thermal difference spectroscopy and gel electrophoresis have been used extensively. However, these techniques often fail to provide sufficient information about the strand polarity and stoichiometry. For example, Sen et al. reported a similar DNA synapsis through the formation of a tetramolecular G-quadruplex using gel electrophoresis (19). Though they could successfully achieve the DNA synapsis through the quadruplex formation, they found a mixture of quadruplex structures with different stoichiometry of the strands. However, in our single-molecule analysis using DNA origami, we could successfully control the stoichiometry of the strands and study exclusively the single G-quadruplex of interest rather than a mixture. Moreover, we have attached the DNA strands of interest within a DNA origami frame and we could successfully control the strand polarity (in the present case, antiparallel or mixed conformation), whereas it is difficult to control using other techniques or methods of analysis.
Chemicals and reagents
Tris-HCl, EDTA, MgCl2 and KCl were purchased from Nacalai Tesque, Inc. (Kyoto, Japan). Single-stranded M13mp18 DNA was obtained from New England Biolabs, Inc. (Ipswich, MA, USA; catalog no. N4040S). The staple strands (most of them are 32-mer) for the fabrication of the DNA origami frame (46), and the oligomers for the synaptic DNA, were received from Sigma Genosys (Hokkaido, Japan). The gel-filtration column and Sephacryl S-300 were purchased from Bio-Rad Laboratories, Inc. (Hercules, CA, USA) and GE Healthcare UK Ltd. (Buckinghamshire, UK), respectively. Water was deionized (≥18.0 MΩ cm specific resistance at 25 °C) by a Milli-Q system (Millipore Corp., Bedford, MA, USA).
Preparation of the origami frame and incorporation of the duplexes
The origami frame was prepared by annealing a solution of M13mp18 DNA (final concentration of 0.01 µM), staple DNA strands (4 equivalents, 0.04 µM), Tris-HCl (20 mM, pH 7.6), EDTA (1 mM), MgCl2 (10 mM) and KCl (0 or 100 mM) from 85 to 15 °C at a rate of −1 °C/min (33). The duplex DNAs (final concentration of 0.1 µM each) were also prepared under the same conditions as the origami frame. A ten-fold excess of each duplex was then mixed with the origami frame. The self-assembly of these duplexes inside the origami frame was carried out by reannealing the solution from 50 to 15 °C at a rate of −1 °C/min. The duplexes-incorporated origami was purified using a Sephacryl S-300 gel-filtration column before HS-AFM imaging. Gel-filtration columns were equilibrated with the same buffer and salt concentrations as the origami solution to be purified. After purification, the yield of origami formation and incorporation of the duplexes inside the origami frame was found to be ∼100%, while <10% of the structures were broken during AFM scanning. For the analysis in the absence of K+, all the experimental steps, such as preparation of origami and duplex strands, the Sephacryl column, surface immobilization and the observation buffer, contained no KCl, while all these steps contained 100 mM KCl for the experiments in the presence of K+.
The sample (2 µl) was adsorbed onto a freshly cleaved mica plate (φ 1.5 mm, RIBM Co. Ltd., Tsukuba, Japan) for 5 min at room temperature, and the surface was then washed three to five times with the same buffer solution, containing the same concentration of KCl as that in which the origami was prepared. Scanning was performed using the tapping mode in the same buffer solution. AFM images were recorded either using the observation buffer that contained KCl or KCl-free buffer, depending on the requirements. All images reported here were recorded with a scan speed of 0.2 frame/s. The yields of the parallel- and X-shapes were calculated by counting the shapes in the AFM images. Broken structures were not taken into account in the yield calculations.
Design of the origami frame and duplex DNAs
We have recently developed a defined DNA nanostructure, denoted as a 'DNA origami frame' (Figure 1a and b), to visualize the enzymatic reactions on double-stranded DNA (46,47). We used the same origami frame in the present study to observe the conformational switching and DNA synapsis with control over the stoichiometry and strand polarity. Briefly, this frame contains an inner vacant space of about 40 × 40 nm in which two sets of connection sites (A-B and C-D) were introduced to hybridize the duplex DNAs of interest. The length of each connection site is 32-mer (∼11 nm). The space between two connection sites (for example, A and B) is designed to be 64-mer double-stranded DNA, which corresponds to a length of ∼22 nm. To identify the orientation of the origami frame, a lacking corner at the right bottom of the frame was introduced (see Figure 1a and b).
For the G-repeat sequences, we have designed two different unique DNAs that are duplexes containing the G-G mismatch repeats in the middle. In each duplex, one of the strands is designed to have 16 bases long single-stranded overhangs at both termini that can assist the catenation of the duplex at the connecting sites present in the origami frame. To investigate the effect of the number of G-repeats on the formation of G-quadruplex, we adopted three, four or six G-repeats in each strand while keeping the number of base pairs of the duplexes constant (Supplementary Table S1). Because we fixed the duplex DNAs of interest inside the DNA origami, they may not diffuse easily to form the intermolecular G-quadruplex structure. Thus, to bring the duplexes closer, we imposed the structural flexibility to the incorporated strands by increasing the number of base pairs. Hence, the length of the top duplex was kept constant at 67-mer, the bottom duplex was varied to either 67-mer ('short duplex') or 77-mer ('long duplex') and the length between the two connecting strands in the origami was 64-mer ( Figure 1c). The sequences used in this study are listed in Supplementary Table S1.
Preparation of the origami assembly with synapsable duplexes
The origami frame was fabricated by folding the M13mp18 viral genome using 226 staple strands (32). The top and bottom duplexes were also prepared separately under similar conditions. A 10-fold excess of each duplex was mixed with the origami frame. The second annealing yielded the final duplex assembly inside the origami frame. The HS-AFM analyses were performed after removing the excess amount of unbound duplexes and staple strands in solution (for further details, see experimental section).
Static observation of the G-quadruplex formation
We first characterized the topology of the long duplex system with six G-repeats. Under K+-free conditions, the incorporated strands lay parallel to each other, and both duplexes could be seen clearly in the AFM images (Figure 2a and b). Similar results were also obtained for all other sequences with different G-repeat numbers (Figure 2d-f and Supplementary Figure S1). Next, we performed the analyses under a K+ environment. Interestingly, the topology of the incorporated duplexes changed to an X-shape, with the double helices stably bound to each other in the middle of the duplex, as shown in Figure 2c. This topology change in the presence of K+ may be due to the formation of a G-quadruplex structure from the G-G mismatches present in both duplexes. Such a topology change and the formation of the X-shape were observed for all three G-repeat sequences tested, as seen in the AFM images in Figure 2g-i (also Supplementary Figure S1). The duplex regions of the incorporated strands can be seen clearly in the AFM images even after the formation of the X-shape.
Height and width profiles
In the absence of K+, the estimated height difference between the origami frame and the incorporated duplexes was nearly zero for the long duplex with six G-repeats (Figure 2j). This is because both the origami frame and the duplexes adopted the B-form conformation and hence displayed the same height profile. However, in the presence of K+, the strands formed an X-shape, and the height analysis indicated that the X-shape was ∼0.6 nm taller than the origami frame (Figure 2k and l). Moreover, the height profiles estimated for the duplex and for the middle of the X-region of the incorporated strands in the same AFM image also indicated a taller profile of ∼0.6 nm for the X-region, as shown in Figure 2m. It has been shown that different types of G-quadruplex structures [quadruplexes formed within a longer duplex DNA with loops, blobs and spurs (58), and closely spaced G-quartets as in the case of G-wires (59) or loosely spaced G-quartets as in the beads-on-a-string structure (59)] display different heights; however, all are slightly taller (≥0.5 nm) than duplex DNA. Thus, our findings indicated that the X-shape was produced by the formation of the four-stranded G-quadruplex, which is slightly taller than the two-stranded origami frame or the incorporated duplexes. We also calculated the difference in width between the duplex strands and the X-region; the latter was ∼2.1 nm wider than the duplexes (Figure 2m). This indicated the association of four strands in the middle of the X-region, possibly because of the formation of the G-quadruplex structure. Note that the G-quadruplex region is tiny and our estimation may have a positional error, so it is uncertain whether we measured the width exactly on the G-quadruplex or on the merging of the duplex strands. However, our estimation indicates that the duplexes are in close proximity, which might be due to the formation of the G-quadruplex. The difference in height between the duplex regions and the origami frame was found to be nearly zero even after the quadruplex formation, indicating that the quadruplex formation allowed the DNA synapsis without disturbing the duplex regions of the participating sequences. The most reliable dimension in the AFM technique is the height value, and here the estimated height values agree well with the original design and are typical for B-form DNA and the G-quadruplex structure. The width values were slightly overestimated because of the common problem of tip-sample convolution in the AFM technique. However, because we considered the difference in width, these errors should largely cancel, yielding meaningful values.
Figure 2. (a-c) Zoom-out AFM images of the DNA origami frame with incorporated duplexes recorded in the absence (a, b) and presence (c) of KCl for the G-repeat number of six. The parallel-shape of the incorporated strands can be clearly seen in the absence of KCl, indicating that no G-quadruplex is formed in this case. The X-shape in the presence of KCl evidences the formation of the quadruplex structure. (d-f) Representative zoom-in images recorded in the absence of KCl for the sequences that contained six (d), four (e) and three (f) G-repeats. The same sequences in the presence of KCl are shown in (g-i), respectively. (j) The height profile estimated from the image given above the graph indicates that the origami frame and the incorporated duplexes are nearly the same in height. (k-l) Height profiles estimated (vertical: k, horizontal: l) indicate that the X-shape is slightly taller than the origami frame. This could be due to the formation of the four-stranded G-quadruplex, which is taller than the duplexes in the origami.
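As a minimal, hypothetical illustration of the line-profile measurements described in this section (not the authors' analysis code), the height difference between two regions of a scan line can be extracted from an AFM height map as follows:

```python
import numpy as np

def height_difference(height_map, row, a_cols, b_cols):
    """Mean height difference (in the map's units, e.g. nm) between two
    column intervals of one scan line of an AFM height map, e.g. the
    origami frame vs. the X-shaped crossover region.
    Hypothetical helper: height_map is a 2-D numpy array; a_cols and
    b_cols are (start, stop) column index pairs."""
    profile = height_map[row, :]
    return profile[slice(*b_cols)].mean() - profile[slice(*a_cols)].mean()
```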
Statistical analysis
To obtain statistically meaningful values, we calculated the yields of the parallel- and X-shapes under K+ and K+-free environments for all possible combinations of strands with different G-repeat numbers and lengths (Table 1). In the case of the long duplex (67-mer top and 77-mer bottom duplexes) with six G-repeats, most of the duplexes (77%) adopted the parallel-shape, whereas only a minor but significant fraction (23%) formed the X-shape when no K+ was added. This indicated that, in the absence of K+, most of the incorporated strands did not form the G-quadruplex. An abrupt change in the ratio between the parallel- and X-shapes was observed when K+ was added: 76% X-shapes and 24% parallel-shapes were observed, indicating the formation of the G-quadruplex. Similar results were obtained for the short duplex (both top and bottom duplexes were 67-mer), for which 14% and 41% X-shapes were found in the absence and presence of K+, respectively. The analyses with G-repeats of four and three yielded similar trends, and in all cases K+ induced the formation of the G-quadruplex structure.
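The paper reports these yields descriptively; as a hedged sketch that is not part of the authors' analysis, one could test whether the K+-induced shift in shape proportions is statistically significant with a contingency-table test. The counts below are hypothetical, chosen only to match the quoted percentages for the six-G-repeat long duplex, since the actual number of structures counted per condition is not given here.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts consistent with 23% vs. 76% X-shape fractions
#                 parallel  X-shape
table = np.array([[77,       23],    # no KCl
                  [24,       76]])   # 100 mM KCl
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # a shift this large would be highly significant
```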
In addition to the yield values, we derived the following information from our analysis: (i) the longer the G-repeats, the higher the G-quadruplex formation (i.e. 6 G > 4 G > 3 G); (ii) the longer the incorporated strands, the higher the G-quadruplex formation (i.e. 77-mer > 67-mer bottom duplex), indicating the need for structural flexibility of the incorporated strands for the formation of the G-quadruplex; (iii) because we fixed both duplexes inside the origami frame and the strands within a duplex are oriented antiparallel, the strand polarity of the formed G-quadruplex is expected to be antiparallel (note that the structural flexibility imposed on the incorporated strands serves only to bring the duplexes closer so that they can form the quadruplex structure in the middle, where the G-G mismatches are present; this flexibility is not sufficient for the strands to form other types of structures such as a parallel G-quadruplex, although there may be a possibility of a mixed G-quadruplex conformation between the antiparallel duplexes in which two strands orient in one direction and the other two in the opposite direction); (iv) the G-quadruplex was formed at the center of the incorporated strands, while leaving the duplex regions unaltered; (v) we could successfully monitor the conformational switching of G-rich strands with as few as three contiguous G-repeats, with strand concentrations in the nanomolar range in solution and roughly picomolar on the mica surface; (vi) in all cases, a minor but significant amount of X-shapes formed under the K+-free condition, possibly because the 10 mM Mg2+ present in the buffer induced a small amount of quadruplex structure. Note that previous studies using electrospray ionization mass spectrometry proved that the G-quadruplex can be formed in the presence of alkaline earth metal ions such as Mg2+, Ca2+, Sr2+ and Ba2+ (48). The stability order obtained was Sr2+ > Ba2+ > Ca2+ >> Mg2+ (48). Thus, we expect that the X-shape in the K+-free buffer could be the G-quadruplex induced by Mg2+ ions. This is also evidenced by the slight decrease in X-shapes (18% out of 331 frames counted and 17% out of 220 frames counted for the long duplexes with six and four contiguous guanines, respectively) at a lower Mg2+ concentration of 5 mM. Further, in some cases, the duplexes may be anchored closely and firmly on the mica surface and may look like an X-shape without true formation of the G-quadruplex. Moreover, a control experiment with duplexes containing six contiguous G-T and T-T mismatches failed to produce the X-shape in the presence of 100 mM K+ (2% X-shapes were found out of 273 frames counted, Figure 3), indicating that the minor amount of X-shape observed above in the absence of K+ is due to the Mg2+-induced quadruplex structure. This control experiment further evidences that the X-shapes obtained for the G-G mismatch-containing sequences result from the formation of the G-quadruplex structure.
Real-time observation of the salt-induced formation of a G-quadruplex
One of the major goals for scientists in various disciplines is to analyze biological reactions and functions in real-time (60)(61)(62), which could offer new insights. Real-time analysis is not possible with most available techniques, and HS-AFM is one of the few that allow it (49). Although the real-time analysis described below does not provide all the possible information, it is a good first step.
To visualize directly the formation of a single G-quadruplex in real-time, we selected the long duplex with six contiguous guanines as a representative example. The duplexes-incorporated origami frame was prepared, purified and adsorbed onto a freshly cleaved mica plate in the absence of any added K+. The sample was scanned in an observation buffer that contained 100 mM K+, and the image acquisition frequency was 0.2 frame/s. The results of the HS-AFM imaging are summarized in Figure 4a, and the real-time movie is given in Supplementary Movie S1. During the imaging, the incorporated duplexes lay parallel to each other and maintained the parallel-shape until 80 s, after which the strands suddenly (i.e. within 5 s) formed an X-shape. Once the X-shape was formed, it remained unchanged and did not dissociate throughout the scanning (125 s). This clearly indicated that the X-shape resulted from the conformational change of the G-G mismatch strands to a G-quadruplex and not from a simple overlap of the duplexes.
The duplex regions seemed undisturbed throughout the imaging, indicating that the DNA synapsis via G-quadruplex formation takes place without altering the duplex regions of the G-rich strands. In general, the quadruplex structure may form on a time scale of a few milliseconds to 0.2 s, depending on various factors such as sequence, number of G-repeats and salt concentration (63). Here, we could only discern that the change occurred within 5 s because the scan rate was 0.2 frame/s. However, the rate of quadruplex formation within the origami scaffold on a mica surface may be slower than in bulk solution, where the G-strands have a higher diffusion coefficient (17). On the other hand, confining the duplexes within the nanospace increases the local strand concentration, which may increase the association rate. The conformational change from the parallel- to the X-shape was often observed, and we could reproduce it in real-time (see Supplementary Figure S2).
Real-time analysis of the disruption of a G-quadruplex
Similar to the formation event, we monitored the deformation of a single G-quadruplex in the absence of K + ions for the same sequences. The origami frame with duplex strands of interest was prepared, purified and immobilized on a mica surface in a buffer containing 100 mM K + . The scanning was performed by immersing the mica surface in K + -free observation buffer. This K + -free buffer is expected to remove the K + that are prebound to the G-quadruplex and consequently destabilize it. As expected, the X-shape was observed at 0 s and remained unaltered for 10 s (Figure 4b, see Supplementary Movie S2 for the real-time movie). Then, the deformation of the X-shape was observed and resulted in the parallel-state of the G-G mismatch duplexes. Once the X-shape dissociated into the parallel-shape, the reverse conformational switching of the parallel-to X-shape was not observed thereafter (up to 65 s), indicating the deformation of the G-quadruplex by the release of K + ions. As in the case of G-quadruplex formation, the duplex regions were unaffected by the deformation of the DNA synapsis.
Real-time analysis has several advantages over static observation. The dynamics of a single G-quadruplex formation or disruption can be monitored only by real-time analysis. Using the static analysis, it is difficult to estimate the time required for the quadruplex formation or deformation, whereas at least a rough estimation is possible in the real-time analysis.
In conclusion, we report here the direct observation and single-molecule analysis of the KCl-induced formation of a tetramolecular G-quadruplex within a DNA nanoscaffold. The conformational switching of the G-G mismatched sequences within the duplex DNAs to a G-quadruplex was observed using HS-AFM by monitoring the topology change of the strands. In K+-free buffer, most of the incorporated duplexes had no interaction and lay parallel to each other. Addition of K+ induced the formation of a G-quadruplex structure by stably binding the double helices to one another in the middle of the duplexes. Such a quadruplex formation allowed the synapsis of the duplex DNAs, and the duplex regions of the participating sequences were found to be unaltered throughout our investigations. The effects of the duplex length and the number of G-repeats on the formation of the G-quadruplex structure were also investigated. We could monitor the formation of the G-quadruplex structure with a G-repeat number as low as three and with strand concentrations in the nanomolar range in solution and the picomolar range on the mica surface. The G-quadruplex formation in a buffer solution containing K+ and its deformation in K+-free buffer were analyzed in real-time. The orientation of the G-quadruplex is often difficult to control using traditional biochemical methods, particularly in the case of the intermolecular G-quadruplex structure. Because the G-G mismatched sequences were fixed inside the origami frame and the strands within a duplex were oriented antiparallel, the strand polarity of the formed G-quadruplex could be controlled. Moreover, the strand stoichiometry could also be controlled and is exclusively four in the present case. We have recently reported the ability of DNA origami structures to enable the analysis of DNA conformational changes such as the (3+1) type G-quadruplex (17) and the B-Z conformational transition (38). Here, we add one more example of DNA conformational changes by investigating the formation and disruption of the intermolecular four-stranded G-quadruplex within a DNA origami nanoscaffold. We expect that our method can be useful for understanding the fundamentals of other conformations of nucleic acids, drug screening and protein-induced conformational switching.
[Figure 3 schematic fragment: Long duplex; 100 mM KCl; Top duplex: 5'----GGGGGG----3' / 3'----T T T T T T----5']
SUPPLEMENTARY DATA
Supplementary Data are available at NAR Online. | 2017-04-14T23:18:37.365Z | 2013-07-17T00:00:00.000 | {
"year": 2013,
"sha1": "6c96976cdd116d8b2062f27898ac62bc5d8545ec",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/nar/article-pdf/41/18/8738/7698271/gkt592.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6c96976cdd116d8b2062f27898ac62bc5d8545ec",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
149782372 | pes2o/s2orc | v3-fos-license | Strengthening Families Through a Re-envisioned Approach to Fatherhood Education
Fatherhood education has the potential to affect not only fathers' nurturant behaviors but also multiple dimensions of family life. The weGrill program blends fatherhood, youth development, and nutrition education, with food grilling as the focal activity. Grounded in multiple learning theories, the program helps fathers and their adolescent children learn broadly about family life topics, planning for the future, and nutrition and healthful food behaviors. The program represents a re-envisioned approach to fatherhood education.
Introduction
For several decades, researchers, practitioners, and policy makers have acknowledged the need to strengthen family relationships through educational programs that target one or both parents (Fagan & Palm, 2004). The direction of influence typically flows from educator to parent and then from parent to child and family. Extensive efforts have been placed on influencing one particular population, fathers, with the goal of encouraging their nurturant involvement at home (Knox, Cowan, Cowan, & Bildner, 2011), including promoting a climate of health therein (Niermann, Kremer, Renner, & Woll, 2015).
of several weeks (Panter-Brick et al., 2014). Such programs engage participants in discussions, writing activities, role-plays, experiential learning, and video vignettes. With a few exceptions (e.g., Strengthening Families Program, Fathers and Sons Program), parent education programs generally, and fatherhood education programs specifically, typically do not include the participation of the child. Programs that do involve children are structured to provide instruction to parents and children (Semeniuk et al., 2010).
According to data collected in 2011-2012, 20.5% of adolescents and 34.9% of adults are obese (Ogden, Carroll, Kit, & Flegal, 2014). Evidence has suggested that eating as a family is associated with fewer incidences of unhealthful eating and obesity during childhood and adolescence (Martin-Biggers et al., 2014). In addition, intervention strategies such as father-child goal setting and interactive group activities to promote healthful family meals may also be effective in strengthening relationships. Moreover, dynamic educational approaches that strengthen relationships and promote positive nutrition may also affect other areas of personal and family life (Fitzgerald & Spaccarotella, 2009).
In this article, we outline a re-envisioned approach to fatherhood education: a program that incorporates effective models of change (e.g., health action process approach [HAPA]), multigenerational learning (fathers and children), experiential learning (activities, cooking/grilling), and gamification (cards). The program is called weGrill. Participants are fathers and their children (youths aged 11-16 years). The approach is intended to broaden the existing fatherhood education paradigm and provide practitioners with new ideas for encouraging nurturant father involvement, more meaningful father-child relationships, and more healthful home environments.
Program Description
The weGrill curriculum has three educational focuses: (a) fatherhood education, (b) youth development education, and (c) nutrition education. The overall goal of the program is to strengthen the father-youth relationship, and food grilling is the central activity by which that strengthening occurs. Families learn about and apply healthful food preparation practices and food safety principles grounded in the 2015-2020 Dietary Guidelines (U.S. Department of Health and Human Services and U.S. Department of Agriculture, 2015). The main instructional artifacts are hand-held cards, used for recipes, food safety information, fatherhood topics, and youth development topics. Program hosts lead gamified activities and discussions of fatherhood, youth development, and nutritious grilling. Session themes and topics scaffold learning because they build on previous session material.
Program Objectives
Specific program objectives exist to guide program design and implementation. First, the program is intended to enhance the father-youth relationship by increasing fathers' knowledge and capacity for nurturant and supportive parenting. Second, it is intended to promote youths' capabilities in the areas of teamwork, communication, leadership, self-determination, mastery, and technology adaptation. Third, the program is intended to increase knowledge of nutrition, healthful eating, safe food preparation, and healthful food access in the community. It is also anticipated that fathers and youths will increase the frequency with which they eat meals together and demonstrate new habits of healthful eating and less habitual eating of unhealthful foods.
The program relies on the HAPA model (Schwarzer, 2008) and gamification as instructional frameworks. The HAPA model uses a two-phase motivation-volition approach to engender planned participant success strategies.
Gamification is the application of games or game-like activities to nongame activities for the purpose of encouraging engagement and investment by participants. The program design also incorporates multiple learning theories, including Vygotsky's socio-cultural learning theory and his concepts of scaffolding and learning from a more capable peer (Vygotsky, 1978) and Bandura's social learning theory with an emphasis on the concepts of agency, self-efficacy, and modeling (Bandura, 2004). Specifically, hosts draw on participants' real-world experiences (a) to encourage fathers to reinforce their children's understanding of material and (b) to engage youths in teaching and learning from other youths. As a result, participants take ownership of their learning and make it personally relevant to their own circumstances so that application occurs during the session and at home.
Program Schedule
weGrill is structured to give fathers and youths time to learn together, time to learn in small groups, and time to grill and eat together. Each session is arranged so that half of the session is dedicated to nutrition/food safety instruction, grilling, and eating and half is dedicated to fatherhood and youth development instruction. Through the use of instructional recipe and food preparation cards, grilling and eating are done in family dyads, allowing communication and teamwork to occur. For about half of the fatherhood/youth development instruction period, family members separate, with fathers meeting to learn about and discuss nurturant fathering behaviors and youths meeting to learn about leadership, communication, and family life. Once the small-group activities are finished, fathers and youths reconvene to complete a session-closing activity.
Implications for Extension Professionals
The re-envisioned approach to fatherhood education described here blends fatherhood, youth development, and nutrition education into a single program with food grilling as the centerpiece activity. Extension professionals who work with fathers may consider implementing the following pedagogical, programmatic, and practical recommendations:
Create male-friendly activities, such as grilling, to engage fathers at recruitment and throughout program sessions.
Include the father's child in the program, and provide opportunities for the two of them to interact with each other in positive ways.
Draw on fathers' real-world experiences to scaffold new material into their existing social cognitive schemata.
Play games that are engaging and that reinforce the material.
Seek to affect multiple areas of life to increase the likelihood that attitude and behavior change will become part of everyday life.
Involve personnel from different Extension areas (i.e., family and consumer sciences, agriculture, 4-H) and members of the community to teach the program.
| 2019-05-12T14:22:06.287Z | 2017-04-01T00:00:00.000 | {
"year": 2017,
"sha1": "e6368817ceae7a0644b5b893198b50eefc153be2",
"oa_license": "CCBYNCSA",
"oa_url": "https://tigerprints.clemson.edu/cgi/viewcontent.cgi?article=1754&context=joe",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "87e331570b65d74b54328ca407daeb23f72aec72",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
242723644 | pes2o/s2orc | v3-fos-license | Improved Mood Boosts Memory Training Gains in Older Adults With Subjective Memory Complaints: A Randomized Controlled Trial
Objective: Older adults with subjective memory complaints (SMC) have a higher risk of dementia and commonly demonstrate symptoms of depression and anxiety. The study aimed to examine the effect of a memory training program for individuals with SMC, and whether memory training combined with group counseling aimed at alleviating depression and anxiety would boost memory training gains. Design: A three-armed, double-blind, randomized controlled trial. Setting and Participants: Community-dwelling older adults with SMC, aged ≥ 60 years. Methods: Participants (n = 124) were randomly assigned to memory training (MT), group counseling (GC), or GC+MT intervention. The GC+MT group received 4-hour group counseling followed by a 4-week memory training, while the MT group attended reading and memory training, and the GC group received group counseling and health lectures. Cognitive function and symptoms of depression and anxiety were assessed at baseline, mid-, and post-intervention. The GC+MT group and GC group had resting-state functional magnetic resonance imaging at mid- and post-intervention. Results: After group counseling, the GC+MT and GC groups showed reduced symptoms of anxiety and depression, compared to the MT group. Memory training enhanced memory performance in both MT and GC+MT groups, but the GC+MT group demonstrated larger memory improvement (Cohen's d = 0.96) than the MT group (Cohen's d = 0.62). Amygdala-hippocampus connectivity was associated with improved mood and memory gains. Conclusion and Implications: Group counseling reduced symptoms of anxiety and depression, and memory training enhanced memory performance. Specifically, improved mood induced larger memory training effects. The results suggest that it may be necessary to include treatment for depression and anxiety in memory intervention for older adults with SMC. Trial Registration: ChiCTR-IOR-15006165 in the Chinese Clinical Trial Registry. [Misplaced fragment: ...conducted a correlation analysis to validate the relationship between group counseling-related changes (mid- minus pre-intervention) and memory training-related changes (post- minus mid-intervention). Correlation analysis revealed a positive]
Introduction
Individuals with subjective memory complaints (SMC) report declining memory without measurable cognitive deficits. SMC crosses the boundary between normal aging and mild cognitive impairment [1,2], and is associated with higher risks of subsequent cognitive decline and dementia [3,4], as well as poor quality of life [5]. Cross-sectional and longitudinal evidence shows individuals with SMC have an increased likelihood to manifest Alzheimer's Disease biomarkers such as brain amyloid deposition [6,7], glucose hypometabolism [7], and hippocampal volume loss [8]. SMC is considered a "promising" stage for non-pharmacologic interventions aimed at delaying cognitive decline and preventing cognitive impairment [9].
Cognitive training is one of the most commonly used non-pharmacologic interventions. Some studies have shown that older adults with SMC benefit from cognitive training [10][11][12], while others failed to find significant cognitive improvement after training [13,14]. A meta-analytic study [15] revealed that cognitive training could improve cognitive function in older adults with SMC, with a small to moderate effect size (Hedges' g = 0.38). Several studies have reported structural plasticity in response to cognitive training in individuals with SMC [16,17]. Engvig et al. (2014) found that after training, SMC participants showed an increase in gray matter volume in the brain area surrounding the episodic memory network, with cortical volume expansion comparable to that of healthy controls [16]. In addition, individual differences in left hippocampal volume change in the SMC group were related to verbal recall improvement following training [17]. These results suggest that training-related brain changes can be evident in older adults with SMC, the earliest stage of cognitive impairment.
Individuals with SMC commonly demonstrate symptoms of depression and anxiety [18,19]. Depression and anxiety have been found to be detrimental to memory performance [20][21][22], and can lead to greater cognitive decline [23,24] and an increased risk of progression to dementia [25,26]. Animal studies demonstrate that exposure to psychological distress may harm older adults' memory by causing neurological deterioration in the limbic system, including the hippocampus [27]. Given the close relationship between depression/anxiety and memory functioning, alleviating depressive and anxious symptoms may need to be incorporated into memory training programs to optimize training efficacy [28]. There may be greater memory improvement if treatment for depression and anxiety is added to memory training in older adults with memory complaints.
To our knowledge, no experimental study has directly examined whether memory training combined with psychological interventions for depression and anxiety would outperform traditional memory training. A few training studies [29,30] integrated stress management techniques into memory training in healthy older adults and found these comprehensive memory training programs reduced symptoms of anxiety and improved cognitive performance compared to placebo or waitlist groups. However, comparison with passive control groups cannot isolate the effect of stress management from pure memory training.
The aim of this study was to determine whether interventions for depression and anxiety would facilitate training gains on memory performance in older adults with SMC. We developed a comprehensive memory training program by combining psychological interventions with memory training. We evaluated the efficacy of the combined interventions by comparing it with memory training and psychological intervention alone. We also used resting-state functional magnetic resonance imaging (fMRI) to explore the neural mechanism of the boost effect of counseling-induced positive emotion on memory training gains.
We expect that (1) the combined interventions would induce larger memory improvements than memory training or psychological intervention alone, and (2) the boost effect of improved emotion on memory training gains would be associated with the functional connectivity (FC) between hippocampus and amygdala.
Research Design
This study was an active-controlled, randomized trial conducted between November 2013 and July 2014. It was registered in the Chinese Clinical Trial Registry (www.chictr.org.cn, identifier ChiCTR-IOR-15006165). The protocol was approved by the Ethics Committee of the Institute of Psychology, Chinese Academy of Sciences (CAS). All participants provided written informed consent according to institutional guidelines. The study was reported according to the Consolidated Standards of Reporting Trials [31] (CONSORT) and the extension for social and psychological interventions [32] (CONSORT-SPI; see Supplementary Materials for the CONSORT-SPI 2018 checklist).
Participants
Community-dwelling older adults were recruited from neighborhoods near the Institute of Psychology, CAS through advertisements and flyers posted in the community service stations. The inclusion criteria were: (1) age ≥ 60 years; (2) education ≥ 6 years; (3) a score ≥ 21 on the Montreal Cognitive Assessment - Beijing Version [33] (MoCA-BJ); (4) with SMC; (5) right-handed; (6) free of neurological deficits or traumatic brain injury; (7) a score ≤ 15 on the Activities of Daily Living scale [34]; (8) no severe visual or auditory impairment which would hinder intervention.
The following criteria [2] were used for screening SMC: (1) subjectively reported a decline in memory, rather than other domains of cognitive function; (2) onset of SMC within the last 5 years; (3) worries associated with memory decline; (4) feeling of worse memory performance than others of the same age group; (5) performance on the objective memory scale was within the normal range or within 1 standard deviation below the normal value. Subjective memory complaints were assessed by the Memory Inventory for the Chinese [35].
Power analysis was conducted using G*Power 3.1 [36] based on the efficacy of memory training on associative learning. A minimum sample size of N = 93 is necessary to detect a small to moderate effect on the within-between interaction using a repeated measures two-way analysis of variance (ANOVA) (alpha = 0.05, power = 0.80, f = 0.15, number of groups = 3). Two hundred and nineteen participants were contacted and assessed for eligibility. One hundred and twenty-four eligible participants consented to participate in the intervention. After baseline evaluation, they were randomly allocated to three groups: memory training (MT) group (n = 38), group counseling (GC) group (n = 44), and GC + MT group (n = 42). A researcher who was not involved in study design, participant enrollment, intervention implementation, or assessment used SPSS 21.0 (IBM Corporation, Somers, NY) to generate the random allocation sequence and assigned participants to the three groups. Figure 1 shows the flow of the participants. Nine participants in each group discontinued the intervention because of illness, time conflict, or traveling. The attrition rate was comparable among the three groups. In total, 97 participants who completed the intervention were analyzed.
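As a rough cross-check of the reported sample-size target, the sketch below computes power for the group × time interaction with a noncentral-F calculation. It assumes a common parameterization with three measurement occasions, a repeated-measures correlation of 0.5, and sphericity; these defaults are our assumptions, not settings reported by the authors, so the exact G*Power result may differ slightly.

```python
# Approximate power for a 3 (group) x 3 (time) repeated-measures interaction,
# using an assumed noncentrality formula: lambda = f^2 * N * m / (1 - rho).
from scipy import stats

def interaction_power(n_total, k_groups=3, m_measures=3, f=0.15, rho=0.5, alpha=0.05):
    df1 = (k_groups - 1) * (m_measures - 1)          # numerator df of the interaction
    df2 = (n_total - k_groups) * (m_measures - 1)    # denominator (error) df
    lam = f ** 2 * n_total * m_measures / (1 - rho)  # noncentrality parameter (assumed formula)
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return 1 - stats.ncf.cdf(f_crit, df1, df2, lam)

n = 12
while interaction_power(n) < 0.80:
    n += 3  # keep the three groups balanced
print(n)  # roughly 90 under these assumptions, close to the reported minimum of N = 93
```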
Procedure
Each of the three groups attended seven weeks of intervention. During the first three weeks, the GC and GC + MT groups attended weekly group counseling while participants in the MT group completed reading assignments at home as control activities. From Week 4 to Week 7, the MT and GC + MT groups received memory training, and the GC group attended lectures as control activities. Group counseling, memory training, and lectures were group-based and delivered at the Institute of Psychology, CAS (Table 1). Participants were blind to study design and hypotheses, counseling psychologists and training instructors were blind to study design and hypotheses, and all assessors were blind to group allocation and study design. Participants who completed the intervention received a cash incentive of 300 RMB after the post-intervention assessment, and those who attended fMRI scanning received an extra 200 RMB. The GC and GC + MT groups attended group counseling. Group counseling was led by two licensed counseling psychologists and administered in small groups (6-10 people).
Activities were designed to provide information on the aging process and cognitive aging, strategies for coping with stress and depression in late life, and knowledge about lifestyle and brain health. Participants were encouraged to share personal experiences and engage in interpersonal communication. Homework was assigned after each session.
The MT group received reading assignments. Participants were instructed to complete reading independently at home and to record their reading progress on a log sheet. The reading materials were articles on healthy/positive aging, and strategies for coping with late-life stress and depression.
Memory Training
The MT and GC + MT groups attended memory training (group-based, 12 sessions in total; 3 sessions per week, 90 minutes per session). Each session included 60-minute mnemonic training and 30-minute brain game playing. Mnemonic training was designed to promote elaborate encoding and retrieval in older adults by teaching them a series of mnemonics, including generation of mental images, item association (interactive imagery and sentence generation), and the method of loci. Participants were assigned homework to continue practicing mnemonics at home. Brain games were designed to train three components of executive function (inhibition, switching, and updating) through three tablet video games (Li et al., 2014) [38].
Associative learning was assessed with an associative learning test (ALT): a list of 12 pairs of nouns was presented aurally to participants. Half of the word pairs were semantically associated (e.g., sun-moon; ALT-easy condition), and the other six were unrelated pairs (e.g., teacher-railway; ALT-difficult condition). Immediately after listening to the list, the first noun in each word pair was given as a cue, and participants were asked to recall the second noun. Participants scored 0.5 points for each correct answer in the easy condition (ALTeasy) and 1 point for each correct answer in the difficult condition (ALTdiff). A composite ALT score that ranged from 0 to 9 was calculated. Working memory ability was measured by the Digit Span Forward (DSF) and Digit Span Backward (DSB) tasks [39].
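As an illustration of the composite ALT scoring rule described above, here is a minimal sketch; the function and variable names are ours, not taken from the study's materials.

```python
# 0.5 points per correct easy pair, 1 point per correct difficult pair;
# with 6 easy and 6 difficult pairs, the composite ranges from 0 to 9.
def alt_composite_score(correct_easy: int, correct_difficult: int) -> float:
    assert 0 <= correct_easy <= 6 and 0 <= correct_difficult <= 6
    return 0.5 * correct_easy + 1.0 * correct_difficult

print(alt_composite_score(6, 6))  # 9.0, the maximum composite score
print(alt_composite_score(4, 2))  # 4.0
```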
A battery of questionnaires was used to evaluate the effects of group counseling on emotion.
(1) The Self-rating Anxiety Scale [40] (SAS) was used to assess the state of anxiety.
Table 2 shows the demographic and clinical characteristics of the participants. The three groups did not differ significantly in gender, age, years of education, cognitive function, or emotional indicators. The adherence rate across the three groups was 81.35%. No adverse events were reported by participants.
Behavioral data on the effects of memory training and the boost effect of counseling-induced positive emotion
Effects of group counseling on emotions
ANOVA revealed significant Group × Intervention interactions in anxiety, depression, and subjective well-being, and a marginally significant interaction in attitudes toward aging (ATA) (Fig. 2, Table 3). Further analysis revealed that after group counseling (mid- minus pre-intervention), the MT group showed no significant difference in anxiety and ATA, a decrease in well-being, and an upward trend in depression, whereas the GC and GC + MT groups showed a downward trend in anxiety, no significant difference in well-being and depression, and an increase in ATA. These results suggest that, compared with the MT group, group counseling reduced negative emotions and maintained subjective well-being.
Effects of memory training and the boost effects of group counseling
Regarding cognitive outcomes, ANOVA revealed significant Group × Intervention interactions in associative learning (Fig. 2, Table 3). Further analysis showed that, after memory training (post- minus mid-intervention), the GC group showed no significant improvement in ALT (p = 0.08, Cohen's d = 0.26) and ALTdiff (p > 0.05, Cohen's d = 0.09), while the MT group significantly increased performance in ALT (p = 0.001, Cohen's d = 0.62) and ALTdiff (p < 0.001, Cohen's d = 0.68), as did the GC + MT group (p < 0.001, Cohen's d = 0.96 for ALT; p < 0.001, Cohen's d = 1.08 for ALTdiff). Compared with the GC group, the two memory training groups showed enhanced memory performance. Compared with the MT group, the GC + MT group demonstrated greater memory improvements.
We further conducted a correlation analysis to validate the relationship between group counseling-related changes (mid- minus pre-intervention) and memory training-related changes (post- minus mid-intervention). Correlation analysis revealed a positive correlation between the change scores in ATA and Digit Span Forward only in the GC + MT group (r = 0.346, p = 0.049) but not in the MT (r = 0.146, p = 0.449) or the GC groups (r = 0.174, p = 0.325).
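A minimal sketch of this per-group change-score correlation is shown below; the data file and column names are hypothetical placeholders rather than the study's actual data layout.

```python
# Correlate counseling-related change (mid - pre, ATA) with training-related
# change (post - mid, Digit Span Forward) separately within each group.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("change_scores.csv")  # hypothetical: one row per participant

df["ata_change"] = df["ata_mid"] - df["ata_pre"]
df["dsf_change"] = df["dsf_post"] - df["dsf_mid"]

for group, sub in df.groupby("group"):          # e.g., "MT", "GC", "GC+MT"
    r, p = pearsonr(sub["ata_change"], sub["dsf_change"])
    print(f"{group}: r = {r:.3f}, p = {p:.3f}")
```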
Emotional improvements, memory training gains and amygdala-hippocampus connectivity
In brief, emotional improvements in anxiety and ATA were positively correlated with FC between the right amygdala and left hippocampus, and negatively correlated with FC between the right amygdala and right hippocampus.
The ROI-based analyses were performed to examine the correlation of amygdala-hippocampus connectivity with memory training gains. Results showed that FC between the left hippocampus and amygdala positively correlated with improvements in Digit Span Forward when individual differences in emotional changes were controlled. The voxel-wise analysis validated the boost effect of amygdala-hippocampus connectivity on cognitive improvements. In addition, it also showed a negative relationship between ALT improvements and FC between the right hippocampus and amygdala. However, this correlation was not significant when individual differences in emotional changes were controlled. Detailed results are presented in the Supplemental Materials (Results S1-S3).
These results suggested that counseling-induced emotional improvements manifested as changes in the amygdala-hippocampus pathway and that, in turn, changes in this pathway influenced memory training gains in older adults. It is noteworthy that a functional separation was demonstrated between the FC of the amygdala with the left hippocampus and that with the right hippocampus.
Discussion
This study examined whether memory training combined with group counseling aimed at alleviating depression and anxiety would produce greater training gains in older adults with memory complaints. The active-controlled randomized trial compared the combined intervention (GC+MT) group with memory training and group counseling groups. Results show that 3 sessions of group counseling decreased symptoms of depression and anxiety, maintained well-being, and promoted attitudes towards aging. Memory training enhanced performance on associative learning, consistent with previous training studies [15, 44-46], which reported that individuals with SMC can benefit from cognitive training. More importantly, the GC+MT group demonstrated a larger improvement in memory (Cohen's d = 0.96) than the memory training group (Cohen's d = 0.62), suggesting that improved emotional states derived from group counseling boosted the effect of subsequent memory training. The present study expands previous multicomponent memory interventions by providing direct evidence supporting the synergistic effects of psychological intervention and memory training on cognitive outcomes. Our finding highlights the importance of treating negative emotional states correlated with subjective memory decline and the significance of promoting positive self-perception of aging.
Integrating psychological intervention into traditional memory training may therefore be a promising way to augment its effectiveness on cognitive performance for older adults with SMC.
The present study also demonstrated that the boost effect of positive emotion on training benefits was related to amygdala-hippocampus connectivity. The amygdala and hippocampus are two key brain regions related to human emotion and cognition. Several imaging studies [17,47] found that cognitive training was associated with hippocampus-related regions. A study [46] using multidomain MRI scans found that resting-state connectivity between the right hippocampus and the superior temporal gyrus significantly differed between the pre- and post-test. Although episodic memory critically depends on the hippocampal complex, the amygdala is important for modulating the neural circuitry of episodic memory. Previous researchers [48,49] suggested that, through the amygdala's influence, emotion can alter three components of episodic memory: encoding, consolidation, and the subjective sense of remembering. Recent studies [50] also found that the emotional significance of an experience influenced cognitive processing, and emotionally arousing events were typically better remembered than neutral events. Through the amygdala-hippocampus circuit, negative emotions probably have an impact on cognitive processes such as attention and perception [51], and alleviated depression and anxiety can facilitate a greater magnitude of cognitive training gains.
The present study confirmed the boost effect of improved mood on memory training from both behavioral and cognitive-neural perspectives. There are several strengths in the present study. First, by combining psychological intervention with cognitive training, we conducted pilot experimental work to investigate whether improved emotional states would amplify the efficacy of cognitive training, which contributes to a better understanding of the relationship between memory and emotion in individuals with SMC. Second, we used an active-controlled design in which the intervention and control activities were matched in frequency, duration, and format for both group counseling and memory training. This enabled us to control several potential confounding factors such as expectation effects, social interaction during group training, and the general cognitive stimulation of using tablets. Finally, we combined behavioral and cognitive-neural analyses to confirm the boost effect, which strengthened the reliability of the finding.
Some limitations should also be mentioned. First, the MT group did not receive fMRI scanning, which prevented a systematic comparison of intervention-induced functional changes among the three groups. Further, as not all participants met the requirements for MRI scanning, the samples for the behavioral and MRI data were not strictly matched, which might complicate the interpretation of the results. Second, the duration of group counseling and memory training was relatively short, which might limit the emotional and cognitive benefits derived from the intervention. Third, no follow-up data were collected, so we cannot evaluate whether the superior intervention effect in the combined group would be maintained. Fourth, 27 out of 124 participants at baseline withdrew during the intervention. Although the attrition rate was comparable across the three groups, the decreased sample size reduced the power to detect small effect sizes on emotional and cognitive outcomes.
Conclusions And Implications
In conclusion, the present study shows that memory training combined with group counseling for memory complaint-related depression and anxiety can induce larger memory gains than memory training or group counseling alone in older adults with SMC. It may be important to integrate treatment for depression and anxiety into cognitive training for older adults with memory complaints to achieve a better intervention effect.
Declarations
Ethics approval and consent to participate The protocol was approved by the Ethics Committee of the Institute of Psychology, Chinese Academy of Sciences (CAS). All participants provided written informed consent according to institutional guidelines.
Consent for publication
Not applicable.
Availability of data and materials
The datasets analyzed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare that they have no competing interests.
"year": 2021,
"sha1": "07c07229593af851c11f24c15274723d6b31bad5",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-164434/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "ebe866592f4439842b7d64fc6a67fccd60e17b24",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
How technological change affects regional voting patterns
Does technological change fuel political disruption? Drawing on fine-grained labor market data from Germany, this paper examines how technological change affects regional electorates. We first show that the well-known decline in manufacturing and routine jobs in regions with higher robot adoption or investment in information and communication technology (ICT) was more than compensated by parallel employment growth in the service sector and cognitive non-routine occupations. This change in the regional composition of the workforce has important political implications: Workers trained for these new sectors typically hold progressive political values and support progressive pro-system parties. Overall, this composition effect dominates the politically perilous direct effect of automation-induced substitution. As a result, technology-adopting regions are unlikely to turn into populist-authoritarian strongholds.
Introduction
The widespread use of new technology at the workplace has raised fears about wage pressure and employment loss. Influential work in labor economics shows that capital in the form of industrial robots or specialized software directly replaces certain routine tasks previously done by human labor in both white- and blue-collar occupations (Autor et al., 2003; Acemoglu and Restrepo, 2019). These findings have sparked a vivid debate about the political and societal consequences of such an uncertain future of work (Gallego and Kurer, 2022). A growing literature in political science has gathered evidence that workers directly threatened by a transforming employment structure seek ways to express their discontent and disproportionately support anti-establishment parties (Frey et al., 2018; Im et al., 2019; Kurer, 2020; Anelli et al., 2021; Milner, 2021).
However, there is another side of the same coin that has attracted less attention in this literature. Labor economists also agree that new technologies increase productivity and contribute to rising demand for labor in non-automatable tasks, which may well result in aggregate welfare gains. It is widely accepted that this productivity growth leads to job creation, yet of a very different type of job. Far away from the conveyor belt, new jobs tend to pertain to more high-skilled, cognitive, and interactive occupations oftentimes requiring tertiary education (e.g., Michaels et al., 2014; Graetz and Michaels, 2018; Dauth et al., 2021). In contrast to the emblematic manufacturing worker, individuals in those growing occupations tend to hold more cosmopolitan and progressive values through their experience of higher education and the exposure to a profoundly different work logic (Kitschelt, 1994; Oesch, 2006; Kitschelt and Rehm, 2014). In this sense, technological change may also lay the foundation for a socially progressive society, a possibility that is widely appreciated in the influential literature on the rise of the "knowledge economy" (e.g., Iversen and Soskice, 2019).
This paper explicitly recognizes that technological innovation affects regional voting outcomes in two ways. On the one hand, there is a direct effect on workers who are threatened by technology and may well become more supportive of radical right and populist forces. On the other hand, technological innovation also affects regional voting through a compositional effect. Over time, more and more workers belong to occupations which are associated with more progressive values. The direction of the net effect of technological innovation on regional voting outcomes is theoretically ambiguous. We advance the existing literature with an empirical analysis of the relative importance of the direct and compositional effects in West Germany. This case is relevant because West Germany (a) is both one of the largest information and communication technology (ICT) markets in the world and home to the overwhelming majority of industrial robots currently installed in Europe, (b) still has the largest manufacturing share of employment compared to other advanced economies, and (c) has recently seen the rapid rise of a radical right party.
Fine-grained labor market data with high levels of geographical disaggregation from the German Institute for Employment Research (IAB) allow for a more detailed regional analysis than most existing accounts. We combine these detailed labor market data with two distinct empirical measures of technological change. First, we use data from the International Federation of Robotics (IFR) to measure county-level exposure to robotization and how it has changed over time. This indicator mainly captures automation in the manufacturing sector. Second, we measure county-level exposure to digitalization in the form of ICT by relying on EU-KLEMS data, which constitutes a distinct form of technological change that also affects the service sector. Following pioneering work in the field (Acemoglu and Restrepo, 2020), identification stems from a shift-share approach, where we use pre-sample-period local employment composition to estimate the exposure to new technologies in a time-varying fashion. We employ a panel model with region and time fixed effects to control for unobserved factors.
Unlike most existing work, our approach allows us to document technology-induced changes in the labor market that are typically invoked to explain political reactions. This is important as all studies on the topic, more or less explicitly, argue that technological change affects political outcomes through material changes at the workplace. In line with previous work in labor economics, our approach reveals that robot adoption and ICT investment shift employment from manufacturing and routine jobs to the service sector. Regions with faster growing technological innovation experience stronger labor market polarization. Robots primarily displace manual routine jobs, whereas ICT investment more powerfully substitutes for cognitive routine jobs. However, importantly, overall employment does not decrease in West German counties with higher exposure to technological change. To the contrary, we find weakly positive net employment effects.1
Our analysis of political outcomes shows that, on average, regions more strongly affected by technological innovation shift their political support toward socially progressive parties. The regional vote shares of center-right and right-authoritarian parties decline as a result of the labor market transitions caused by robot adoption and ICT investment. We provide evidence that these results are indeed the consequence of changing local labor market composition. In line with the literature on occupational preference formation, we demonstrate that a lower number of regional manufacturing jobs is associated with less support for right-authoritarian parties whereas a larger interpersonal service sector is associated with more support for progressive left parties.
1 This finding helps correct a common misperception. Investment in new technologies is actually a sign of a relatively healthy, future-oriented local economy. While it could be imagined that the alternative to robot adoption were thriving manufacturing plants relying on human work, recent research suggests that the more realistic counterfactual scenario is substantial job loss and closed factories as companies without robots fall behind in global competition (Koch et al., 2019).
By highlighting that new technologies not only replace human work (the replacement effect) but also create new jobs (the productivity effect), we challenge rather gloomy perspectives on the political repercussions of technological change. Concerning the important case of West Germany, we show that compositional effects of technology adoption on local labor markets can outweigh the political resentment among workers directly affected by the adverse consequences of technological change. Hence, our results suggest that technological innovation need not result in local political disruption. While we acknowledge that automation contributes to the emergence of anti-establishment forces through electoral support from the segment of society directly exposed to the negative consequences of this process, our results show that, overall, technology-adopting regions do not necessarily turn into right-authoritarian strongholds.
Labor market implications of technological change
The seminal work by Autor et al. (2003) argues that new technologies substitute for routine tasks that follow clearly defined rules. Such rules make routine jobs "codifiable" and hence replaceable by computers or robots. This substitution effect mainly hits workers located at the middle of the income and skill distribution and in particular those in the manufacturing sector. At the same time, technology also has a reinstatement effect (Acemoglu and Restrepo, 2019). New technologies raise productivity, which leads to an increased demand for workers whose skills are complementary to automation. Newly created jobs pertain either to the growing group of white-collar professionals with college education focusing on cognitive and interpersonal tasks (management, education, and cultural and health sector) or to low-skilled manual services (retail, restaurants, and hospitality). Most of them benefit from automation indirectly through lower prices of goods and new demands for their products and services.
While scholars agree that these are the main forces at work, it is still hotly debated whether the substitution or productivity effect dominates. With respect to robotization, an influential paper on the US found that the substitution effect dominates as regions adopting more robots experienced weaker employment growth (Acemoglu and Restrepo, 2020). However, studies focusing on Europe and on Germany in particular found null or slightly positive employment effects (Klenert et al., 2020; Dauth et al., 2021). With regard to ICT, existing work appears slightly less controversial and tends to show that investment in technology has not led to a decline in employment (Biagi and Falk, 2017) but shifted jobs from mid-skill to high-skilled sectors, consistent with ICT-based employment polarization (Michaels et al., 2014). Our own original analysis points in the same direction: although we do find that mid-skilled routine jobs in general, and manufacturing employment in particular, are negatively affected by technological innovation, this decline is more than offset by an increase in work in other sectors.
Political implications of technological innovation
These distributive implications of technological innovation give rise to two distinct and most likely countervailing political implications. On the one hand, studies that focus on the direct effect are interested in the individual-level response to imminent automation exposure. On the other hand, studies on the consequences of economic modernization and occupational change at the aggregate level emphasize the changing composition of postindustrial societies. It should not come as a surprise that these two perspectives offer starkly different views on the prospect of democracy in the age of automation. While the first is often motivated by a concern about the potential substitution of human labor and resulting political disruption, the second provides a much more optimistic outlook, emphasizing economic opportunity and mobility in the rising knowledge economy. Interestingly, the net impact of the two effects remains unclear. It appears that the relative importance of winners and losers is at the root of much of the ongoing debate about the political implications of technological change.
Direct effect
Existing papers studying what we call the direct effect of automation focus on individual-level responses regarding political preferences and voting behavior. Despite the fact that technological change creates both winners and losers, it is safe to say that most existing work investigates the political reactions of workers who stand to lose from technological change. Alluding to historical examples of machine breaking during the Industrial Revolution, pundits and academics alike have raised concerns that the left-behind would turn against the system. In short, it is argued that losers of technological change become more attracted to anti-establishment forces due to their economic decline (Kurer and Palier, 2019; Im et al., 2019). Specifically looking at the impact of robots, Frey et al. (2018) showed an association between robot adoption and anti-incumbent voting in the US, and Anelli et al. (2021) and Milner (2021) provide evidence for a link between local robot penetration and support for right-authoritarian parties across Western Europe.
The political reactions of winners of technological change have received considerably less attention in individual-level research. Gallego et al. (2020) examine political preferences of "ordinary winners" of digitalization in the UK. They show that a majority of the population, but especially high-skilled workers, benefit from ICT capital investment and that these economic benefits translate into more support for moderate incumbent parties, hence creating a stabilizing pro-system force.
Summing up, workers imminently threatened by automation tend to become more supportive of radical parties challenging the political status quo. The direct effect of automation seems to primarily benefit authoritarian-right parties. Voters who benefit at least moderately from the "digital revolution," in contrast, tend to vote for more centrist ideological positions and support incumbent parties. Technological change hence potentially creates political divergence between winners and losers and can contribute to increasing political polarization.
Compositional effect
While research on individuals' susceptibility to automation has concentrated on the downsides of the technological revolution, its upside is at the heart of a different body of work that describes the transition of modern society into "knowledge economies." Starting back in the late 1970s, technological progress has facilitated a transition in advanced capitalist democracies from a manufacturing-based to a more service-dominated economy, with an ever greater reliance on intellectual capabilities (Powell and Snellman, 2004). Influential recent accounts highlight the relevance of a broad (upper) middle class enjoying economic growth, wealth, and opportunity (Iversen and Soskice, 2019).
The emergence of the knowledge economy is intimately linked to the distributional implications of technological change discussed above. Non-routine and service sector jobs, especially higher skilled ones, have expanded at the expense of mid-skilled routine jobs. A changing composition of local labor markets is politically highly relevant because occupations are known as important sites of preference formation (Kitschelt, 1994; Oesch, 2006; Kitschelt and Rehm, 2014). Occupations shape political preferences through both a market logic reflecting vertical divisions in marketable skills and economic self-interest, and an important additional horizontal differentiation in terms of work logic. The literature differentiates between a technical, organizational/bureaucratic, and interpersonal work logic depending on the education level required, the setting of the work process, the relation to authority, the primary type of client relation, and the kind of skills applied. At the risk of simplification, the theory of occupational preference formation thus posits that lower education levels, strict hierarchies, and dealing with objects and files (rather than people) are associated with more authoritarian views. Occupations that require university education, are based on cooperation (rather than hierarchies), and focus on social interactions and culture are associated with more cosmopolitan and progressive values.
Translating this into actual occupational groups and milieus means that mid-skilled, routine occupations in the manufacturing sector are characterized by disproportionate support for authoritarian-right parties. In stark contrast, the growing number of highly educated workers engaging in more analytical and interactive work tend to belong to a milieu which is more left-leaning and cosmopolitan. This transformation of the employment structure has resulted in a decline of traditional class voting: contemporary progressive left parties draw substantial electoral support from an expanding, highly educated middle class (Gingrich and Häusermann, 2015; Oesch and Rennwald, 2018).
It is important to note that the underlying forces changing the regional composition of the labor force go beyond a narrow individual-level mechanism. Of course, workers can retrain and change occupations in response to technology adoption and declining demand for their incumbent jobs. Existing research on intragenerational mobility and political attitudes indeed provides evidence that the theory of occupational preference formation has some traction even within an individual. Changing occupational environments and work logics have been shown to shift political participation (Lahtinen et al., 2017), policy preferences (Ares, 2019), or economic ideology (Langsaether et al., 2022), where the resulting political behavior typically comes to lie between the class of origin and the class of destination. This "strong theory" of occupational preference formation (Kitschelt and Rehm, 2014) is thus one possible channel contributing to changes in the composition of the local labor force resulting in a more progressive regional electorate. However, we do not believe that it is the main channel. Although considerable levels of incremental retraining and adjustment to new technologies happen within firms, the main driver behind the consistent decline in routine work in the aggregate is not individual occupational re-orientation. Individual-level (between-firm) transitions into (better or worse paying) jobs are relatively rare. Instead, routine workers exit into retirement and new labor market entrants find work in different (non-routine) jobs (Cortes, 2016; Kurer and Gallego, 2019). Dauth et al. (2021) show that the largest burden of the reduction in manufacturing employment as a consequence of robotization falls on young labor market entrants rather than on incumbent workers. Importantly, and very much in the spirit of our basic argument, they also show that displacement in manufacturing is overcompensated by offsetting gains in services. An observable implication of this narrative is the rising average age among workers in "declining jobs" (Autor and Dorn, 2009; Kurer and Gallego, 2019), while the average age should be lower in technology-adopting regions because of local labor supply.
A second likely explanation of a changing composition of the local electorate is internal migration. The evident sectoral shift in technology-adopting regions likely attracts a different type of worker with a distinct skill profile and work logic, which contributes to a changing composition of the local labor force that manifests itself also in the electoral arena. Again, Dauth et al. (2021) provide important evidence on the basis of high-quality administrative panel data. The productivity effects of robotization spill over into the service sector and pull workers into this expanding sector from other regions (but see Faber et al. (2022) for the US case, which operates under opposite signs). Below we will provide some original evidence tracing observable implications of different plausible channels contributing to the observed transformation of the labor market composition. Our analysis supports the presence of all three channels but also confirms that an intergenerational transformation of the employment structure appears as a key source of change.
Net effect
The political space in Germany and many other postindustrial democracies is composed of an economic and a cultural dimension. The lion's share of voters as well as the relevant political actors tend to cluster along the diagonal, which is characterized by a progressive, economically left-leaning pole and an authoritarian, economically right-leaning pole, with progressive left parties and authoritarian-right parties representing "polar normative ideals" (Bornschier, 2010). In the online Appendix, we provide a descriptive overview of the contemporary German partisan landscape. From a theoretical perspective, the direct and the compositional effects of automation work as opposing forces. While the direct effect of automation risk and substitution may fuel individual support for the authoritarian right, the accompanying shift in the composition of the labor force fuels party support for more progressive, cosmopolitan left parties. Hence, a priori, technological innovation could affect regional party support in either way. We treat the question of which factor dominates as an empirical issue and strive to provide an answer, at least for the German case, in the analysis below.
Data
Our empirical analysis focuses on West Germany, a highly relevant case characterized by a large manufacturing sector, the largest number of robots anywhere outside Asia, and large investments in ICT over the past decades (see Figure 1). East German regions of the former German Democratic Republic (GDR) are dropped due to their profoundly distinct economic and political trajectories. We apply a regional approach similar in spirit to previous studies in economics (Acemoglu and Restrepo, 2020; Dauth et al., 2021), choosing West German counties (Landkreise und kreisfreie Städte) as the regional unit of analysis (n = 324, NUTS-3). We employ population weights from the Federal Statistics Office to take care of mergers and create a consistent panel based on the current shape of counties.
Robot exposure
To calculate regional robot exposure over time, we use data from the IFR (IFR, 2016). A robot is defined as an "automatically controlled, re-programmable, and multipurpose machine." The yearly data differentiate between 25 industries, mostly in manufacturing. We follow Acemoglu and Restrepo's (2020) approach to exploit information on pre-sample regional employment composition. Robots of a given sector are distributed to regions based on the number of employees in the region working in the sector relative to the nation-wide employment in the sector. To capture robot intensity, i.e., the number of robots per thousand workers, we normalize by the region's total employment in thousands. Finally, to account for the heavily skewed distribution of robots across regions, we apply a logarithmic scale. (The robustness section shows results without this transformation.)

Robot intensity_{r,t} = log( (1 / (E_r / 1000)) * Σ_j Robots_{j,t} * (E_{j,r} / E_j) )   (1)

where E_r is the employment in region r, E_{j,r} is the employment in industry j in region r, Robots_{j,t} is the number of robots in industry j in year t, and E_j is the total employment in industry j across all regions.
Information on local employment composition is derived from administrative data of the Institute for Employment Research (IAB). In constructing the measure, we rely on employment records from a 2 percent sample randomly drawn from the universe of German employees subject to social security (Antoni et al., 2019). To further increase the effective sample size, we also take advantage of the fact that the IAB provides information on the number of co-workers for every randomly selected employee. For every respondent, we have information on employment status, employer, and occupation for any given day for the entire sampling period. An adjacent firm dataset includes information on the firm's industry classification, its number of employees, and geographic information. We aggregate information on all firms in a 10-year window prior to our sample period by region and industry to approximate local employment composition. Employment data are used from the pre-sample period, as later sectoral employment composition might be endogenous to the adoption of robots. In addition, IAB data also provide regional employment shares along various dimensions (e.g., by sector, main task, or skill requirements). These time-varying, disaggregated employment shares allow us to carefully trace distributional implications on the regional level. The measure constitutes a typical Bartik-style shift-share variable where an industry-level shock is apportioned across regions (Bartik and Doeringer, 1993).
ICT investment
We use changes in the ICT capital stock by industry to measure digitalization, drawing on the 2019 release of the EU-KLEMS dataset (Stehrer et al., 2019), which contains yearly measures of output, input, and productivity for 40 industries in a wide range of countries, including Germany, and covers the period 1995-2017. The data are compiled using information from the national statistical offices and then harmonized to ensure comparability. Most importantly for our purposes, the database provides a breakdown of capital into ICT and non-ICT assets. We define the industry-level ICT capital stock as the capital stock in information technologies, communication technology, and software and databases. Based on this, we create a time-varying, industry-specific measure of digitalization using a shift-share approach analogous to our robot intensity measure. More specifically, we calculate the ICT capital stock per worker (in 1000) in region r in year t as

ICT_{r,t} = (1 / E_r) * Σ_j ICT_{j,t} * (E_{j,r} / E_j)   (2)

where E_r is the employment in region r in the base year, E_{j,r} is the employment in industry j in region r in the base year, ICT_{j,t} is the industry ICT capital stock in 1000 in industry j in year t, and E_j is the total employment in industry j across all regions.
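To make the shift-share construction in equations (1)-(2) concrete, the sketch below apportions a national industry-level shock (robot counts or ICT capital) to regions using pre-sample employment weights. The data frames, column names, and file layout are illustrative assumptions, not the actual IAB, IFR, or EU-KLEMS structures.

```python
import numpy as np
import pandas as pd

def shift_share_exposure(emp, shock, per=1000.0, log_transform=False):
    """Apportion an industry-level shock to regions (Bartik-style shift-share).

    emp:   long-format DataFrame with columns ["region", "industry", "employment"]
           from the pre-sample period.
    shock: Series indexed by industry, holding the national shock in year t
           (e.g., robot counts or ICT capital stock).
    Returns a Series of regional exposure per `per` workers (optionally logged).
    """
    nat_emp = emp.groupby("industry")["employment"].sum()              # E_j
    reg_emp = emp.groupby("region")["employment"].sum()                # E_r
    emp_rj = emp.set_index(["region", "industry"])["employment"]       # E_{j,r}
    share = emp_rj.div(nat_emp, level="industry")                      # E_{j,r} / E_j
    apportioned = share.mul(shock, level="industry").groupby(level="region").sum()
    exposure = apportioned / (reg_emp / per)
    return np.log(exposure) if log_transform else exposure

# Hypothetical usage for a single year t:
# robot_intensity_t = shift_share_exposure(emp_presample, robots_by_industry_t,
#                                           per=1000.0, log_transform=True)
# ict_per_worker_t  = shift_share_exposure(emp_presample, ict_capital_by_industry_t,
#                                           per=1.0)  # per worker, capital measured in 1000s
```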
Figure 2 shows the spatial distribution of both measures of technological change per county for 2017. The left panel shows that most robots can be found in regions dominated by the automotive industry: for example, Volkswagen has its headquarters in Wolfsburg, Audi in Ingolstadt, Opel in Gross-Gerau, and Dingolfing-Landau and Emden are major production sites of BMW and Volkswagen, respectively. The right panel shows that ICT is concentrated in the major service-sector business hubs of Munich, Frankfurt, and Stuttgart. This pattern suggests that we capture two distinct forms of technological change. The correlation between the two measures is indeed low (0.12).
Elections
For each county we gathered official election results for all federal, state, and European elections between 1994 and 2017, which yields 7 federal, 40 state, and 5 European elections. If multiple elections were held in the same year, we only consider one of them, preferring federal over state over EU elections (the order of voter turnout), which gives a total of 4277 county-election pairs. We consider all parties currently represented in national parliament: Grünen (greens), Linke (leftist), SPD (social democrats), FDP (pro-market), CDU-CSU (Christian democrats), and the Alternative für Deutschland (AfD, right-authoritarian). Since the AfD was only founded in 2013, we pool it with other right-authoritarian parties (NPD, DVU, Republikaner).
According to expert judgements (see online Appendix), [...] one might query an uncritical classification of the Left party within this group. Some would rather classify the Left party as a populist or radical left party that arguably attracts groups of angry voters who do not resemble the successful, skilled workers in the growing cognitive professions. We will come back to this question in the empirical analysis.
Empirical approach
We employ a two-way fixed effect panel model to capture the effect of new technologies, measured as robotization or ICT investment, respectively, on economic and political outcomes:

Y_{r,t} = β Technology_{r,t} + η_r + μ_t + ε_{r,t}

The dependent variable Y_{r,t} is a party vote share or an employment outcome in region r in year t, which is regressed on Technology_{r,t}, measured as (a) the log number of robots per 1000 workers or (b) the ICT capital stock per worker in 1000. The model includes region fixed effects η_r and year fixed effects μ_t. As robustness checks, we will further add a vector of control variables in later specifications. These specifications have sometimes been presented as "generalized" versions of the canonical diff-in-diff with two time periods and two groups, but recent research has highlighted that one has to be careful with a causal interpretation of the aggregated parameters (e.g., Callaway and Sant'Anna, 2021). The two-way fixed-effects estimator has been shown to equal a weighted average of all possible two-group/two-period diff-in-diff estimators in the data (de Chaisemartin and D'Haultfoeuille, 2020; Goodman-Bacon, 2021). A causal interpretation hence rests on the assumptions of parallel trends and constant treatment effects over time.
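A minimal sketch of such a two-way fixed-effects regression, using the linearmodels package on an illustrative county-year panel (the file and variable names are ours, not the authors' replication code):

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical county-year panel with vote shares and the two exposure measures.
df = pd.read_csv("county_panel.csv")
df = df.set_index(["county", "year"])  # entity and time index expected by PanelOLS

# Y_rt = beta * Technology_rt + eta_r + mu_t + e_rt, SEs clustered by county
mod = PanelOLS.from_formula(
    "green_vote_share ~ robot_intensity + ict_per_worker + EntityEffects + TimeEffects",
    data=df,
)
res = mod.fit(cov_type="clustered", cluster_entity=True)
print(res.summary)
```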
Political outcomes
In line with our theoretical point of departure, we first turn our attention to political outcomes and look at "reduced-form" specifications modeling the direct relationship between regional technological adoption and regional election outcomes. Figure 3 plots the estimated marginal effects of regional robot intensity and ICT investment, respectively, on regional electoral vote shares of all major German parties. The reported coefficients each stem from a separate regression. The first specification only includes one of the technological change measures (blue triangles) and the second includes both measures simultaneously (red circles). Both specifications include a region and an election fixed effect. The results show that regions exposed to more intense technology adoption generally shifted their electoral support to the progressive-left of the political spectrum. For ICT, the patterns are consistent and robust. We find that the green party Die Grünen and the leftist party Die Linke were the parties that gained most votes in digitalizing regions. For the social-democratic SPD, we find a positive but imprecisely estimated effect. On the other hand, the center-right CDU/CSU and the authoritarian-right party AfD received less support. The estimated effect for the pro-market party FDP is marginally negative. These findings are not affected when controlling for the effect of regional robotization. These reduced-form models focusing on ICT investment hence provide evidence that the compositional effect, which favors progressive-left parties, seems to dominate the direct substitution effect at the regional level.
For robotization, the overall pattern is similar but much noisier. When considering the effect of robotization in isolation, we find the same gradient across the political spectrum: progressive-left parties gain, whereas conservative and authoritarian-right parties tend to receive fewer votes when a region adopts robots. However, only the effect for the progressive-left party Die Grünen is statistically significant. When controlling for the parallel influence of ICT, the marginal effects of robotization hover around zero. One might interpret this as evidence that, with regard to robotization, the direct effect favoring authoritarian-right parties and the compositional effect favoring progressive-left parties are in balance. This contrasts with previous work claiming that robotization leads to an unambiguous shift toward the right of the political spectrum. However, given the large confidence intervals, we do not want to over-interpret potentially countervailing effects, which might simply reflect a noisy estimation process.
In terms of effect magnitude, our baseline models predict that a one standard deviation increase in the log number of robots per thousand workers (+35% more robots) is associated with an increase of the Grünen vote share of 0.15 percentage points. In itself, this is a relatively modest effect, but considering that the average region increased its number of robots by 270 percent between 1994 and 2017, the accumulated effect for Die Grünen is an estimated increase of the vote share by 0.71 percentage points. This is significant for a party that typically attracted less than 10 percent of the vote. Similarly, an increase of the ICT capital stock by one within-region standard deviation (+520 per worker) is associated with an increase of the vote for Die Grünen by 0.19 percentage points.
We run a series of robustness checks (see online Appendix section A.2 for details). First, in addition to the two-way fixed effects, we control for trade exposure and GDP growth. Furthermore, we use an instrumental variable (IV) approach, where we instrument technology adoption in Germany with values from other European countries. Considering ICT investments, effects are stable or even stronger in the case of IV results. Again, the robotization findings are not very stable, which is why the above results should be interpreted with caution.
Note to Figure 3: The graph shows the estimated marginal effect of (a) the regional log number of robots per thousand workers and (b) the regional ICT capital stock per worker in 1000 on regional party vote shares in percentage points (see columns (1) and (3) of Tables A.1-A.12). Standard errors are clustered at the county level. Bars represent 95 percent confidence intervals.
Understanding compositional effects and underlying mechanisms
The remainder of the empirical exercise makes use of fine-grained individual and regional labor market data to trace the underlying distributive implications of regional technology adoption. We first empirically confirm that the regional employment composition indeed shifts toward higher skilled and less routine occupations. Second, we show that the disappearing jobs are associated with the conservative and authoritarian-right vote, whereas the newly appearing jobs are associated with voting for more progressive parties. In sum, the analysis of intermediary distributive mechanisms on labor markets supports our conjecture that technological change results in a relative growth of occupations that are generally more supportive of progressive left parties.
Regional-level economic outcomes
We first turn our attention to the economic effects of technology adoption by simply switching the dependent variable from voting results to labor market indicators. In line with much of the existing literature in labor economics, we find that robot adoption and ICT investment affect the composition of the labor force but do not result in net employment loss. Both forms of technological innovation (if anything) marginally decrease manufacturing employment. Importantly, this decline in manufacturing is more than offset by an increase in non-manufacturing (service) sector employment. The sum of both coefficients represents the effect of robot exposure on total employment relative to population (see Figure 4).
The main reason for an increase in aggregate employment is that the fall of routine jobs is often accompanied by disproportionate job growth in non-routine occupations (de Vries et al., 2020). Indeed, when looking at labor shares of task groups instead of sectors, we find that technology adoption increases non-routine cognitive jobs at the cost of routine jobs (see Figure 5). In line with our intuition, robots have a stronger replacement effect with respect to routine manual jobs, whereas ICT investment substitutes in particular for routine cognitive occupations. The share of low-skilled manual non-routine jobs is not significantly affected by technology adoption in Germany. This pattern is largely confirmed when looking at labor shares by skill group. Technology-adopting regions experience a strong increase in the share of high-skilled jobs and stagnation or even decline in mid- and low-skill jobs (see Figure 6).
Summing up, we show that regions with stronger exposure to technology adoption experience a polarized upgrading of labor markets. While overall employment is not negatively affected, the share (and number) of jobs in the semi-skilled and manufacturing domain decreases markedly. The observed pattern in which technology adoption shifts the sectoral and task-specific composition of the local labor force could be a result of at least three distinct mechanisms: (1) the incumbent labor force can retrain and change occupations and sectors, (2) young workers may enter different jobs than those exiting the local labor market, or (3) a changing labor demand may attract workers from other regions. While it is beyond the scope of this study to offer a definite explanation of these different channels, we collected additional regional-level indicators to trace observable implications of each possible mechanism (see Appendix A.4.2 and Figure A.3 for details). Based on these auxiliary analyses, we conclude that the narrow individual-level mechanism is certainly not the only channel contributing to a changing labor market composition. Both intergenerational occupational upgrading and migration play a role, too. The results suggest that ICT investment in particular seems to attract (young, skilled) workers from other regions, whereas robotization is more strongly related to intergenerational transitions into other sectors with a more stable population size and local skill mix. These findings are not in themselves groundbreaking, but align with previous work on the labor market effects of automation. Nevertheless, they provide a vital first piece of evidence to strengthen our argument that compositional effects play an important role in understanding how automation affects political preferences at a regional level. Both mechanisms contribute to a lower average and median age of the local labor force. This aspect of a changing composition of the local electorate may add to our finding that technology-adopting regions tend to lean toward the political left, beyond an explanation based on occupational preference formation. However, note that, in contrast to the popular narrative, young voters in Europe are not generally more "socialist" than older voters. While younger cohorts are much more socially liberal, they are, if anything, more economically conservative, i.e., more opposed to government spending and higher taxes (O'Grady, 2022). An explanation focusing on the impact of technology on the local age structure, in principle fully compatible with our arguments, is thus unlikely to account for much of the observed shifts in political support. Still, it is possible that parties that are particularly appealing to young, socially liberal voters benefit from this side effect of technology adoption on top of a changing occupational structure.
Regional-level relationship between occupation and vote choice
According to the theory of occupational preference formation, the documented changes in labor market composition should shift political support toward progressive parties. In order to corroborate these underlying expectations, the following analyses zoom in on the relationship between regional employment composition and party vote shares. For this, we focus on the results of the 2017 Federal Elections (the last year in our sample) and regress the county-level party vote share on the local employment share as of 2017. For each pair of party p and employment share s (manufacturing share, routine worker share, etc.), we run a separate regression of the following kind:

VoteShare^p_r = α + β EmploymentShare^s_r + ε_r

where VoteShare^p_r is the vote share of party p in region r, which is regressed on the employment share of type s in region r. The results presented in Figure 7 show that a higher non-manufacturing (service) employment to population ratio is associated with more votes for progressive-left parties and less support for conservative and right-authoritarian parties. This closely resembles the effect of technological change on voting outcomes. Conservative and right-authoritarian parties perform particularly well where the manufacturing employment to population ratio is high (panel 7a). Similarly, regional labor markets characterized by a high share of cognitive non-routine occupations display more support for cosmopolitan-left parties and less support for conservative and authoritarian-right parties. Conversely, regions with a large share of manual workers (both routine and non-routine) tend to be less supportive of progressive left parties and more supportive of authoritarian-right parties (panel 7b). Similar patterns emerge when we look at the regional skill distribution (panel 7c).
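A sketch of one such county-level regression for 2017, again with hypothetical file and column names, using statsmodels:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical 2017 cross-section: one row per county with party vote shares
# and employment shares (both in percent).
cs = pd.read_csv("counties_2017.csv")

# VoteShare^p_r = alpha + beta * EmploymentShare^s_r + e_r,
# illustrated here for the AfD vote share and the manufacturing employment share.
res = smf.ols("afd_vote_share ~ manufacturing_share", data=cs).fit(cov_type="HC1")
print(res.params["manufacturing_share"], res.bse["manufacturing_share"])
```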
Individual-level relationship between occupation and vote choice
As a final step, we analyze party preferences of different occupational groups using individual-level data from the German Socio-Economic Panel (SOEP). This allows us to test more directly how local labor market composition relates to election results. To do so, we recreate the sectoral and occupational groups from the previous analysis as closely as possible. Therefore, we considered all respondents between 18 and 65 for the years 1994-2018 (n = 323,000), classified them into manufacturing and non-manufacturing and by main task, and created three education groups. Figure 8 plots the party support of different occupational groups over time. We control for the age of individual respondents to take into account the above-discussed side effect with respect to the regional age structure in technology-adopting regions. To facilitate the visualization, we grouped responses in 5-year intervals.
The findings confirm a few common priors of the relevant literature (e.g., Oesch, 2008; Kitschelt and Rehm, 2014). First, we find that the progressive-left party Die Grünen is mainly supported by non-manufacturing (service sector) workers, whereas manufacturing workers became more and more supportive of conservative and authoritarian-right parties over the last years (panel 8a). Second, we observe that cognitive non-routine workers disproportionately support the progressive-left party Die Grünen, whereas conservative parties are mainly supported by routine workers and authoritarian-right parties draw most support from manual occupations (both routine and non-routine) (panel 8b). Finally, we find a strong education gradient. Highly educated workers are the core constituents of the green party (and the pro-market FDP), whereas conservative and far-right parties find most support among middle and low educated workers (panel 8c). This further corroborates the idea that those occupational groups which expand due to technological change are more supportive of progressive-left parties, whereas conservative and authoritarian-right parties find the size of the occupational groups that mainly supported them to be in decline. A theory of occupational preference formation in tandem with a gradually changing composition of local labor markets hence provides a reasonable explanation of why technological innovation can shift the regional electoral landscape to the progressive left.
Discussion
In this paper, we demonstrate that, on average, technological innovation increased the regional vote shares of cosmopolitan-left parties, whereas right-authoritarian parties received fewer votes in affected regions. The increased prevalence of robots and ICT changes the local labor market composition and shifts the employment structure from routine to non-routine jobs. This shift has important indirect consequences in that it opens more jobs for highly educated, high-skilled workers who often work on cognitive interactive tasks. Such "children of digitalization" gravitate toward the cosmopolitan left, whereas routine workers in manufacturing, whose jobs were, as we show, partly replaced by robots, often feel attracted by the promises of right-wing populism. Hence, the common narrative that technological change and robotization will first and foremost result in political disruption may provide an incomplete perspective.
How can we reconcile our findings with previous work that showed evidence in favor of the disruption narrative? Our study finds that regions exposed to robotization and digitalization tend to shift employment away from manufacturing and routine jobs, which in turn leads to less support for right-authoritarian parties. Hence, we would not expect that right-authoritarian parties make the strongest inroads in strongly technology-adopting regions. Here, the composition of local labor markets changes more substantially than in regions less exposed to technological change and economic modernization. And yet, it is important to repeat that we do not claim that technological innovation is unrelated to the recent surge in right-authoritarian and populist voting in Germany and elsewhere. The mounting evidence that automation increases right-authoritarian support among individuals or occupational groups that are imminently affected, or threatened, by automation is entirely plausible and convincing. However, we wish to highlight that the broader compositional changes in local labor markets work in the opposite direction and may well dominate the political response by those disaffected voters who lose out in the process of economic modernization.
Hence, we can resolve the apparent conflict by a conceptual differentiation between a compositional (regional) and a direct (individual) effect. This differentiation has important implications for future research, as it highlights the pros and cons of using a regional approach versus an occupational/individual-level approach. The disadvantage of our regional analysis is its inability to isolate those workers directly threatened by technological innovation. Put differently, some disruptive political consequences of technological change "might be masked [...] by the aggregate welfare gains brought about by automation" (Anelli et al., 2021, p. 4). This is exactly right: our approach inherently bundles winners and losers within the unit of analysis. Depending on the workers' skills and occupation, the adoption of technology can have either positive or negative effects, even if they live in the same region.
On the positive side, a regional approach allows us to capture the compositional effect of changing local labor markets, i.e., precisely the aforementioned welfare gains in the aggregate. Recall that a focus on within-individual changes lets us focus on the direct effect but, by design, neglects the compositional effect. Positive indirect effects of technological innovation, such as the creation of new jobs, can only be captured by a regional approach. Also, the fact that new generations joining the labor market enter into different occupations and hold different political attitudes than previous generations is hidden when focusing on within-individual changes. The academic literature has shown that technological change mostly shapes employment composition through generational turnover, rather than by directly displacing affected workers. Hence, in the long term, the compositional effect may be considered more important and more consequential in political terms.
Figure 1. Evolution of manufacturing share, robot penetration, and ICT. Note: The graph shows (a) the share of employees working in the manufacturing sector, (b) the number of robots per thousand employees, and (c) the ICT capital stock per worker in 1000. Compared to other advanced economies, West Germany still has a large manufacturing sector, while robots are already playing an important role. Digitalization also plays an important role in West Germany. Sources: IFR, ILO, EUKLEMS, own calculations.
Figure A.1), the Greens, the SPD, and the Left party all fall into the camp of what we broadly call progressive-left parties, which we expect to benefit from local technology adoption. Assessments based on party manifestos provide a very similar overall picture (see, e.g., Burst et al., 2021). Despite such "objective" mappings, one
Figure 2. Regional distribution of new technologies. Note: The graph shows (a) the estimated number of robots per thousand workers and (b) the ICT capital stock per worker for 324 West-German regions (Kreise und kreisfreie Städte) in 2017. Top 5 cities are labeled. Analogous to our measure of robot intensity in the main analysis, the color scale is in logs.
Figure 3. Region-level exposure to technological change and party vote shares. Note: The graph shows the estimated marginal effect of (a) the regional log number of robots per thousand workers and (b) the regional ICT capital stock per worker in 1000 on regional party vote shares in percentage points (see columns (1) and (3) of Tables A.1-A.12). Standard errors clustered at the county level. Bars represent 95 percent confidence intervals.
Figure 4. Region-level exposure to robots and employment effects. Note: Estimated coefficients of the effect of the log number of robots per thousand workers on employment to population ratios (in percent) after controlling for region and year fixed effects. See column (1) of Tables A.13-A.15. Black bars represent 95 percent confidence intervals.
Figure 5. Technological change and regional task composition. Note: All variables are expressed as changes in regional employment shares in percentage points such that coefficients sum up to zero. Bars represent 95 percent confidence intervals, where standard errors are clustered at the commuting zone-year level.
Figure 6. Technological change and regional skill requirements. Note: All variables are expressed as changes in regional employment shares in percentage points, such that coefficients sum up to zero. Bars represent 95 percent confidence intervals, where standard errors are clustered at the commuting zone-year level.
Figure 7. Cross-sectional correlations of regional employment shares and party vote shares in the 2017 Federal Elections. Note: Cross-sectional regression of regional party vote shares in the 2017 federal elections on regional employment shares without controls (n=324 counties). The estimated coefficients are proportional to raw correlations. Bars represent 95 percent confidence intervals.
Figure 8. Party support of different segments of the workforce over time. Note: Graphs show self-reported party support of different occupation groups over time, accounting for the age of respondents (clustered into 5-year intervals). Bars represent 95 percent confidence intervals. | 2023-02-08T16:20:09.758Z | 2023-02-06T00:00:00.000 | {
"year": 2023,
"sha1": "8ef601f6803c7da06fb6f139f6fbcdfe1f1c52e0",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/6934AF3302F0C9D957C8D1FCC4D5D8A1/S2049847022000620a.pdf/div-class-title-how-technological-change-affects-regional-voting-patterns-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0340f6ff68705285164847b9658ff2820b5af58e",
"s2fieldsofstudy": [
"Political Science",
"Economics"
],
"extfieldsofstudy": []
} |
56462152 | pes2o/s2orc | v3-fos-license | Part A : The Development of mI SMART , a Nurse-Led Technology Intervention for Multiple Chronic Conditions
Background: The treatment of Multiple Chronic Conditions (MCC) is complex for both patients and providers. Used as integrated tools, technology may decrease complexity, remove the barrier of distance to obtain care, and improve outcomes of care. A new platform that integrates multiple technologies for primary health care called mI SMART (Mobile Improvement of Self-Management Ability through Rural Technology) has been developed. The purpose of this paper is to present the development of mI SMART, a nurse-led technology intervention for treating MCC in primary care. Methods: The creation of mI SMART was guided by the model for developing complex nursing interventions. The model suggests a process for building and informing interventions with the intention of effectiveness, sustainability, and scalability. Each step in the model builds from and informs the previous step. Results: The process resulted in the integrated technologies of mI SMART. The system combines a HIPAA-compliant, web-based structure of mHealth sensors and mobile devices to treat and monitor multiple chronic conditions within an existing free primary care clinic. The mI SMART system allows patients to track diagnoses, medications, and lab results, receive reminders for self-management, perform self-monitoring, obtain feedback in real time, engage in education, and attend visits through video conferencing. The system displays a record database to patients and providers that will be integrated into existing Electronic Health Records. Conclusion: By using the model for developing complex nursing interventions, a multifaceted solution to clinical problems was identified. Through modeling and seeking expert review, we have established a sustainable and scalable integrated nurse-led intervention that may increase access and improve outcomes for patients living in rural and underserved areas. The first trial of mI SMART has been completed and evaluated for feasibility, acceptability, and effectiveness in persons in rural areas living with multiple chronic conditions.
Introduction
People living with chronic conditions suffer from poor health, disability, and premature death [1]. It is estimated that 117 million (half) of US adults have a chronic condition, and 1 in 4 adults have Multiple Chronic Conditions (MCC) [2]. In addition, most health-care expenditures in the United States are due to chronic conditions [3]. Due to the complexity of treatments, conflicting advice for each condition, and the co-existing lack of the social, health and behavioral determinants of health due to disparities, individuals with MCC often have difficulty in achieving treatment goals [4]-[7]. Therefore, simplifying treatment regimens, managing conflicting advice, and assessing determinants of health, such as addressing the availability of resources, access to healthcare, and improved diet and physical activity, are imperative to improving outcomes and self-management ability.
Healthcare technology is developing rapidly and may offer an opportunity to enhance care. Use of technology can improve healthcare systems' ability to monitor patients without travel, coordinate the services of multiple healthcare providers, and manage patient symptoms in real time [5] [8]. In addition, technology interventions augment care and management of MCC by improving access to care and improving patient outcomes. The subsequent long-term effects of technology use lead to diminished health disparities and reduced healthcare costs [9]. However, much of the available healthcare technology intended to treat MCCs is developed by private industry and used in non-academic settings. In the current literature, widespread reporting of the development process is not present. Therefore, questions remain about acceptability, feasibility and efficacy across populations.
Some examples of existing patient-facing technology interventions include: access to information from electronic medical records, requesting medication refills and appointments through automated systems, communicating with healthcare providers using secure messaging systems, managing specific chronic conditions using telehealth, using personal health records to track progress, and interacting with on-line support groups using social media [10]. The available literature on these individual technology interventions is promising and does provide limited evidence of improving outcomes, cost effectiveness and cultural relevance [11]. However, the lack of integration of data into existing systems and healthcare records, the lack of reimbursement for technology services, and the necessity to access multiple technology interventions individually increase complexity for both healthcare systems and patients and decrease widespread use for patients with MCC.
Nurses are uniquely trained to address the complexity of caring for the whole person as an individual in the context of their family, community, and environment. The Institute of Medicine (IOM) calls nurses to practice to the full extent of their education and to be partners in redesigning health care in the United States [12]. As patient advocates who store, maintain and communicate data and information [13], nurses are trained to use validated processes to develop complex interventions, to think about patients within systems, and to implement and improve interventions [14]. The purpose of this paper is to present the development of mI SMART (Mobile Improvement of Self-Management Ability through Rural Technology), a nurse-led technology intervention for treating MCC in primary care.
Methods for Intervention Development
The model for developing complex nursing interventions [15] guided the overall process of intervention development. The model was developed and refined from the Medical Research Council (MRC) framework for developing complex interventions [16] and other guidelines that contribute additional guidance to inform the development of nursing interventions. The model suggests a process for building and informing interventions with the intention of effectiveness, sustainability, and scalability. Each step in the model builds from and informs the previous step. The steps include: problem identification, practice analysis, identification of the overall objective, identification of theory or key principles, building and planning, modelling and seeking expert review, and developing the study protocol.
Problem Identification & Practice Analysis
The first phase of mI SMART development, problem identification, was undertaken through a needs, practice and policy analysis. The identified problem originally started with a concern for the poor outcomes of persons with diabetes in a rural clinic serving uninsured and underinsured individuals. A retrospective review of the combined pharmacy records, front desk scheduling system, and Electronic Health Record (EHR) found that diabetes patients who lived further than 30 miles from the clinic and had more than one chronic condition missed more appointments than those who lived closer and had only one chronic condition [17]. Based on these findings, the research team decided that an intervention was needed that addressed MCCs and overcame the disparity of access to care due to distance. A search of the empirical evidence related to eliminating the burden of distance revealed the use of mHealth tools as a potential intervention to improve care [10]. In addition, it was also noted that policies for federal payers were improving and Medicare pays for services that provide live, interactive videoconferencing [18]. Hence, the decision was made to begin to develop an intervention that incorporated mHealth tools with live videoconferencing.
Identification of the Overall Objective
The overall objective of the mI SMART project is to improve quality of life and outcomes of care for rural and underserved individuals living with multiple chronic conditions. This objective was determined using a series of activities. First, the thoughts and ideas of nurses, including in-patient, out-patient, and advanced practice nurses, as well as researchers, were sought. In addition, conversations intentionally included physicians, pharmacists, social workers, health educators, computer scientists, engineers, and private industry leaders. This objective will be accomplished by improving access, self-management ability, and communication with use of technology within a healthcare system that is trusted by patients.
Identification of the Theory or Key Principles
A substantive review of potential foundational theories was conducted. Two models came forward as having relevance and utility in the development of mI SMART. The first model, the Quality Health Outcomes Model, was selected for its broad concepts that could be conceptualized and adapted based on the system, patient population, and evolving intervention [19]. The second model chosen, the Chronic Care Model, assists in understanding how to build an intervention that changes healthcare delivery at the healthcare system level [20].
The Quality Health Outcomes Model was developed by the American Academy of Nursing's Expert Panel on Quality Health Care in 1996 as an expansion of Donabedian's structure-process-outcome framework. The four major concepts are: system, interventions, patients, and outcomes. The model is a dynamic framework that recognizes the reciprocal relationship that occurs between patients, the system where care is provided, and interventions [21]. Outcomes are linked to the interactions of a patient with the healthcare system and with healthcare interventions that are focused on the individual, family, or community [19]. Interventions are affected by both system and patient characteristics in producing desired outcomes.
The Chronic Care Model consists of six interrelated system changes meant to make patient-centered, evidence-based care easier to accomplish [22]. The major concepts in the model are: health system, community support, self-management support, decision support, clinical information systems, and delivery system design [23]. This model is operationalized through a prepared healthcare team delivering planned interactions, self-management support with effective use of community resources, integrated decision support, and supportive information technology (IT), which are designed to work together to strengthen the provider-patient relationship, improve communication and improve health outcomes [23]. Therefore, the development of mI SMART is based on the major concepts and underlying assumptions of the Quality Health Outcomes Model and the Chronic Care Model.
Building, Planning & Modeling
Based on the guiding frameworks, problem identification and overall goals of the intervention, the first version of the mI SMART platform was modeled and a basic plan for implementation was developed. The expertise of an information technology specialist was sought to complete the programming involved. While the initial intervention was planned to be implemented in a free clinic in a rural location, the thought of expanding the intervention was a consideration. The decision was made to make the platform web-based as opposed to a system-specific application (app). Hence, the first iteration of mI SMART was a web-based, HIPAA-compliant system that includes the use of self-monitoring devices, video conferencing capabilities, real-time feedback, automatic patient-specific reminders for self-monitoring and medication, links and video education for chronic illness, and secure messaging.
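As a purely illustrative sketch, and not the actual mI SMART implementation, the snippet below shows one way patient-facing records and reminders of the kind described above might be represented in a web-based platform; all field names and values are hypothetical.

```python
# Hypothetical data-model sketch for a platform combining self-monitoring,
# reminders and patient records; not the real mI SMART code or schema.
from dataclasses import dataclass, field
from datetime import time
from typing import List

@dataclass
class Reminder:
    message: str          # e.g., "Check blood glucose"
    send_at: time         # local time of day the reminder is pushed
    channel: str = "app"  # app notification, SMS, etc.

@dataclass
class PatientRecord:
    patient_id: str
    diagnoses: List[str]
    medications: List[str]
    reminders: List[Reminder] = field(default_factory=list)
    readings: List[dict] = field(default_factory=list)  # device uploads (BP, glucose, weight)

record = PatientRecord(
    patient_id="demo-001",
    diagnoses=["type 2 diabetes", "hypertension"],
    medications=["metformin", "lisinopril"],
    reminders=[Reminder("Check blood glucose", time(8, 0))],
)
```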
Seeking Expert Review
After a model of the mI SMART platform was developed, expert review was sought by holding focus groups in various settings [11]. The participants invited to the focus groups were intended to represent both the first users of the system and future potential users. The mI SMART platform was demonstrated to each group and specific questions were used to elicit conversation and feedback. After participation in the focus groups, participants were asked to complete surveys. Focus groups were attended by 37 individuals and surveys were completed by 29 healthcare team members, including 7 males and 22 females, age range 23 to 62. The participants included: Medical Assistants, Registered Nurses, front desk staff, Physicians, Nurse Practitioners, Physician's Assistants, Pharmacists, Social Workers, Administrators, and Board Members. The focus group participants identified perceived obstacles of patient use as: ability, willingness, and time. System obstacles were identified as: lack of integration, lack of reimbursement, and cost. Many positive attributes of mI SMART were identified and included: capability for virtual visits, readability, connectivity, user-friendliness, ability to capture biophysical measures, enhanced patient access, and incorporation of multiple technologies. Providers suggested increasing capability for biophysical and symptom monitoring for more common chronic conditions.
Developing the Study Protocol
Based on the feedback provided in the focus groups, changes were made to the mI SMART system prior to developing the study protocol. The system combines a HIPAA-compliant, web-based structure of mHealth sensors and mobile devices to treat and monitor multiple chronic conditions. The mI SMART system allows patients to track diagnoses, medications, and lab results, receive reminders for self-management, perform self-monitoring, obtain feedback in real time, engage in education, and attend visits through video conferencing. The system displays a record database to patients and providers that will be integrated into existing Electronic Health Records. Once these changes were made, the study protocol was developed and funding for the project was sought and obtained. The first trial of the mI SMART platform has been completed. The model for developing complex nursing interventions will be used to guide the refinement of the mI SMART platform prior to a larger randomized trial.
Discussion
The use of the model for developing complex nursing interventions was essential to the successful development of a robust, adaptive and empirically grounded technology platform (see Figure 1 for the operationalization of the Model for Developing Complex Interventions in Nursing). The mI SMART platform is an improvement over the currently available systems. Combining multiple health sensors, education, reminders, video conferencing, lab results, and secure messaging removes the necessity to access multiple technology interventions individually. This combination of services decreases the complexity of care for both healthcare systems and patients. The intervention has been implemented with 30 adult participants living in a rural area who have MCCs and are experiencing difficulties attending clinic visits. Feasibility and acceptability for both the patients and healthcare providers were evaluated and reported in Part B. In addition, efficacy of the intervention was evaluated with patient outcomes, which will be reported in a separate paper. The first implementation of mI SMART was targeted to a specific population and clinic. Future use of the mI SMART platform should be adapted to other populations and practice settings. The content of the platform in its present iteration is reflective of current empirical evidence about the use of technology and a specific needs and practice analysis, and has been adapted based on feedback from a wide, but not exhaustive, variety of healthcare providers.
Conclusion
Our long-term objective is to create a new and substantive departure from the status quo by integrating multiple mHealth tools into one platform within an existing rural health clinic, going beyond traditional office visits and shifting to real-time exchanges between patients and providers across geographical boundaries. An efficacious shift in the traditional rural healthcare delivery paradigm to one that uses technology is expected to result. The model for developing complex nursing interventions is currently being used to update mI SMART based on patient and provider feedback, and integration into the EHR is planned. The initial feasibility and acceptability of the mI SMART platform is published in Part B of this article. Other plans include fully disseminating the results of the first implementation of mI SMART, pursuing commercialization and seeking funding for the larger trial.
Figure 1. Operationalization of the model for developing complex interventions in nursing. This figure is based on the model found in [15]. | 2018-12-18T18:17:27.338Z | 2016-04-19T00:00:00.000 | {
"year": 2016,
"sha1": "560796f3ed1825a29bb7150f337bab641bd6ecc6",
"oa_license": "CCBY",
"oa_url": "https://www.scirp.org/journal/PaperDownload.aspx?paperID=66015",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "560796f3ed1825a29bb7150f337bab641bd6ecc6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257833470 | pes2o/s2orc | v3-fos-license | ChatGPT in Healthcare: A Taxonomy and Systematic Review
The recent release of ChatGPT, a chat bot research project/product of natural language processing (NLP) by OpenAI, stirs up a sensation among both the general public and medical professionals, amassing a phenomenally large user base in a short time. This is a typical example of the 'productization' of cutting-edge technologies, which allows the general public without a technical background to gain firsthand experience in artificial intelligence (AI), similar to the AI hype created by AlphaGo (DeepMind Technologies, UK) and self-driving cars (Google, Tesla, etc.). However, it is crucial, especially for healthcare researchers, to remain prudent amidst the hype. This work provides a systematic review of existing publications on the use of ChatGPT in healthcare, elucidating the 'status quo' of ChatGPT in medical applications, for general readers, healthcare professionals as well as NLP scientists. The large biomedical literature database PubMed is used to retrieve published works on this topic using the keyword 'ChatGPT'. An inclusion criterion and a taxonomy are further proposed to filter the search results and categorize the selected publications, respectively. It is found through the review that the current release of ChatGPT has achieved only moderate or 'passing' performance in a variety of tests, and is unreliable for actual clinical deployment, since it is not intended for clinical applications by design. We conclude that specialized NLP models trained on (bio)medical datasets still represent the right direction to pursue for critical clinical applications.
Introduction
In November 2022 a chat bot called ChatGPT was released. According to itself it is 'a conversational AI language model developed by OpenAI. It uses deep learning techniques to generate human-like responses to natural language inputs. The model has been trained on a large dataset of text and has the ability to understand and generate text for a wide range of topics. ChatGPT can be used for various applications such as customer service, content creation, and language translation'. Since its release, ChatGPT has taken humans by storm and its user base is growing even faster than the current record holder TikTok, reaching 100 million users in just two months after its launch. ChatGPT is already used to generate textual content, presentations and even source code for all kinds of topics. But what does that mean specifically for the healthcare sector? What if the general public or medical professionals turn to ChatGPT for treatment decisions? To answer these questions, we will look at published works that already reported the usage of ChatGPT in the medical field. In doing so, we will explore and discuss ethical concerns when using ChatGPT, specifically within the healthcare sector (e.g., in clinical routines). We also identify specific action items that we believe have to be undertaken by creators and providers of chat bots to avoid catastrophic consequences that go far beyond letting a chat bot do someone's homework. This review shows that William B. Schwartz's 1970 description of conversational agents that will serve as consultants by enhancing the intellectual functions of physicians through interactions [94] is as up-to-date as ever.
Even though the application of natural language processing (NLP) in healthcare is not new [34,101,111,77], the recent release of ChatGPT, a direct product of NLP, still generated a hype in artificial intelligence (AI) and sparked a heated discussion about ChatGPT's potential capability and pitfalls in healthcare, and attracted the attention of researchers from different medical specialities. The sensation could largely be attributed to ChatGPT's barrier-free (browser-based) and user-friendly interface, allowing medical professionals and the general public without a technical background to easily communicate with the Transformer- and reinforcement learning-based language model. Currently, the interface is designed for question answering (QA), i.e., ChatGPT responds in texts to the questions/prompts from users. All established or potential applications of ChatGPT in different medical specialities and/or clinical scenarios hinge on the QA feature, distinguished only by how the prompts are formulated (Format-wise: open-ended, multiple choice, etc. Content-wise: radiology, parasitology, toxicology, diagnosis, medical education and consultation, etc.). Numerous publications featuring these applications have also been generated and indexed in PubMed since the release. This systematic review dives into these publications, aiming to elucidate the current state of employment, as well as the limitations and pitfalls of ChatGPT in healthcare, amidst the ChatGPT AI hype.
Based on the findings derived from existing publications on ChatGPT in healthcare, this systematic review addresses the following research questions:
• RQ1: What are the different medical applications where ChatGPT has already been tested?
• RQ2: What are the strengths, limitations and main concerns of ChatGPT for healthcare, especially with respect to the field they are applied to?
• RQ3: What are the key research gaps that are being investigated or should be investigated according to the existing works?
• RQ4: How can existing publications on ChatGPT in healthcare be categorized according to a taxonomy?
The rest of the manuscript is organized as follows: Section 2 briefly introduces NLP, transformers and large language models (LLMs), on which ChatGPT is built. Section 3 introduces the inclusion criteria and taxonomy used in the systematic review, and discusses in detail the selected publications. Section 4 presents the answers to the above research questions (RQ1 - RQ4), and Section 5 summarizes and concludes the review.
Natural Language Processing (NLP)
Natural Language Processing (NLP) [22] is an interdisciplinary research field that aims to develop algorithms for the computational understanding of written and spoken languages. Some of the most prominent applications include text classification, question answering, speech recognition, language translation, chat bots, and the generation or summarization of texts. Over the past decade, the progress of NLP has been accelerated by deep learning techniques, in conjunction with increasing hardware capabilities and the availability of massive text corpora. Given the fast growth of digital data and the growing need for automated language processing, NLP has become an indispensable technology in various industries, such as healthcare, finance, education, and marketing.
Transformer
In 2017, Vaswani et al. [109] introduced the Transformer model architecture, replacing previously widespread recurrent neural networks (RNN) [76], long short-term memory networks (LSTM) [45] and Word2Vec [23]. Transformers are feedforward networks combined with specialized attention blocks that enable the model to selectively attend to distinct segments of its input. Attention blocks overcome two important limitations of RNNs. First, they enable Transformers to process input in parallel, whereas in RNNs each computation step depends on the previous one. Second, they allow Transformers to learn long-term dependencies. Since their introduction, Transformers have consecutively achieved state-of-the-art results on various NLP benchmarks. Further developments include novel training tasks [24,54,114], adaptions of the network architecture [42,64], and reduction of computational complexity [57,64,41]. However, the available training data and the model complexity remain among the primary factors determining model performance. Transformers have also been used for tasks beyond NLP, such as image and video processing [95], and they are an active area of research in the deep learning community.
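For illustration, a minimal NumPy sketch of the scaled dot-product attention underlying these attention blocks is given below; it omits multi-head projections, masking and the other components of the full Transformer, and the toy shapes are arbitrary.

```python
# Minimal sketch of scaled dot-product attention (Vaswani et al., 2017).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # token-to-token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                          # each position attends to all others

# Toy self-attention example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```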
Large Language Models (LLMs)
Large language models (LLMs) [17] refer to massive Transformer models trained on extensive datasets. Substantial research has been conducted on scaling the size of Transformer models. The popular BERT model [26], which in 2019 achieved record-breaking performance on seven tasks in the GLUE benchmark [110], possesses 110 million parameters. On the other hand, GPT-3 [18] had already reached 175 billion parameters by 2021. At the same time, the size of the training datasets has continued to grow. BERT, for example, was trained on a dataset comprising 3.3 billion words, while the recently published LLaMA [107] was trained on 1.4 trillion tokens. Despite their success, LLMs face several challenges, including the need for massive computational resources and the potential of adopting bias and misinformation from training data. Additionally, overconfidence when expressing wrong statements and a general lack of uncertainty estimates remain significant concerns in NLP applications. As LLMs continue to improve and become more widespread, addressing these challenges and ensuring they are used ethically and responsibly is essential. ChatGPT is another representative LLM released by OpenAI, and other tech giants have released their own LLMs, such as the previously mentioned LLaMA from Meta, in response. Figure 1 illustrates the evolution of LLMs.
Figure 1: Evolution of large language models (LLMs) (adapted from [96]).
Methodology
The search strategy used in this systematic review is illustrated in Figure 2, following the PRISMA guidelines. We use PubMed as the only source to search candidate publications. Since the majority of the papers are very short (without abstracts), eligibility is determined at first screening based on the inclusion criteria below.
Inclusion Criteria
The review is expressly dedicated to the ChatGPT released in November 2022 by OpenAI, excluding its predecessors (GPT-3.5, GPT-4), other large language models (LLMs) such as InstructGPT, and general NLP medical applications [69]. By March 20, 2023, a total of 140 publications were retrieved from PubMed (https://pubmed.ncbi.nlm.nih.gov/) using the keyword ChatGPT. Among them, articles written in languages other than English (e.g., French [84]), without full text access (e.g., [62]), or whose main content has little to do with (or is not specific to) either ChatGPT (e.g., [46,104,33,37]) or healthcare (e.g., [97,103,27,6,39,13,88,21,66,115,102,43]) are excluded. Other representative exclusions include [44,55], which deal with GPT-3, and [56,30,90,2], where the authors claimed that ChatGPT assisted with the writing of the papers or case reports but did not provide any discussion of the appropriateness of the generated texts and how the texts were incorporated into the main content. Generic comments that are not specific to healthcare, such as [105,115,16,50], where the authors comment on the authorship of ChatGPT and using ChatGPT in scientific writing, are also excluded. Several duplicate articles were found among the PubMed search results. Table 1 and Table 2 show the full list of selected publications based on the inclusion (exclusion) criteria.
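For illustration only, a keyword search of this kind could also be reproduced programmatically via the NCBI Entrez API (Biopython), as sketched below; this is not necessarily the procedure used for this review, and the e-mail address is a placeholder.

```python
# Illustrative sketch of a keyword-based PubMed retrieval via NCBI Entrez.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address (placeholder)

handle = Entrez.esearch(
    db="pubmed",
    term="ChatGPT",
    mindate="2022/11/01",   # ChatGPT release month
    maxdate="2023/03/20",   # cutoff date stated in the text
    datetype="pdat",
    retmax=500,
)
record = Entrez.read(handle)
pmids = record["IdList"]    # candidate PMIDs for subsequent manual screening
print(len(pmids), "candidate records")
```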
Taxonomy
We propose a taxonomy, as shown in Figure 3, to categorize the selected publications included in the review. The taxonomy is based on applications, including 'triage', 'translation', 'medical research', 'clinical workflow', 'medical education', 'consultation' and 'multimodal', each targeting one or multiple end-user groups, such as patients, healthcare professionals, researchers, medical students and teachers, etc. An application-based taxonomy allows more compact and inclusive grouping of papers, compared to categorizing papers by specific medical specialities. For example, scientific progress and findings generated through clinical practice are documented in the form of publications and/or reports, and literature reviews and novel ideas are usually required for medical researchers of all disciplines to publish their works. Thus, papers on 'scientific writing', 'literature reviews', 'research ideas generation', etc., can be grouped into the 'medical research' category. Similarly, the 'consultation' category comprises papers where ChatGPT is used in medical consulting settings for both corporations (e.g., insurance companies, medical consulting agencies, etc.) and individuals (e.g., patients) seeking medical information and advice. The 'clinical workflow' category includes ChatGPT's applications in a variety of clinical scenarios, such as diagnostic decision-making, treatment and imaging procedure recommendation, and the writing of discharge summaries, patient letters and medical notes. Furthermore, clinical departments, regardless of medical speciality, may benefit from a translation system for patients/visitors who are non-native language speakers ('translation'). A triage system [10] guiding patients to the right departments would reduce the burden of clinical facilities and centers in general. Note that the different categories are not necessarily completely independent, since all applications rely on the QA-based interface of ChatGPT. By formulating the same questions differently according to different scenarios, ChatGPT's role can change. For instance, by reformulating multiple-choice questions about a medical speciality in medical exams to open-ended questions, ChatGPT's role changes from a medical student ('medical education') to a medical consultant ('consultation') or a clinician providing diagnoses or giving prescriptions ('clinical workflow'). To avoid such ambiguity, the categorization of a paper is solely based on the scenario explicitly reported in the paper. The connections between the applications and end-users in Figure 3 are also not unique. In this review, only the most obvious connections are established, such as 'medical education' - 'students/teachers/exam agencies', 'medical research' - 'researchers'. The remainder of the review will show that existing publications on ChatGPT in healthcare can all find a proper categorization based on the proposed taxonomy.
Besides the taxonomy, we further assign a tag (Level 1 - Level 3) to the selected papers to indicate the depth and particularity of the papers on the 'ChatGPT in Healthcare' topic:
• Level 1: Generic comments about the potential applications of ChatGPT in healthcare or in a specific medical speciality and/or scenario;
• Level 2: Comments with one or more example use cases of ChatGPT in a specific medical speciality and/or scenario and moderate discussion about the correctness of ChatGPT's answers;
• Level 3: Qualitative and quantitative evaluation of ChatGPT's answers to a decent amount of speciality- and/or scenario-specific questions, with insightful discussion about the correctness and appropriateness of ChatGPT's answers.
Shortly prior to our review, a systematic review of ChatGPT in healthcare was published by Sallam, M. [91]. An inclusive taxonomy and a proper differentiation among the selected publications (tag: Level 1, Level 2, Level 3) are, however, lacking there. We believe that the tag helps readers quickly filter and locate papers of interest. This review puts more emphasis on Level 3 papers, since they provide a clearer picture of the real capability of ChatGPT in different healthcare applications.
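To make the proposed categorization concrete, the snippet below sketches how a screened publication could be encoded with its taxonomy category and Level tag; the example entry is hypothetical and not taken from the actual screening records.

```python
# Hypothetical encoding of one screened publication under the proposed taxonomy.
from dataclasses import dataclass

CATEGORIES = {"triage", "translation", "medical research", "clinical workflow",
              "medical education", "consultation", "multimodal"}
LEVELS = {1, 2, 3}  # 1: generic comment, 2: example use cases, 3: quantitative evaluation

@dataclass
class ReviewedPaper:
    pmid: str
    category: str   # one element of CATEGORIES
    level: int      # one element of LEVELS
    scenario: str   # speciality/scenario explicitly reported in the paper

paper = ReviewedPaper(pmid="00000000", category="medical education",
                      level=3, scenario="medical licensing exam evaluation")
assert paper.category in CATEGORIES and paper.level in LEVELS
```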
General Profile of Level 1 and Level 2 Papers
A list of Level 1 and Level 2 papers is summarized in Table 1. It is not unexpected that the majority of shortlisted papers fall into the Level 1 and Level 2 categories. As seen from Table 1, most Level 1 and Level 2 papers are short editorial comments or letters to the editor from multidisciplinary journals like Nature (https://www.nature.com/) and Science (https://www.science.org/), or speciality journals like nuclear medicine [3,59], plastic surgery [79,38], synthetic biology [106] and orthopaedics [81]. These publications usually deliver high-level comments about the potential impact and pitfalls of ChatGPT in healthcare [113], with a focus on medical publishing. Scientific journals are among the immediate stakeholders of the publishing industry on which ChatGPT will exert a significant impact. Thus, publishers introduce new regulations regarding the use of ChatGPT in scientific publications, in particular whether ChatGPT is eligible as an author and whether ChatGPT-generated texts are allowed. Answers from leading publishers like Science are in the negative [105,16]. Nature also bans ChatGPT authorship but takes a slightly more tolerant stance regarding ChatGPT-generated content, subject to a clear statement of whether, how and to what extent ChatGPT contributed to the submitted manuscript [103,27]. The main argument for the decision is that ChatGPT cannot properly source the literature from which its answers are derived, causing unintentional plagiarism, nor can it take accountability as human authors do [105,27]. The decision is echoed by the academic community [58,97,115,66], agreeing that ChatGPT-generated content must be scrutinized by human experts before being used [58], as the generated content, such as references [105,12,40,31], could be fabricated. Lee, J.Y. et al. [66] reiterated from a legal (e.g., copyright law) perspective the inappropriateness of listing ChatGPT as an author, emphasizing that a non-human cannot take legal responsibilities and consequences. However, banning ChatGPT from scientific writing is not easily enforceable, since ChatGPT is trained to produce human-like texts that even scientists and specifically trained AI detectors sometimes fail to detect [29,7]. In short, even though the prospect is promising [92,25,43,102], new regulations and substantial improvements are needed before ChatGPT can be safely and widely used for scientific writing, publishing, or medical research in general [105]. The scenario column in Table 1 corresponds to the taxonomy categorization. If an article concerns healthcare or a medical speciality in general, it is categorized as 'miscellaneous'. The category column indicates the type of the publications.
Reviews of Level 3 Papers
Level 3 papers feature extensive experiments conducted to assess the suitability of ChatGPT for a medical speciality or clinical scenario. For open-ended (OE) questions, human experts are usually involved to assess the appropriateness of the answers. To quantify the subjective assessments, scoring criteria and a scoring scheme (e.g., a 5-point, 6-point or 10-point Likert scale) are usually required. For multiple-choice questions, it is desirable to not only quantify the accuracies but also to evaluate whether the 'justification' given by ChatGPT and the choice are in congruence. When it comes to comparisons (with humans or other language models), statistical analysis is usually performed. As shown in Table 2, many Level 3 papers are still preprints (under review) at the time of writing this review. Most current ChatGPT evaluations are on 'medical education' (medical exams in particular), which requires no ethical approval to conduct. Representative works include [36,61], where the authors test ChatGPT on the US Medical Licensing Examination (USMLE). Even though the evaluations were carried out independently ([36] and [61] were published almost at the same time), similar results were reported, i.e., ChatGPT achieved only moderate passing performance. [36] further showed that ChatGPT outperformed two other language models, InstructGPT and GPT-3, in the exam. In both studies, ChatGPT was asked to give not only the answers but also the justifications, which were taken into consideration during evaluation (by physicians). [36] further found that ChatGPT performed better on fact-check questions than on complex 'know-how' type questions. It is worth noting that the exam contains questions from different medical specialities. However, Mbakwe, A.B. et al. [74] raised concerns that ChatGPT, a language model, passing the exam indicates flaws in the exam system.1 Besides the USMLE, ChatGPT was also tested on the Chinese National Medical Licensing Examination [112] and the AHA BLS / CLS Exams 2016 [32], on both of which ChatGPT failed to achieve passing scores.
1 ChatGPT does not fulfill the 'USMLE Mission Statement', but still passes the exam.
ChatGPT achieved similar performance to student examinees on a Doctor of Veterinary Medicine (DVM) exam containing 288 parasitology exam questions. One major limitation of using ChatGPT in medical exams is that the current release of ChatGPT can only process text inputs, whereas some questions are diagram-/figure-based.2 Such questions are either excluded or translated into text descriptions.
Besides the standard medical exams, ChatGPT achieved promising results on cancer-related questions [47,53]. In [53], ChatGPT's answers to common cancer myths and misconceptions were evaluated by expert reviewers and compared with the standard answers from the National Cancer Institute (NCI). Results showed that ChatGPT is able to achieve very high accuracies, showing that current ChatGPT is already a reliable source of cancer-related information for cancer patients [47]. Furthermore, [83] tested ChatGPT with 100 questions related to retina disease. The answers were evaluated based on a 5-point Likert scale by domain experts. It is found that ChatGPT answers with high accuracy on general questions, while the answers are less satisfactory, sometimes harmful, when it comes to treatment/prescription recommendations. On 85 multiple-choice questions concerning genetics/genomics, ChatGPT achieved similar performance to human respondents [28]. Interestingly, based on the test results, [28] also reached the conclusion that ChatGPT fares better on 'memorization (fact-lookup)' type questions than on those requiring critical thinking, similar to [83]. The performance of ChatGPT on these question-answering scenarios 3 shows its potential for medical consultation and education.
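The two evaluation modes recurring in these Level 3 studies, multiple-choice accuracy and Likert-scale expert ratings, can be computed as in the toy sketch below; the answer keys and ratings shown are made up purely for illustration.

```python
# Toy illustration of the two evaluation modes described above.
import statistics

mc_predictions = ["B", "C", "A", "D", "B"]   # model's chosen options (made up)
mc_answer_key  = ["B", "A", "A", "D", "C"]   # official answer key (made up)
accuracy = sum(p == k for p, k in zip(mc_predictions, mc_answer_key)) / len(mc_answer_key)

likert_ratings = [4, 5, 3, 4, 2, 5]          # one expert score per open-ended answer (made up)
mean_rating = statistics.mean(likert_ratings)

print(f"multiple-choice accuracy: {accuracy:.0%}, mean Likert rating: {mean_rating:.1f}/5")
```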
A few studies evaluate the use of ChatGPT in medical research, particularly in scientific writing [67], generating research questions [63] and generating systematic review topics [38]. In [67], the authors use ChatGPT to generate full abstracts, providing only the title and result sections of the abstracts from 50 real scientific publications. Even though previous studies [29] have shown that scientists cannot tell apart abstracts generated by ChatGPT from those written by humans, [67] found that the two groups can simply be differentiated based on Grammarly scores. Discriminative features of ChatGPT-generated texts include mixed use of English dialects and language perfection, e.g., very few typos, more unique words, proper use of prepositions, and no misuse of conjunctions and commas. These characteristics can be captured by Grammarly scores. The finding indicates that Grammarly could potentially be adopted by scientific journals to enforce the 'no-AI-generated-texts' policy. In [63], the authors use ChatGPT to identify research questions in gastroenterology. The answers generated by ChatGPT prove to be highly relevant but lack depth and novelty. In [38], ChatGPT is used to generate systematic review topics in plastic surgery. Similar to [63], ChatGPT-generated research topics are generally not novel. The version column in Table 2 shows the version of ChatGPT used for evaluation. [63] found that newer versions of ChatGPT tend to have better performance on the same questions. In contrast to using ChatGPT directly for writing, which is expressly banned by many scientific journals, exploring new research ideas/topics with the assistance of ChatGPT faces fewer ethical issues. However, [63,38] demonstrated that the current version of ChatGPT is not sufficiently qualified for such tasks. Humans still play the dominating role in ingenious and innovative research. [87,86,4,68] evaluate the application of ChatGPT in the clinical workflow. In [87], ChatGPT is used to decide the appropriate imaging procedure (e.g., mammography, MRI, US, etc.) for breast cancer screening and breast pain, given a description of the patients' conditions. ChatGPT's responses were evaluated against the corresponding American College of Radiology (ACR) appropriateness criteria. Results showed that ChatGPT achieved moderate overall results, and its performance is noticeably better for breast cancer screening than for breast pain. The finding is in accordance with previous discussions that ChatGPT is already highly accurate on cancer-related information [47,53]. The authors concluded that, even though ChatGPT showed impressive performance on the task, specialized AI tools are desired to support the clinical decision-making process more reliably. In a follow-up study [86], the authors tested ChatGPT with 36 clinical vignettes from Merck Sharp & Dohme (MSD), covering the entire clinical workflow (differential diagnosis, final diagnosis and subsequent clinical management of the patients). Overall, ChatGPT obtained a 71.8% accuracy in the test, and its performance on differential diagnosis is significantly lower than on final diagnosis. ChatGPT achieved the highest accuracy on a cancer vignette. The patients and their conditions in these vignettes are only hypothetical, which removes the ethical barrier to conducting the evaluation. In [4], ChatGPT is used to write patient clinic letters in 38 hypothetical clinical scenarios (e.g., basal cell carcinoma, malignant melanoma, etc.), where ChatGPT communicates the diagnosis results and treatment advice to the patients in a friendly and easily understandable manner. The letters are evaluated from the perspective of factual correctness and humanness by clinicians, and ChatGPT achieved high scores on both criteria. In [68], ChatGPT is supplied with seven types of clinical decision support (CDS) alerts (e.g., pediatric bronchiolitis, immunization, postoperative anesthesia nausea and vomiting, etc.) and asked to give suggestions. However, ChatGPT's answers, even though highly relevant to the alerts, were not adequately acceptable by the standards of CDS experts.
2 ChatGPT developers revealed that future versions of ChatGPT will have vision capabilities and can comprehend images.
3 Exams are essentially also question-answering.
Results
The following presents the answers to the four research questions (RQ1-RQ4) based on the discussion in Section 3.
Medical Applications of ChatGPT
According to Table 1, Table 2 and the taxonomy (Figure 3), it is straightforward to see that ChatGPT is mostly evaluated in medical education, consultation and research, as well as in various scenarios in the clinical workflow, such as diagnosis, decision-making and clinical documentation (patient letter, medical note, discharge summary, etc.). However, it is important to note that these 'applications' are carried out in a 'laboratory environment', by providing ChatGPT question samples from standard medical exams (question banks), CDS alerts from the Epic EHR or clinical vignettes from Merck Sharp & Dohme (MSD) through its QA interface. None of the reviewed publications have reported an actual deployment of ChatGPT in clinical settings. Furthermore, due to the current strict policies on AI-generated content imposed by publishers, the unsolved ethical issues as well as its incapability in generating novel research topics, using ChatGPT for medical research remains experimental as well. For medical consultation, the fact that ChatGPT is already capable of providing highly accurate cancer-related information cannot be generalized to all medical specialities, since reliable sources of cancer information, such as the National Cancer Institute (NCI), are publicly accessible and could have already been part of ChatGPT's training set. Its qualification as a medical consultant remains to be further evaluated.
Strengths and Limitations of ChatGPT in Healthcare
Strengths The QA design of ChatGPT's interface makes it easy to integrate into the existing clinical workflow, providing feedback in real time. ChatGPT can not only give answers to specific questions but also provide 'justifications' for its answers. Sometimes, ChatGPT's 'justifications' and answers to open-ended questions contain novel insights and perspectives, which might inspire novel research ideas. ChatGPT also shows superior performance in healthcare compared to other general large language models, such as InstructGPT and GPT-3.5.
Limitations The current release of ChatGPT can only take input and give feedback in text, so ChatGPT cannot handle questions requiring the interpretation of images. ChatGPT is incapable of 'reasoning' like an expert system, and the 'justifications' provided by ChatGPT are merely a result of predicting the next words according to probability. It is possible that ChatGPT makes a correct choice, but gives completely nonsensical explanations. The accuracy of ChatGPT's answers depends largely on the quality of its training data, and the information ChatGPT is trained on determines how it responds to a question. However, ChatGPT itself cannot distinguish between real and fake information fed into it, so its answers could be highly misleading, biased and dangerous when it comes to healthcare. For example, one of the most concerning issues with the current release of ChatGPT, as confirmed by the reviewed publications, is that it can 'fabricate' information and convey it in a persuasive tone. Therefore, its answers should always be fact-checked by human experts before adoption. Furthermore, ChatGPT's answers, even if highly relevant, mostly stay superficial and lack depth and novelty. Most importantly, ChatGPT is not fine-tuned for healthcare by design, and should not be used as such without specialization. In addition, the use of ChatGPT is not without barriers. Reformulating the prompt to the same question might change ChatGPT's answer, so proper formulation of prompts is another factor in obtaining desirable answers from ChatGPT. Last but not least, ChatGPT is a proprietary product, and therefore feeding sensitive patient information into its interface in order to obtain feedback might violate privacy regulations.
Research Gaps and Future Works
Prior to the deployment of any product in clinical settings, extensive evaluations of the product in a laboratory environment are required to identify its limitations and improve it iteratively. Since ChatGPT was released no more than half a year ago, it has only been tested in a limited number of scenarios (Table 2). ChatGPT is clearly still at an experimental stage, and clinical deployment faces substantial unsolved technical and regulatory challenges. The Level 3 publications provide a sound paradigm for how ChatGPT should continue to be evaluated in different specialities, for future works to follow. However, before further pursuing this direction, researchers should be aware that, even though these evaluations provide, at best, a general picture of ChatGPT's capability in a medical speciality, they contribute little to the improvement of the underlying language model. The limitations identified through these evaluations have also long been known in NLP research and are not specific to ChatGPT. Most importantly, whether or not ChatGPT has achieved good performance in an application scenario, it is unlikely that a ChatGPT with only general knowledge will be clinically deployed in the future. Specialized AI models in healthcare, which the NLP community has long been working on, are more promising for practical and reliable clinical applications than ChatGPT.
Categorization of Publications based on a Taxonomy
Finally, we have shown in our review that existing publications on ChatGPT in healthcare can be compactly grouped according to applications and target user groups. Thus, we come up with an application- and user-oriented taxonomy to categorize the selected publications, as discussed in Section 3.
Discussion and Conclusion
In this systematic review, we survey published works (from Nov. 2022 to Mar. 2023) that used ChatGPT within the healthcare sector. In doing so, we extract publications from PubMed using the keyword 'ChatGPT' and propose a two-sided taxonomy (application-oriented and user-oriented) to categorize these publications, which we see as a building block for new publications on ChatGPT in healthcare. Even though the current taxonomy is already quite inclusive, it can be easily extended to emerging new applications or user groups. This first taxonomy is not limited to ChatGPT; rather, it can also be applied to other (existing or upcoming) NLP models, like Bard from Google. On the one hand, the taxonomy helps interested readers to identify relevant works. On the other hand, it also helps identify areas where ChatGPT has not yet been applied. Automatic processing of multimodal input, like text and images, is an exciting development for future healthcare. For example, Contrastive Language-Image Pre-Training (CLIP) [85], a neural network trained on large-scale image-text pairs, possesses both vision and language capabilities, and is therefore a promising research direction towards AI-assisted multimodal healthcare. In general, a physician also takes several sources of information into account when making diagnosis and treatment decisions, such as the written reports and image acquisitions from a patient. ChatGPT-4, an enhanced version of ChatGPT released recently, is able to analyse and summarize images and texts, as seen from a live demo given by its developers.
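To make the multimodal direction concrete, the sketch below scores an image against candidate text labels with CLIP via the Hugging Face transformers library. It is illustrative only: the checkpoint is a publicly available general-purpose model, and the placeholder image and labels are assumptions, not anything evaluated in this review.

```python
# Minimal zero-shot image-text matching with CLIP (illustrative sketch).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder image; in practice a real image would be loaded here.
image = Image.new("RGB", (224, 224))
labels = ["a normal chest radiograph", "a chest radiograph showing pneumonia"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into a probability distribution over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```

A general-purpose checkpoint such as this would of course require domain-specific fine-tuning and validation before any clinical use, which is precisely the specialization argument made throughout this review.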
The barrier-free user interface, the ability to produce human-like texts and the breadth of its knowledge on a variety of topics are the key reasons why ChatGPT has amassed a phenomenally large user base shortly after its release. Besides the architectural design of the LLM, the immeasurable human effort invested in training the LLM through reinforcement learning contributes greatly to its impressive performance in human-like conversations. Even though ChatGPT technically represents the productization of an NLP model by OpenAI, rather than a fundamental technological advance or breakthrough, it is undeniable that ChatGPT is a living embodiment of state-of-the-art NLP techniques. The efforts devoted to making the product a reality still greatly push forward the field as a whole. Speaking from the perspective of a tech product, existing publications on ChatGPT's healthcare applications boil down to 'reviews and testing of a new NLP product in healthcare'. However, the product is not intended for medical applications by design, and it is therefore not unexpected that most 'test reports' evaluated ChatGPT as 'unqualified' or 'of merely passing grade' for healthcare. However, the reported limitations (see Section 4) of ChatGPT are not specific to the product, but are applicable to language models in general, as discussed in Section 2. These limitations can mostly be addressed by improving the underlying language model through NLP innovations. Nevertheless, the fact that ChatGPT is monetized (OpenAI has already introduced a subscription plan, ChatGPT Plus) and therefore not (fully) open-sourced makes it difficult for the community to pinpoint the issues and come up with specific solutions for future improvement. In particular, the sources of the datasets used for training the language model, which determine the type of questions and topics of the conversations ChatGPT can handle, remain unclear. As suggested by van Dis et al. [27], the community should invest in truly open LLMs that perform on par with proprietary NLP products like ChatGPT, in order to fully address these limitations. Currently, for healthcare applications, specialized AI models trained on biomedical datasets, such as BioGPT [70], are always more desirable than ChatGPT.
As discussed in this review (Section 3), these evaluation studies on ChatGPT's performance in healthcare provide a general picture of the capability of the current release of ChatGPT. By and large, the training set and the underlying language model decide the quality (accuracy, unbiasedness, humanness, etc.) of an AI chatbot's responses to certain questions. Therefore, this review concludes that healthcare researchers in particular should step back from the AI hype generated by the product and focus their attention on NLP research in general and on developing/evaluating specialized language models for healthcare applications.
| 2023-03-31T01:27:15.395Z | 2023-03-30T00:00:00.000 |
"year": 2023,
"sha1": "19cd2250f419666d4df441bae7ade1dd9a2f6bf9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.cmpb.2024.108013",
"oa_status": "HYBRID",
"pdf_src": "MedRxiv",
"pdf_hash": "19cd2250f419666d4df441bae7ade1dd9a2f6bf9",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
90786073 | pes2o/s2orc | v3-fos-license | Crop-Livestock Integration: The Ensuing Conflicts and Resolution Strategies among Rural Dwellers in Ogun State, Nigeria
Crop and livestock production constitute the main economies of rural households, with most of them cultivating arable crops and rearing small farm animals for both consumption and marketing. It is, however, disconcerting to observe that the small farm animals are conflict-laden owing to their behavioural instincts and the free-range management system practised in the rural areas. Interaction with the rural dwellers in the study area through the use of an interview guide and field observation showed that the small farm animals infringe on the social and economic rights of the rural dwellers: sheep and goats graze cultivated farms, feed on or soil agro-produce undergoing processing and litter the environment with faeces, while local chickens scatter cultivated heaps or mounds in search of food, feed on emerging seedlings, and overturn and soil processed foods. The ensuing conflicts often take the form of inter-personal, inter-family and community-to-person conflict. Regarding resolution of the conflicts, statistical tests of the study hypotheses showed that restraint of the farm animals from roaming about and siting of farms away from the villages had mutual acceptability among the rural dwellers. It was concluded that the rearing of small farm animals on free range constitutes a conflict potential at the micro level of the rural areas, and it was recommended that the small farm animals be kept under an intensive or semi-intensive management system.
Introduction
The rural economy is largely dominated by agriculture, with farm families engaging in crop cultivation and farm animal rearing to satisfy their livelihood needs. Crop production, ranging across arable or food crops such as maize, cassava, yam, rice, spices and vegetables, however dominates farm enterprise production among the rural farmers. The widespread production of these crops by farmers of Ogun State is closely connected with the fact that these crops constitute the staple food for most households in the State and generally in the southern part of Nigeria (Lawal-Adebowale & Oyegbami, 2004). On this account, arable crops enjoy ready market acceptability, thereby forming the thrust of the rural farmers' income generation among all other farm enterprises (World Bank, 2000; Roessler, Drucker, Scarpa, Markemann, Lemke, Thuy & Valle-Zárate, 2007). Alongside crop production by farmers in Ogun State is livestock rearing, with some of them keeping farm animals such as sheep, goats and chickens, either alongside crop production or other off-farm and non-farm economic endeavours (Oluwatayo & Oluwatayo, 2012). The farm animals, which are usually kept in small numbers by the farmers, often serve as a meat or animal protein source for the farm households and occasionally as a source of income to the farmers (Coppock et al, 2006). This livelihood sustenance potential of both crop and animal production thus underscores the importance of the integration of both farm enterprises by rural farmers, not only in Ogun State, but across the developing countries of the world.
Emphasising the value of crop and livestock integration to rural farmers, Lungu (n.d.) indicated that small-scale farmers often integrate crop and livestock production purposively, such that the farm animals provide manure for the cultivated crops and the crops provide residue for consumption by the farm animals, especially during the dry season. The animal manure, as indicated by de Haan, Steinfield and Blackburn (1997), improves soil fertility, increases soil nutrients and soil water-holding capacity, and improves soil structure. In addition, crop-livestock integration allows for diversification of risks, using labour more efficiently, recycling wastes and thus preventing nutrient losses, and adding value to crops and crop products while providing cash for purchasing farm inputs. Despite the attendant values of crop and livestock production, it is disconcerting to note that these two farm enterprises are becoming a source of conflict among rural families, particularly due to the animals' incursion into and encroachment on the farming environment and household surroundings. In view of this, it becomes essential to assess the small farm animal behaviours underlying the emerging conflicts among the rural farmers in Ogun State. To achieve this, the following objectives serve as a guide:
- describe the socio-economic characteristics of the rural farmers;
- identify the conflict-laden small farm animals among the rural dwellers;
- examine the conflict-ensuing behaviour of the small farm animals;
- ascertain the forms of emerging conflicts in the rural social system; and
- identify the employed resolution strategies for sustained peaceful coexistence in the study area.

According to the conflict literature (2008), conflict theory is mostly applied to explain conflict between social classes (proletariat versus bourgeoisie) and between ideologies, such as capitalism versus socialism. These conceptions thus portray conflict as a human struggle against one another, either for survival, for control of the existing situation, or for protection of one's or a group's interest. In this respect, Jeong (2000) indicated that conflict entails a struggle between two or more interacting people over values and claims to status, power and resources, which may result in injury to and/or elimination of their rivals. Consequently, conflict has widely been expressed in terms of full-blown wars on a global scale, violent demonstrations and physical combat between individuals and groups.
Study Hypotheses
Notwithstanding the conception of conflict as hostility and violent clashes, conflict is not necessarily, and of course not limited to, physical confrontation. As indicated by Barash and Webel (2002), conflict also connotes differences in perception and ideology in relation to a certain issue of concern or situation, particularly between individuals. In essence, conflict technically means an existing state of disconnect between two or more parties on a prevailing issue. Rossel and Collins (2001) framed such disconnect or misunderstanding between individuals as micro conflict theory. Such micro-level conflict, as indicated by Collins (1990), is grounded in the interactions of everyday life. In light of this, Nicholson (1992) described conflict as an existing state of disagreement between two or more people, often arising as a result of misunderstanding or perceived maltreatment or oppression by one of the interacting parties. In other words, it is a form of social relation whereby two or more interacting parties cannot reach an accord on a certain issue of concern or interest. Emphasising the micro-level context, Ragle (2007) expresses that conflict could arise whenever people, be they close friends, family members, co-workers or romantic partners, disagree about their perceptions, desires, ideas or values. Such differences may range from the trivial, such as who last took out the garbage, to a more significant disagreement on fundamental issues of beliefs and concerns, thereby stirring strong feelings between the concerned parties. Against the backdrop of micro-level conflict, most of the literature concentrates on emerging or full-blown struggle and hostility forms of conflict in human society, particularly conflicts that are politically, ethnically and religiously underlain (Shinar, 2003; Bassey, 2007; Oladoyin, 2001; Asiyanbola, 2010; Blench, 2010; Oyeniyi, 2011). Such focus cannot be unconnected with the intense degree of bloodshed and destruction of lives and properties involved. This condition equally accounts for extensive reporting and research on pastoralist-farmer conflicts in the agricultural sector (Breusers & Nederlof, 1998; Oksen, 2000). This is however not the case with micro-level conflicts, particularly in rural systems whereby people contend with each other as a result of infringements against one person or the other by the small farm animals left on free range. Although the micro-level conflict has not really resulted in wanton destruction of lives and properties, as in the context of pastoralist-farmer conflict, it rather produces a state of prolonged individual or communal disputes, usually arising from infringements caused by roaming small farm animals. In this way, the study set out to investigate the conflicts emerging as a result of small farm animal behaviours among the rural farm families. This study technically differs from the farmer-pastoralist conflict in the sense that the pastoralists often lead their animals to graze all over the place, particularly outside their place of residence, while the small farm animals and their owners are based within the same community. Although the small farm animal behaviours hardly result in violent clashes, they certainly bring about infringement, discomfort and inconvenience in the rural social system.
Based on this, the outcome of the study will provide a platform for building on existing conflict theories, particularly at the micro level, in the new dimension of animal-induced conflicts in human relations.

Economic activities in the State range across farm, off-farm and non-farm occupations. While non-farm occupations such as merchandising, civil service, banking and educational services are largely concentrated in the urban areas of the State, particularly the State capital, Abeokuta, farm and off-farm occupations are largely concentrated in the rural areas. Commonly cultivated food crops in the State include cassava, maize, yam, cocoyam, rice, spices and vegetables. Others are tree crops such as kolanut, cocoa, citrus, mango and oil palm, and pomological crops like pineapple and pawpaw. Farm animals readily found among the livestock farmers in the State include cattle, sheep, goats, chickens and pigs. Study Domain: Based on regular visits to certain villages in the Abeokuta zone of OGADEP for extension outreach, rural communities such as Ilugun, Kila, Alabata, Itoko-Ajegunle, Araromi and Bampopa were selected as the study domain. The communities were typically characterised by little or no social amenities and basic infrastructure such as good roads, schools, hospitals and pipe-borne water. While certain facilities such as electricity, schools and health centres were available to the villages closer to highways (inter-State roads), the interior villages lacked these provisions, and their residents had to visit the nearby communities to access the amenities. Houses in the surveyed areas were largely constructed with mud, but some of the rural dwellers had the buildings plastered with cement/mortar as a way to modernise the houses.
Sampling Procedure: Although all individuals who reside in the rural areas constitute the study population, the study lacked a sampling frame because no official documentation of the total number of individuals residing in the surveyed rural communities was available for use. In view of this, the saturated-point selection method, a non-probability sampling technique proposed by Glassier and Straus (1968) and adapted by Idowu (1988), was used to select as many individuals as readily gave their time to respond and interact with the researcher. The rural dwellers sampled by this method across the selected rural communities amounted to 173. This approach, which is similar to the trickle-down sampling technique by Bailey (1987), technically differs from convenience or accidental sampling in the sense that selection was not based just on available individuals or those a researcher might come by, but on a careful search for rural dwellers who had actually experienced one form of livestock-induced conflict or the other. Such individuals were identified through preliminary and informal interactions with the rural dwellers over a period of three months.
Data Collection: Primary data for the study were collected by means of an interview guide, field observation and iterative discussion. Data were collected on the socio-economic characteristics of the rural farmers, the commonly reared small farm animals, the conflict-ensuing behaviour of the reared small farm animals, the forms of emerging conflicts in the rural social system and the employed resolution strategies for sustained peaceful coexistence in the study area.
Data Analysis: Collected data were subjected to both descriptive and inferential statistics. The descriptive statistics took the form of frequency counts and percentages, while the inferential statistics took the form of categorical regression (for H01), Kruskal-Wallis (for H02), and Cochran Q and Kendall's W tests (for H03). These inferential statistical tools were found appropriate given that the study variables were measured at the nominal level. The categorical regression test, which allows a dependent variable to be cross-tabulated against one or more independent variables measured at the categorical (nominal) level, was used to relate the forms of conflicts to the varying conflict-laden behaviours of the small farm animals kept on free range in the study area. The Kruskal-Wallis test, which allows an association test between multiple independent variables and a dependent variable, was carried out by cross-tabulating the forms of conflicts with the instituted resolution strategies, with a view to establishing whether a particular form of manifest conflict was directly related to a given resolution strategy among the rural dwellers. The Cochran Q and Kendall's W tests, which allow for testing variation in the mean ranks of variables measured at the nominal level, were applied with a view to ascertaining the degree to which particular resolution strategies, among all the instituted ones, were acceptable to a larger proportion of the rural dwellers.
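To illustrate these tests concretely, the sketch below applies each of them to placeholder data shaped like the study's (173 respondents, six dichotomously coded strategies). It is illustrative only: none of the study's actual responses are used, the Cochran Q statistic is computed from its textbook formula, and the Kendall's W computation omits the tie correction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Kruskal-Wallis: do responses differ across three independent groups
# (e.g., the three forms of conflict)?
g1, g2, g3 = (rng.integers(0, 2, 30) for _ in range(3))
H, p_kw = stats.kruskal(g1, g2, g3)

# Cochran's Q for k related dichotomous variables: rows are respondents,
# columns are resolution strategies coded 1 = endorsed, 0 = not endorsed.
X = rng.integers(0, 2, size=(173, 6))
col, row, T, k = X.sum(axis=0), X.sum(axis=1), X.sum(), X.shape[1]
Q = (k - 1) * (k * (col ** 2).sum() - T ** 2) / (k * T - (row ** 2).sum())
p_q = stats.chi2.sf(Q, df=k - 1)

# Kendall's W (no tie correction): agreement of m respondents over n items.
R = np.vstack([stats.rankdata(x) for x in X])   # rank each respondent's row
S = ((R.sum(axis=0) - R.sum() / k) ** 2).sum()  # deviations of rank sums
m, n = X.shape[0], k
W = 12 * S / (m ** 2 * (n ** 3 - n))

print(f"Kruskal-Wallis H={H:.2f} (p={p_kw:.3f}); Cochran Q={Q:.2f} "
      f"(p={p_q:.3f}); Kendall's W={W:.3f}")
```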
Socio-economic Characteristics of the Surveyed Rural Dwellers: Table 1 shows that 54.9 per cent of the respondents were female; 41.0 per cent of them fell within the age range of 41 to 50 years, with a mean age of 47.2 years. As much as 42.2 per cent of the surveyed rural dwellers practised Islam as their religion; more than half (59.5 per cent) of them had between primary and secondary school education; 56.7 per cent practised farming as their major occupation; 73.4 per cent were married, with 50.8 per cent having between 6 and 7 persons as household members. The observed result suggests that the demography of the surveyed rural communities comprises males and females, adolescents, youth and older ones. The mean age of the surveyed rural dwellers however suggests that they are relatively young, with vigour for farming activities. Although they are all engaged in crop cultivation and animal rearing, field observation shows that crop production outweighs animal farming. This is closely connected with the fact that the related crop produce constitutes the staple food of most households and has a high market value in the study area. Socially, the surveyed rural dwellers are engaged in one form of religion or the other, arising from the need to have their spiritual quests satisfied. In addition, their acquisition of basic education (primary and secondary) made the surveyed rural dwellers enlightened, with a measure of reading and writing skills. Although the relatively large family sizes observed among the rural dwellers are a function of the cultural traits of the area, the practice is also underscored by the need for each household to have more hands on any task to be executed at both farm and household levels.
Conflict-Ensuing Behaviours Among the Rural Dwellers: Table 2
Emerging Forms of Conflicts and Resolution Strategies Among the Rural Dwellers: Arising from the instincts of the farm animals and the free-range system employed for their management is the emergence of conflicts between the owners of the farm animals and those whose rights have been infringed upon by the kept animals, as indicated in Table 3. In view of the emerging conflicts is the need for resolution, for the purpose of ensuring peaceful coexistence of the farmers and those rearing farm animals among them in the rural communities (Rothchild, 1997; Blench, 2010). Consequently, a number of conflict resolution strategies were observed among the surveyed rural dwellers. Table 3 shows that all the respondents called for restraint of the animals by their owners as the major strategy that could readily stop or prevent any form of conflict among members of the communities. This is premised on the fact that restraining the animals, either by tethering or housing, prevents them from moving about, so that they would in no way stray into the farms for grazing, contaminate foods and farm produce undergoing processing, or litter the environment with faeces or refuse. An alternative to restraint of the farm animals was locating cultivated farms far away from the residential areas (75.7 per cent) where most of the farm animals were equally kept. This action is meant to prevent the farms from being destroyed or grazed by farm animals such as sheep and goats. About a quarter of the respondents (25.4 per cent) however called for the prohibition of animal rearing in the area. Most of the rural dwellers were however skeptical about this strategy, as they considered the rearing of farm animals essential to their livelihoods.
Other resolution strategies observed among the surveyed rural dwellers included seeking alternative locations for spreading or sun-drying farm produce undergoing processing (12.1 per cent), as a way to prevent the animals from contaminating the produce. In this context, some of the rural dwellers indicated spreading the farm produce to be sun-dried on rocky outcrops or raised platforms. Compensation for the damage or destruction caused by the freely moving farm animals was indicated by 23.1 per cent of the respondents. Quite a number of the farmers however did not find this strategy acceptable, on the ground that the compensation given is often far below the losses incurred from the destruction made by the animals. Consequently, some of the rural dwellers (35.8 per cent) rather took the option of either killing or maiming the farm animals that infringe on their social and economic endeavours. This action is however believed to fan the embers of conflict, as the owners of the killed or maimed animals become furious and may want to retaliate in some other way. Tables 4-6 show the outcome of the categorical regression test of the relationship between the animals' conflict-ensuing behaviours and the forms of conflict taken up by the rural dwellers. Table 4 shows that conflict-ensuing behaviours such as soiling/contamination of farm produce spread out for sun drying (β = 0.20; F = 7.03) and defecating in and around the houses in the communities (β = 0.26; F = 11.55) significantly influenced inter-personal conflict among the rural dwellers at p < 0.05. In essence, this implies that an individual readily goes into conflict or contention with the owner of any farm animals that might have soiled/contaminated farm produce spread out for sun drying, or whose animals defecate around the houses. This could be due to the cost implication of the contaminated farm produce, which loses market value and as such puts the owner of the produce at a loss. The same holds for defecating around the neighbourhood, as it not only results in a filthy environment but also poses the danger of contaminating foods and farm produce that mistakenly drop into the animal faeces.
Categorical Regression Test of the Relationship Between the Animals' Conflict-Ensuing Behaviours and Personal Conflict Among the Rural Dwellers:
On another note is the significant relationship between the farm animals' turning over of food and farm produce (β = -0.17; F = 5.09) and the resultant inter-family conflict at p < 0.05 (Table 5). In this case, the affected individual's family members go into contention with the owner, or the family members of the owner, of the conflict-laden animals to express displeasure and annoyance at the damage done by the roaming animal(s). In this respect, the roaming animals overturned unguarded processed farm produce, prepared foods or stored foodstuff in an attempt to feed on the produce. Consequently, the family as a whole is affected, especially when it is the prepared food or the raw food for meal preparation that is overturned by the animals. Table 6, on the other hand, shows no significant relationship between the farm animals' conflict-ensuing behaviours and person-community conflict at the p < 0.05 level, but shows a significant relationship between the turning over of food products (β = 0.13; F = 2.99), the night blaring of the animals (β = -0.13; F = 2.97) and person-community conflict at the p < 0.1 level.
This suggests that the overturning of kept farm produce, particularly processed produce, and the night blaring of the animals collectively affect members of the community, thereby resulting in collective action or contention with the owners of the roaming animals. This observation becomes an issue because most of the rural dwellers generally process farm produce at home and have it spread out for sun drying in the open space. Consequently, the roaming animals readily reach the spread produce for consumption and eventually contaminate it. In the same vein, the blaring of the animals at night was a communal issue, given that everyone's night sleep is greatly disturbed. The resultant effect is contention with the owners of the roaming animals by members of the communities, though usually through the community leadership or institutions. Table 7 presents the result of the Kruskal-Wallis test of association between the forms of conflict taken up by the rural dwellers and the instituted resolution strategies. It shows that no significant association exists between the three forms of manifested conflict and any of the entrenched resolution strategies. In essence, all three forms of manifested conflict have the same measure of need for resolution. Consequently, none of the six instituted resolution strategies has a special link with any of the dimensions of conflict manifested in the study area.
Kruskal-Wallis Test of Association between the Forms of the Emerging Conflicts and the Instituted Resolution Strategies Among the Rural Dwellers:
Just as any form of conflict manifestation suggests a level of disaffection among the rural dwellers, so is resolution, in whatever form it may take, considered a way to douse tension and soothe the grievances of the affected members of the surveyed rural communities.
Cochran Q Test of Equal Attributions to the Instituted Conflict Resolution Strategies by the Rural Dwellers:
The Cochran Q test of equal attributions to the instituted conflict resolution strategies by the rural dwellers, as indicated in the corresponding table, returned a significance value of .000 (with the response category 2 treated as a success). | 2019-04-02T13:14:13.162Z | 2018-03-01T00:00:00.000 |
"year": 2018,
"sha1": "dbf148cc273b2c721c0af23747083c4d736df078",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.25175/jrd/2018/v37/i1/122693",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "ceb385e15295e7e876f2109447d0bdd4c1c118d1",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Biology"
]
} |
251571857 | pes2o/s2orc | v3-fos-license | Prevalence of psychological distress and associated factors among orthopedic trauma patients at Tikur Anbessa specialized hospital, Addis Ababa, Ethiopia: A cross-sectional study
Background: Orthopedic trauma exerts a holistic influence on survivors' health, including a range of mental health problems that interfere with survivors' recovery. Psychiatric disorders and behavioral disturbances are reported to be 3-5 times more common among people with injuries and are a predictor of poor outcome and ongoing disability. Assessing psychological distress among orthopedic trauma patients therefore plays a pivotal role in implementing further interventions. Methods: An institution-based cross-sectional study was conducted at Tikur Anbessa specialized hospital. The Hospital Anxiety and Depression Scale was used to assess psychological distress through face-to-face interviews. A systematic sampling technique was used to select a total of 407 participants. Data were analyzed using SPSS 20. Bivariate and multivariate logistic regression were carried out to identify associated factors. Variables with p-value <0.05 were considered statistically significant. Results: The prevalence of psychological distress was 35.4%. Independent variables such as being female (adjusted odds ratio (AOR) = 1.65, 95% confidence interval (CI) (1.89, 3.04)), poor social support (AOR = 3.51, 95% CI (1.39, 8.88)), moderate social support (AOR = 2.75, 95% CI (1.13, 6.72)), having a chronic medical illness (AOR = 2.24, 95% CI (1.16, 4.32)), presence of amputation (AOR = 2.90, 95% CI (1.97, 8.73)) and having severe pain (AOR = 2.50, 95% CI (1.20, 5.18)) were found to have a significant association with psychological distress at p-value <0.05. Conclusion and recommendation: The prevalence of psychological distress was high. Being female, having poor social support, having a chronic medical illness, the presence of amputation and having severe pain were factors significantly associated with psychological distress. Clinicians should give emphasis to the psychological state of orthopedic patients, especially females and those with chronic medical illnesses.
Background
Injuries to the musculoskeletal system constitute 16% of the total burden of disease worldwide [1], which makes them the leading cause of morbidity. Orthopedic trauma may lead to death, although many victims survive with various unpleasant health outcomes [2]. World Health Organization data indicate that 20-50 million people experience non-fatal musculoskeletal injuries annually throughout the world [3].
The burden of orthopedic trauma is enormous on survivors, their families and society at large. It greatly affects survivors' mental health, which interferes with their recovery [4]. Survivors may develop psychological distress within months and years after experiencing the trauma [5]. Globally, mental health problems in orthopedic trauma survivors are reported to be three to five times as common as in the general population [6].
Even though a significant number of orthopedic trauma survivors develop serious psychiatric disorders, only a few of them receive appropriate mental health services from trained professionals [7]. Millions of traumatic injury victims suffer from physical disabilities which may persist through their working years [8].
Various emotional and behavioral conditions following injury to the musculoskeletal system are a common source of patient complaints [9,10]. The magnitude of psychological distress after sustaining orthopedic trauma varies depending on the screening tool, the site of injury and the time from the injury to the study period. A study conducted in India found that 1 in 5 (22%) patients met the criteria for psychological illness. Mental health problems have been reported to be associated with reduced health-related quality of life [11,12].
In another study conducted among sufferers of severe orthopedic trauma, psychological distress was found in 14% of them [13]. Psychological distress has also been studied among male orthopedic trauma survivors; according to the results, 30% satisfied the criteria for psychological distress [14].
A 24-month follow-up study among severe orthopedic trauma survivors reported a 42% prevalence of psychological disorder. Among these, only 22% of the patients reported receiving mental health services [10]. In a study conducted in the Philippines among medically ill patients, the prevalence of psychological distress in the orthopedic unit was found to be 43.9% [15].
Little attention has been given to the negative mental health outcomes of orthopedic trauma in Ethiopia, despite its burden. This might be because there is limited research in this area showing its burden. Therefore, this study describes the prevalence of psychological distress and associated factors among the orthopedic population in the study area. The results of this study will serve as a source of direction for intervention by being input in planning future services for survivors of orthopedic trauma.
Study setting and population
An institution-based cross-sectional study was conducted at Tikur Anbessa specialized hospital. The study population included orthopedic trauma patients visiting Tikur Anbessa specialized hospital during the data collection period. Orthopedic trauma patients who were on follow-up and aged 16-65 years were included in the study, while those who were severely sick and unable to communicate were excluded.
Sample size determination
The sample size was calculated using the single population proportion formula, considering the following assumptions: prevalence p = 50% (because no similar study had been done in our country among the orthopedic population), 95% confidence interval, 5% margin of error and a 10% non-response rate. The final sample size was therefore 423. A systematic sampling technique was applied to select study units at the orthopedic outpatient clinic during the study period. The sampling interval (k) was determined by dividing the total study population during the one-month data collection period by the total sample size; the starting point was then randomly selected.
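A short sketch reproducing this calculation, with the assumptions taken directly from the text (the one-month clinic population used to illustrate the sampling interval is a hypothetical figure):

```python
import math

z, p, d, nonresponse = 1.96, 0.50, 0.05, 0.10

n0 = z ** 2 * p * (1 - p) / d ** 2        # ~384.16 before adjustment
n = math.ceil(n0 * (1 + nonresponse))     # inflate by 10% non-response
print(round(n0, 2), n)                    # 384.16 423

# Systematic sampling: select every k-th patient after a random start.
N = 850                                   # hypothetical one-month population
k = N // n                                # sampling interval
print(k)
```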
Data collection
A structured interviewer-administered questionnaire with five subsections was used. A socio-demographic questionnaire assessed the patients' background information. The Hospital Anxiety and Depression Scale (HADS) was applied to determine psychological distress [16]; the HADS was found to have a reliability of Cronbach's α = 0.78, and a cutoff point of ≥16 was used to classify participants as positive for psychological distress [17]. Substance use history was assessed by yes/no answers of respondents and operationalized according to the literature. Similarly, chronic medical illness and family history of mental illness were assessed by yes/no answers. Social support was measured by the Oslo-3 social support scale, which has sum scores ranging from 3 to 14 [18]. The numeric pain rating scale (NPRS) was used to measure the intensity of pain; scores range from 0 to 10 and are classified into four categories: no pain, mild, moderate and severe pain [19].
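To make the scoring concrete, the sketch below implements the score ranges and cutoffs described above. The item values are hypothetical, and the NPRS category cut points (1-3 mild, 4-6 moderate, 7-10 severe) are a common convention assumed here, since the text names the bands without giving their boundaries.

```python
# Scoring sketch for HADS total distress and NPRS pain categories.
def hads_distress(items):
    """HADS: 14 items, each scored 0-3 (total 0-42); total >= 16 is
    taken as positive for psychological distress, per the study's cutoff."""
    assert len(items) == 14 and all(0 <= i <= 3 for i in items)
    total = sum(items)
    return total, total >= 16

def nprs_category(score):
    """Numeric pain rating scale (0-10) mapped to four categories.
    Cut points are an assumed convention, not stated in the text."""
    assert 0 <= score <= 10
    if score == 0:
        return "no pain"
    if score <= 3:
        return "mild"
    if score <= 6:
        return "moderate"
    return "severe"

print(hads_distress([2, 1, 3, 0, 2, 1, 2, 3, 1, 0, 2, 1, 2, 1]))  # (21, True)
print(nprs_category(8))                                           # 'severe'
```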
Data processing, analysis, interpretation, and presentation
The completed data were entered using Epi-Info 7 and then exported to SPSS version 20 statistical software for analysis. Descriptive statistics, bivariate analysis and multivariate logistic regression were used. Significance was declared at p-value < 0.05. The strength of association was described using the adjusted odds ratio (AOR) with its respective 95% CI. Results are presented in the form of tables and graphs using frequencies and summary statistics, such as means and percentages, to describe the study population in relation to the relevant variables, and are discussed against previous results.
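The sketch below shows how AORs and their 95% CIs fall out of a multivariate logistic regression fit. The data frame here is synthetic (random binary predictors with an arbitrary outcome model), standing in for the study's actual variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 407  # same size as the analyzed sample

# Hypothetical binary predictors mirroring the study's significant factors.
df = pd.DataFrame({name: rng.integers(0, 2, n) for name in
                   ["female", "poor_support", "chronic_illness",
                    "amputation", "severe_pain"]})

# Synthetic outcome loosely tied to two predictors (illustration only).
lin = -1.0 + 0.5 * df["female"] + 1.2 * df["poor_support"]
df["distress"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["female", "poor_support", "chronic_illness",
                        "amputation", "severe_pain"]])
res = sm.Logit(df["distress"], X).fit(disp=0)

aor = np.exp(res.params)       # exponentiated coefficients = adjusted ORs
ci = np.exp(res.conf_int())    # 95% CI transformed to the odds-ratio scale
print(pd.concat([aor, ci], axis=1))
```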
Socio-demographic characteristics
A total of 407 participants, with a response rate of 96.21%, were included in the study. Of these, 260 (63.9%) were male. The mean age of the participants was 37 years, with a standard deviation of ±13.5 years, ranging from 16 to 65 years; more than one fourth, 109 (26.8%), were in the age group of 26-35 years; 266 (65.4%) were Orthodox Christian; 227 (55.8%) were married; 257 (63.1%) reported having children, ranging in number from 1 to 11; and 162 (39.8%) were Amhara by ethnicity, followed by Oromo, who accounted for 133 (32.7%) of the participants (Table 1).
Clinical and substance-related factors of participants
Regarding clinical factors, 99 (24.3%) of the participants reported a comorbid medical illness; of these, about half (49; 49.5%) were hypertensive, followed by diabetes mellitus patients (34; 34.3%), while 18 (18.2%) had other cardiac problems.
Regarding current and lifetime substance use, 99 (24.3%) of the participants had used khat in their lifetime and 25 (6.1%) had used khat within the last 3 months; more than half, 222 (54.5%), of the respondents were lifetime alcohol users and 69 (17.0%) were current alcohol users; 43 (10.6%) of the participants had used tobacco products in their lifetime and 10 (2.5%) were current tobacco product users.
Prevalence of psychological distress
Our study showed that the prevalence of psychological distress was 144/407 (35.4%), with 95% CI (30.5, 40.0). The prevalence was higher among females, with 77/164 (47.0%) meeting the screening criteria for psychological distress compared to 67/243 (27.6%) of males.
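The reported interval is close to what a simple normal (Wald) approximation gives; the paper does not state which interval method was used, so the sketch below is only a consistency check:

```python
import math

cases, n = 144, 407
p = cases / n                              # 0.3538 -> 35.4%
se = math.sqrt(p * (1 - p) / n)            # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se
print(round(100 * p, 1), (round(100 * lo, 1), round(100 * hi, 1)))
# 35.4 (30.7, 40.0), close to the reported 95% CI of (30.5, 40.0)
```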
Factors associated with psychological distress
Bivariate analysis revealed that the independent variables sex, marital status, educational status, monthly income, social support, chronic medical illness, pain severity, family history of mental illness, lower extremity injury, development of complications and presence of amputation were candidate variables for multivariate analysis at p-value <0.2.
These factors were entered into multivariate logistic regression for further analysis in order to control for confounding effects. As a result, being female, poor social support, chronic medical illness, the presence of amputation and having severe pain were found to be statistically significantly associated with psychological distress at a p-value of less than 0.05. Females were 1.65 times more likely to develop psychological distress than males (AOR = 1.65, 95% CI: 1.89, 3.04); those who had poor social support were 3.51 times more likely to develop psychological distress than those who had good social support (AOR = 3.51, 95% CI: 1.39, 8.88); the odds of developing psychological distress among those with moderate social support were 2.75 times higher than among those with strong social support (AOR = 2.75, 95% CI: 1.13, 6.72); the odds of developing psychological distress among those who underwent amputation were 2.90 times higher than among those who did not (AOR = 2.90, 95% CI: 1.97, 8.73); those who had a chronic medical illness were 2.24 times more likely to develop psychological distress than those who did not (AOR = 2.24, 95% CI: 1.16, 4.32); and participants who had severe pain within 24 hours were 2.50 times more likely to develop psychological distress than participants who did not have pain (AOR = 2.50, 95% CI: 1.20, 5.18) (Table 3).
Discussion on the prevalence of psychological distress
The study revealed that the prevalence of psychological distress was 35.4%. This finding is higher than those of studies conducted in India (22%) [12] and the USA (19%) [20]. The reason for this difference might be the difference in sample size and in the study populations, which in the USA consisted only of patients with sport-related injuries receiving physical therapy.
On the contrary, the finding of this study on the prevalence of psychological distress was lower than in studies conducted in the USA (42% [10] and 45% [21]) and in the Philippines (43.9%) [15]. This difference might be attributed to the measurement tool (the Beck Depression Inventory (BDI) used in the US study [21]), the time since injury, the study participants (only patients with severe lower limb injury) and the study design (a prospective cohort follow-up study in the USA [10]), as well as the sample size (a large-scale study among general medical inpatients in the Philippines [15]).
Discussion on Factors associated with psychological distress
This study revealed that being female, having poor or moderate social support, having a chronic medical illness, amputation and having severe pain were statistically significant factors for psychological distress. Females were 1.65 times more likely to develop psychological distress than males. This finding is in line with a study conducted in China (AOR = 2.62) [22] and is supported by studies conducted in the US, the United Kingdom, Korea, Jordan, Hong Kong, Pakistan and India [23-29].
The odds of developing psychological distress among those who had poor social support were 3.51 times higher than among those who had strong social support. This may be because good social support is known to buffer the negative consequences of traumatic events [30,31]. Our result is consistent with other studies in the US and Pakistan [25,32].
Those who had chronic medical illnesses were 2.24 times more likely to develop psychological distress than those without chronic medical illnesses. This may be due to the reduction in functional independence and long-term survival associated with being a victim of physical injury with an additional comorbid systemic illness [33], which appears to undermine the victim's mental wellness and increases susceptibility to mental health problems. It may also be due to immune suppression and neurotransmitter disturbances, which are major causes of morbidity, including mental health problems [34,35].
The odds of developing psychological distress among those who underwent amputation were 2.90 times higher than among those who did not. This may be attributed to adjustment reactions to the new event, the loss of a sense of independence and having to rely on others for some of the most common everyday needs after the loss of one or more limbs [36]; victims may experience difficulties in carrying out daily activities as well as other tasks, which affects their recovery after orthopedic injury. This may result in an increased chance of physical and psychological disabilities, which are major causes of emotional distress [4,36,37]. It may also be due to the distortion of the patient's body image and decreased self-esteem after amputation, which sets off a series of emotional, perceptual and psychological reactions [38]. This was supported by a study conducted in Jordan [24].
Those who had severe pain within the last 24 hours were 2.50 times more likely to develop psychological distress than those who did not have pain within the last 24 hours. This may be due to increased discomfort, which leads to increased emotional distress. It may also be because pain has been shown to cause altered synaptic connectivity in the prefrontal cortex and hippocampus [39], as well as altered dopamine signaling from the ventral tegmental area [40]; these changes are known to trigger negative symptoms of depression [41]. This was supported by studies conducted in the UK and Korea [23,28,29].
Conclusion and recommendation
Our study found a high prevalence of psychological distress compared to the general population. Factors such as being female, having poor social support, having a chronic medical illness, the presence of amputation and having severe pain were significantly associated with psychological distress. Clinicians working at orthopedic clinics should give emphasis to patients' psychological state during evaluation, especially for females, those with a comorbid medical illness, those with poor social support, patients who have undergone amputation and those with severe pain. It would also be valuable for other researchers to conduct a prospective cohort study to investigate the temporal relationship between factors such as comorbid medical illness and amputation, and psychological distress.
Declarations
Ethical approval and consent to participate
Ethical clearance was obtained from both the University of Gondar and the Amanuel Mental Specialized Hospital Ethical Review Committees. Written informed consent was obtained from participants aged 18 years and above. Written assent was also obtained for those aged below 18 years from the caregivers accompanying them. Each respondent was informed about the objective of the study and that it would contribute necessary information for policymakers and other concerned bodies. Anyone who was not willing to participate in the study was not forced to participate. Respondents were also informed that all data obtained from them would be kept confidential by using codes instead of any personal identifiers and would be used only for the purpose of the study. For the participants who were found to be positive for psychological distress during the study, linkage to a nearby psychiatric clinic was arranged for further assessment of their condition; this was done for a total of 18 patients.
Consent for publication
Not applicable | 2020-06-04T09:06:13.455Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "2d732ff56888a4bcc22114b46391ebaa11a850af",
"oa_license": "CCBY",
"oa_url": "https://www.oatext.com/pdf/NNR-2-114.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "943082c19a440432ae71756564ab64aa9ccac703",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
9315360 | pes2o/s2orc | v3-fos-license | An islet in distress: β cell failure in type 2 diabetes
Abstract Over 200 million people worldwide suffer from diabetes, a disorder of glucose homeostasis. The majority of these individuals are diagnosed with type 2 diabetes. It has traditionally been thought that tissue resistance to the action of insulin is the primary defect in type 2 diabetes. However, recent longitudinal and genome‐wide association studies have shown that insulin resistance is more likely to be a precondition, and that the failure of the pancreatic β cell to meet the increased insulin requirements is the triggering factor in the development of type 2 diabetes. A major emphasis in diabetes research has therefore shifted to understanding the causes of β cell failure. Collectively, these studies have implicated a complex network of triggers, which activate intersecting execution pathways leading to β cell dysfunction and death. In the present review, we discuss these triggers (glucotoxicity, lipotoxicity, amyloid and cytokines) with respect to the pathways they activate (oxidative stress, inflammation and endoplasmic reticulum stress) and propose a model for understanding β cell failure in type 2 diabetes. (J Diabetes Invest, doi: 10.1111/j.2040‐1124.2010.00021.x, 2010)
INTRODUCTION
Glucose is the primary fuel source for the maintenance of energy homeostasis, and the production and uptake of glucose by various tissues is largely regulated by insulin. Disruption of insulin function, through loss of insulin production and/or through resistance to insulin action, leads to the development of all forms of diabetes. Over 200 million people worldwide suffer from some form of diabetes, and studies predict that this number will rise to above 350 million by 2030 1 . The diagnosis of diabetes is typically made by use of American Diabetes Association criteria 2 , which now include hemoglobin A1c as a measure (≥6.5%). By far, the majority of these individuals are diagnosed with type 2 diabetes, a disease that has traditionally been defined by tissue (liver, muscle, fat) resistance to insulin action. Contributing factors to insulin resistance include both lifestyle (obesity and inactivity) and rare genetic disorders (e.g. lipodystrophy) [3][4][5] . The increase in insulin resistance leads to an increased demand for insulin production, thereby resulting in hyperinsulinemia in these individuals. Importantly, the insulin secretory capacity appears to be a key factor in determining whether an individual shows normoglycemia or hyperglycemia. In this regard, pancreatic islet β cells are the only source for physiologically-relevant insulin in mammals, and in recent years β cells have become a major focus of diabetes research. Several animal models of obesity and insulin resistance show normal to near-normal glucose homeostasis, primarily because of islet hyperplasia and enhanced insulin production by β cells, a condition often referred to as adaptive islet hyperplasia 6 . A similar situation is believed to occur in human subjects with obesity and insulin resistance, and autopsy studies dating back as far as the 1930s showed that obese subjects without diabetes exhibit adaptive islet hyperplasia 7,8 .
Within the β cell community there is some controversy as to whether insulin resistance precedes hyperinsulinemia, or whether early hyperinsulinemia gives rise to initial insulin resistance; it appears, however, that the majority of publications in the field favor the former mechanism. Regardless of the early instigating mechanisms, only about 15-30% of obese individuals with insulin resistance actually carry the diagnosis of diabetes 9 . Evidence is accumulating that β cell dysfunction and the consequent inability to maintain appropriately elevated insulin secretion might be the precipitating factor in the development of diabetes in susceptible individuals. Recent clinical longitudinal studies have been particularly useful in establishing an important role for β cell dysfunction in type 2 diabetes. For example, a 5-year prospective study carried out in Pima Indians showed that an insulin secretory defect is a predictor for the transition from impaired glucose tolerance (or pre-diabetes) to frank diabetes 10 . More recently, a much larger scale study derived from the Whitehall II cohort of over 6000 non-diabetic subjects examined the development of diabetes during a follow-up period of 9.7 years 11 . In the group that eventually developed diabetes, insulin sensitivity began declining at a faster rate than in the control cohort 5 years before diagnosis, a decline that was accompanied by more rapid elevations in blood glucose after an oral glucose challenge. Notably, in the diabetic cohort, β cell function (as determined by the homeostasis model assessment, HOMA) showed a dramatic decline in the 2 years just before diabetes diagnosis. Collectively, these and other studies [12][13][14] suggest inverse trajectories of β cell function and glycemia in the years immediately preceding the diagnosis of diabetes, where the rapid increase in blood glucose levels coincides with a dramatic fall-off in β cell function (Figure 1).
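The HOMA indices referred to above are simple surrogate calculations derived from fasting glucose and insulin. A minimal sketch using the standard closed-form formulas is given below; the input values are hypothetical examples, and the cited studies may have used the updated computer-model version (HOMA2) rather than these approximations.

```python
# Surrogate indices from fasting measurements (standard HOMA formulas).
def homa_ir(glucose_mmol_l: float, insulin_uU_ml: float) -> float:
    """HOMA-IR: (fasting glucose [mmol/L] * fasting insulin [uU/mL]) / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def homa_beta(glucose_mmol_l: float, insulin_uU_ml: float) -> float:
    """HOMA-B (%), an index of beta cell function:
    20 * fasting insulin / (fasting glucose - 3.5)."""
    return 20 * insulin_uU_ml / (glucose_mmol_l - 3.5)

print(homa_ir(5.5, 10.0))    # ~2.44, for a hypothetical fasting sample
print(homa_beta(5.5, 10.0))  # 100.0 (%), for the same hypothetical sample
```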
Do these clinical studies suggest that β cell dysfunction is causative of type 2 diabetes, or merely coincidental? Compelling, causative evidence comes from recent genome-wide association studies, which have identified candidate genomic variants that contribute to the risk of type 2 diabetes. Interestingly, a number of these variants are located in genes that are known to regulate β cell function and/or development, including HNF4A, TCF7L2, IDE, EXT2, HHEX and ALX4 15-19. Thus, these studies support a central, and potentially causative, role of β cell dysfunction in type 2 diabetes. However, the pathogenesis of β cell dysfunction is only recently coming to light. In the sections that follow, we summarize what is known about the triggers (or mediators) of β cell dysfunction in type 2 diabetes, and discuss how these triggers subsequently influence convergent cellular execution pathways (mechanisms) leading to β cell failure.
TRIGGERS OF β CELL FAILURE
Although pre-diabetes and diabetes are frequently perceived as disorders of glucose homeostasis, they should instead be viewed as a continuum in a 'syndrome' in which a host of insulin-dependent metabolic actions is in disarray. Thus, subjects with diabetes and pre-diabetes show several metabolic and pathological abnormalities, including hyperglycemia, dyslipidemia, elevated serum cytokines and islet amyloid deposition, among others. Therefore, it is likely that the totality and/or cross-talk of these abnormalities, rather than any single one, contributes to the development of β cell dysfunction. We view these abnormalities as triggers for the activation of pathways leading to β cell demise.
Glucotoxicity
Hyperglycemia, as seen in established type 2 diabetes or as seen post-prandially in pre-diabetes, has long been felt to have a negative consequence on β cell function. The precise etiology of glucotoxicity, however, has been the subject of much debate, primarily because the models (in vitro vs in vivo, cell lines vs islets, human vs rodent etc.) used to study the phenomenon have varied greatly. The topic of glucotoxicity, therefore, has been the subject of recent reviews 20,21. Acutely, glucose has a stimulatory effect on transcription of the gene encoding preproinsulin (Ins) and on insulin release. Glucose enters the β cell via facilitated transport through the Glut2 transporter, after which it is converted to glucose 6-phosphate by the action of the high-Km kinase glucokinase. The flux through the glycolytic cascade, and the production of adenosine triphosphate (ATP) in this process, ultimately leads to membrane depolarization and insulin granule docking and release 22. Teleologically, it is understandable that repeated and prolonged exposure to hyperglycemia should lead to β cell degranulation and eventual exhaustion, but the mechanisms underlying this process are believed to be complex and not readily explicable. For example, the ultimate effect of hyperglycemia on β cell function might be related to both the level of glycemia as well as the duration of glycemic exposure. Early studies of prolonged hyperglycemia in vivo and in vitro showed clear reductions in Ins gene transcription, and eventual reduction in insulin secretion itself. These reductions are thought to be secondary to reductions in the transcription or activity of the β cell transcription factors Pdx1 and MafA 23-25. Reductions of several other β cell and islet transcription factors and proteins have been described in response to prolonged hyperglycemia, suggestive of a process of β cell 'dedifferentiation' or reversion to an embryological equivalent of a less glucose-responsive cell type 26,27. The direct effect of hyperglycemia on these altered gene expression patterns is supported by studies in which phlorizin treatment (which reduces glucose levels independent of insulin levels in animals) reverses or partially reverses the gene expression phenotype 28,29. Several mechanisms have been proposed to explain hyperglycemia-induced β cell dedifferentiation and dysfunction, but a major factor appears to be oxidative stress, as discussed later. Multiple pathways contribute to oxidative stress, including the polyol pathway, activation of advanced glycation end-product receptors, and mitochondrial dysfunction 30,31. Other pathways linked to hyperglycemia include endoplasmic reticulum (ER) stress and possibly hypoxia-induced stress 32,33.
Lipotoxicity
The term 'lipotoxicity' is often applied to the phenomenon in which elevated free fatty acid (FFA) levels in the setting of insulin resistance contribute to β cell dysfunction. In actuality, the effect of FFA on β cell function is much more complex, and includes both beneficial and detrimental effects 34,35. The concentration of FFA, the chronicity of exposure to elevated FFA and the coexistence of hyperglycemia all determine the extent to which FFA contribute to β cell function. Under physiological concentrations, FFA are crucial to the maintenance of glucose-stimulated insulin secretion (GSIS), and early studies showed that depletion of intra-islet FFA leads to impaired GSIS, which is restored on exogenous FFA administration 36. The mechanisms by which healthy concentrations of FFA promote GSIS have been studied extensively, and at least two distinct pathways have emerged. The first is through the FFA receptor 1 (or Gpr40) 37,38; Alquier et al. recently showed that knockout of Gpr40 led to impairments in glucose- and FFA-stimulated insulin secretion in islets without affecting intra-islet glucose or palmitate metabolism 39. The second pathway is through intracellular FFA metabolism (to generate lipid signaling molecules) and glycerolipid/FFA cycling 40. In the aggregate, these mechanisms are believed to maintain glucose-responsive insulin secretion under normal circumstances, and possibly contribute to the early hypersecretion of insulin in the initial stages of high-fat diet-induced obesity 41,42.
In contrast to the GSIS-promoting effect of FFA in the short term, chronic exposure of β cells to FFA appears to have the opposite effect. In several models in vitro and in vivo, exposure to FFA in the long term leads to impaired Ins gene transcription, impaired GSIS and eventual β cell apoptosis 43-46. Importantly, the deleterious effects of FFA in virtually all of these circumstances have been observed in the presence of elevated glucose concentrations 47-49, and hence the term 'glucolipotoxicity' is perhaps more appropriate in describing the phenomenon. The 'permissive' effect of glucose on FFA toxicity in the β cell has been suggested to be secondary to a partitioning effect on lipid metabolism, such that elevated glucose and FFA levels result in the accumulation of long chain acyl CoA esters in the cytosol, which are detrimental to β cell function 50. The nature of the FFA themselves also appears to be relevant to glucolipotoxicity, whereby saturated fatty acids (e.g. palmitic acid) confer the greatest toxicities and monounsaturated fatty acids (e.g. palmitoleic acid) might actually have a neutral or protective effect because they are more readily esterified into triglycerides 48,51,52.
Several mechanisms have been proposed to explain the chronic effects of FFA on GSIS and β cell apoptosis. Prolonged exposure to palmitic acid diminishes Ins gene transcription and GSIS in isolated rat islets, accompanied by attenuated binding of the β cell transcription factors Pdx1 and MafA to the Ins promoter 53,54. The underlying cause for the diminished activities of Pdx1 and MafA was shown in studies in vivo, in which islets from intralipid-infused Wistar rats showed a shift in Pdx1 localization from the nucleus (where it normally regulates gene transcription) to the cytosol 55. Unlike its effect on Pdx1, palmitic acid appears to diminish MafA transcription, leading to lower MafA protein levels 53. Other mechanisms of glucolipotoxicity include palmitic acid-induced activation of protein kinase C δ (a mediator of apoptosis) 56, palmitic acid-induced synthesis of ceramides (which inhibit the anti-apoptotic protein Bcl-2 and downregulate IRS-1/2 signaling) 52,57-59, FFA-induced upregulation of UCP2 (and subsequent reduction of glucose-stimulated ATP generation) 60-62, and activation of oxidative stress 58,63 and unfolded protein response 64 pathways.
Emerging data additionally implicate a possible role for cholesterol metabolism in β cell lipotoxicity. Oxidized low density lipoprotein particles appear to diminish Ins gene transcription and promote apoptosis in isolated β cells 65. Disruption of the ABCA1 reverse cholesterol transporter in mice results in defects in cholesterol efflux from the β cell and subsequent accumulation of intra-islet cholesterol; this accumulation leads to impaired GSIS and glucose intolerance 66. In this regard, recent studies by our group suggest that activation of ABCA1 in human islets by LXR agonists might be one approach to diminish islet cholesterol burden and improve GSIS 67.
Islet Amyloid Polypeptide
Islet amyloid polypeptide (IAPP), also known as amylin, is a small 37 amino acid peptide that is synthesized in the islet β cell and co-secreted with insulin 68-70. Although the physiological role of IAPP is unclear, its presence as 'amyloid' deposition within the islet was seen more frequently in pancreatic specimens from humans with type 2 diabetes compared with obese, non-diabetic control subjects 71-73. Species differences in IAPP are particularly significant in terms of the consequences of amyloid deposition in islets, such that the human, monkey, dog and cat orthologs possess amyloidogenic potential (i.e. the ability to oligomerize and form intracellular fibrils), whereas mouse and rat orthologs do not 74. Whether amyloid deposition is a cause or consequence of type 2 diabetes has been the subject of much controversy, but more recent studies of transgenic rodents harboring the human form of IAPP seem to strongly suggest a causal role for human IAPP in the development of islet dysfunction. Islet-specific expression of human IAPP in transgenic mice and rats leads to amyloid fibril deposition, β cell apoptosis and diabetes 75,76. Interestingly, pharmacological inhibition of fibril formation fails to prevent IAPP-induced β cell apoptosis, suggesting that IAPP oligomers, rather than mature fibrils, are the likely toxic species 77. Because IAPP is co-secreted with insulin, the insulin hypersecretory state of early insulin resistance is thought to predispose to IAPP hyperproduction and possibly intracellular accumulation 78. Intracellular accumulation of IAPP has been correlated with oxidative stress 79, Fas-associated death receptor signaling 80 and the unfolded protein response/ER stress 81-83.
Cytokines
Adipose tissue, which used to be thought of as 'passive' fat storage tissue, is now recognized as an 'active' endocrine organ whose secretions have profound effects on other tissues. Just as importantly, the nature of the adipose tissue (e.g. visceral vs subcutaneous) has profound implications for the types of factors secreted and their ultimate effects on glucose homeostasis (with visceral being more detrimental than subcutaneous) 84-86. The many bioactive cytokines (or adipocytokines) released by adipose tissue include leptin, adiponectin, resistin, tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6) and MCP-1 86-92. Obesity (with increases in visceral adipose tissue) is associated with lower secretory rates of beneficial adipocytokines (adiponectin) and higher secretory rates of leptin and pro-inflammatory adipocytokines (TNF-α, IL-6, MCP-1) 87,88,93-95. TNF-α signaling in the islet is particularly detrimental; TNF-α negatively regulates both IRS-2 function (through JNK-mediated IRS-2 Ser phosphorylation) and stability (through enhancement of IRS-2 degradation) in β cells 96,97. NF-κB, a major downstream mediator of the TNF-α response in β cells, induces proinflammatory responses and inducible nitric oxide synthase activation, both of which might trigger the unfolded protein response/ER stress 98. Recent studies suggest that another adipocytokine, leptin, might affect islet function in the setting of obesity. Islet β cells express the full-length leptin receptor ObR, which activates the JAK-STAT3 pathway in response to leptin binding 99. Leptin signaling inhibits GSIS in β cell lines and in normal mice 99-103, suggesting that leptin signaling might serve as a 'brake' for insulin release in normally functioning β cells. Interestingly, however, leptin signaling in the islet appears to be required for the adaptive islet hyperplasia seen in high-fat diet feeding 102. Thus, it appears that impaired leptin signaling in some states of obesity might be detrimental to islet function and might therefore contribute to glucose intolerance and diabetes.
IL-1β is another cytokine that has been shown to directly contribute to β cell dysfunction in type 2 diabetes. Recent clinical studies 104,105 show a positive effect of IL-1β receptor antagonists on glycated hemoglobin and β cell function in type 2 diabetes, with durable effects even after discontinuation. The source of IL-1β in type 2 diabetes has remained controversial, but could include production by macrophages locally infiltrating islets or adipose tissue, or possibly production by islets themselves 106,107.
MECHANISMS LEADING TO β CELL FAILURE
Whereas the triggers discussed earlier (glucose, lipid, IAPP and cytokines) can be viewed as distinct entities that variably exist in states ranging from insulin resistance to frank type 2 diabetes, the end result of these triggers is a set of convergent pathways that lead to β cell dysfunction and eventual death. An increase in apoptotic β cells is evident in pancreata of type 2 diabetic subjects, whereas numbers of replicating β cells are unchanged 7; this finding suggests that the net balance in type 2 diabetes favors β cell loss. The mechanisms by which the above-described triggers lead to initial β cell dysfunction, then eventual death, are discussed below.
Oxidative Stress
An abundance of evidence now suggests that chronic exposure of β cells to elevated glucose (glucotoxicity), and likely also FFA and IAPP, leads to the production of reactive oxygen species (ROS). The sources of ROS are numerous, and include oxidative phosphorylation (mitochondria), protein kinase C activation and sorbitol metabolism, among others (see reference 30 for a review). Ironically, β cells possess less anti-oxidative capacity compared with other highly oxidative cells, with diminished activities of protective enzymes including Cu/Zn-superoxide dismutase (SOD), Mn-SOD, catalase and glutathione peroxidase 108,109. A marker of oxidative stress, 8-hydroxy-2′-deoxyguanosine, is observed in islets of type 2 diabetic subjects 110, and is also seen in animal models of type 2 diabetes (e.g. the Goto-Kakizaki or GK rat) 111.
Several studies suggest that attenuation of oxidative stress might lead to recovery of β cell function. Oxidative stress can be prevented by treatment of islets with the antioxidant N-acetyl cysteine or by overexpression of glutathione peroxidase 112,113. Notably, a recent study by Robertson et al. showed that transgenic overexpression of glutathione peroxidase in islets of obese diabetic db/db mice led to restoration of islet function, glucose homeostasis and MafA nuclear localization 114. Similarly, reductions in oxidative stress might underlie the islet-protective effect of thiazolidinediones (PPAR-γ agonists) in humans and diabetic mouse models 115-117, although this effect might also involve reductions in ER stress pathways 118 (signaling a possible link between oxidative stress and ER stress 119).
Inflammation
The role of inflammation in the pathogenesis of islet dysfunction was thought to be largely confined to type 1 (autoimmune) diabetes. However, with the recognition that adipose tissue serves as a major source of cytokines and chemokines also comes the realization that inflammatory signaling pathways within the islet might contribute to β cell dysfunction. A large body of literature points to the role of the proinflammatory cytokines IL-1β, TNF-α, and interferon-γ (IFN-γ) in activating several signaling cascades, including NF-κB, mitogen-activated protein kinase (MAPK), and janus kinase/signal transducer and activator of transcription (JAK/STAT) 120. Another important cascade induced by cytokine signaling in the β cell is arachidonate metabolism. In response to cytokines, 12/15-lipoxygenase (12/15-LO) is strongly induced to cause the breakdown of arachidonic acid to highly active metabolites (e.g. 12-hydroxyeicosatetraenoic acid), which themselves are believed to lead to oxidative stress and mitochondrial dysfunction 121-125. Recent work by Nadler et al. showed that islets of 12/15-LO knockout mice are protected from the cytokine-induced deterioration that accompanies high-fat diet feeding 126, suggesting a potentially proximal role for arachidonate metabolism in the islet response to systemic cytokines.
Collectively, the multiple cascades induced by cytokines lead to further production of inflammatory cytokines and cell death signals, resulting in β cell dysfunction and ultimately death. Whereas in the case of type 1 diabetes the source of proinflammatory cytokines is thought to be primarily the immune system (activated T cells and macrophages), the scenario in type 2 diabetes is more complex. Certainly, as discussed earlier, visceral adipose tissue is thought to be a major source. However, a role for the immune system might well be possible. Macrophage infiltration into islets is increased in several type 2 diabetes animal models, such as high-fat fed C57BL/6 mice, the GK rat and the db/db mouse 127. Consistent with these animal model studies, macrophage number is also increased in islets of type 2 diabetic subjects compared with non-diabetic subjects 128. Several small clinical studies showed that administration of high doses of the anti-inflammatory drug salicylate improved glycemic control in diabetic subjects 129.
As discussed earlier, IL-1β is another candidate cytokine that is known to trigger the inflammatory cascade in islets. Mature IL-1β is produced through cleavage by caspase-1, which itself is activated by the NLRP3 inflammasome. The inflammasome is composed of the Nod-like receptor protein NLRP3, CARDINAL, ASC and caspase-1 130. In recent studies, it was shown that thioredoxin-interacting protein (TXNIP) interacts with NLRP3 and contributes to hyperglycemia-responsive IL-1β production 131. TXNIP binds to the redox domain of thioredoxin to block reductase activity, and is released from thioredoxin in response to oxidative stress. Interestingly, TXNIP transcription is increased by glucose stimulation in islets 132,133, suggesting that TXNIP might serve as a signaling molecule to link glucose-induced oxidative stress to inflammation.
Endoplasmic Reticulum Stress
The ER is a dynamically active organelle that plays a central role in the translation and proper folding of mRNA and their encoded proteins, respectively. The role of the ER is central to the function of the β cell, which relies heavily on this organelle to process proinsulin. In addition to its role in protein folding, the ER is also crucial for intracellular Ca2+ homeostasis and mobilization through the function of the ER-embedded sarco/endoplasmic reticulum Ca2+ ATPase (SERCA) 134. In the setting of adaptive islet hyperplasia, the role of the ER is especially crucial, as the increased demand for insulin production and release requires mobilization of chaperone proteins and SERCA activity. When insulin demand exceeds ER capacity, the consequent accumulation of misfolded proteins leads to the induction of a process known as the unfolded protein response (UPR). The UPR has two primary functions: first, to halt protein synthesis to mitigate accumulation of unfolded proteins and, second, to generate chaperone proteins to aid in the folding of intraluminal proteins 135,136. Three major transmembrane proteins serve as the transducers of the UPR: (i) inositol requiring enzyme 1 (IRE1); (ii) activating transcription factor 6 (ATF6); and (iii) protein kinase-like endoplasmic reticulum kinase (PERK). Activation of the UPR causes these three proteins to dissociate from the protein BiP/Grp78, which is then available to chaperone further protein folding 137,138. In cases of prolonged stress (e.g. unmitigated insulin resistance), the UPR shifts from this 'survival' mode to an apoptotic mode (ER stress), which correlates with expression of the protein CHOP (CCAAT/enhancer-binding protein homologous protein) 139-141. β cell ER stress has been observed in several animal models. Akita mice, which bear the C96Y proinsulin mutation, show misfolded proinsulin accumulation in the ER and develop islet failure and diabetes 142,143; deletion of the CHOP protein in heterozygous Akita mice results in delayed development of diabetes 144. Islets from 10- to 12-week-old obese db/db mice show evidence of ER stress, including activation of CHOP, and deletion of CHOP on this background results in massive islet compensation and significantly reduced hyperglycemia 119. From a clinical perspective, type 2 diabetic subjects showed greater CHOP expression in islets compared with non-diabetic controls 83. Taken together, these findings suggest that activation of programmed cell death pathways through unmitigated ER stress might lead to islet loss during the transition from insulin resistance to frank type 2 diabetes.
Islet dysfunction and death, which have traditionally been viewed as hallmarks of type 1 diabetes, are now gaining increasing attention in the pathogenesis of type 2 diabetes. The present discussion of both the triggers and mechanisms of islet dysfunction and death is admittedly incomplete, as the diversity of signaling pathways is as great as the genotypic heterogeneity of type 2 diabetes itself. Importantly, also, direct demonstration that any of these pathways plays a direct role in the dysfunction of human β cells is largely lacking. Thus, much of our knowledge of islet dysfunction must come from rodent data. Nonetheless, the model shown in Figure 2 might serve as a framework for understanding the pathways that ultimately lead to the demise of the β cell in type 2 diabetes. We propose that specific mediators (glucotoxicity, lipotoxicity, IAPP and cytokines) serve as triggers for multiple different, often intercommunicating, pathways within the islet (oxidative stress, ER stress and inflammatory stress). Although several additional pathways not described in detail in the present review (e.g. Fas ligand signaling 145, mitochondrial dysfunction 146-149, defective IRS-2 signaling 150 and epigenetic alterations 151) likely also contribute to β cell dysfunction, what remains to be determined from a therapeutic perspective is whether any one pathway is more relevant than another at a given timepoint in the progression of disease, or in a given individual overall. In this regard, ongoing genomic and epigenomic profiling studies might eventually allow for correlation to specific pathways, and perhaps the eventual development of directed individualized therapies.

Figure 2 | Triggers of β cell dysfunction impinge on intercommunicating pathways. β cell dysfunction is depicted as emanating from specific extracellular (glucotoxicity, cytokines and lipotoxicity) and intracellular (IAPP) signals, which then activate an intercommunicating network of pathways (oxidative stress, ER stress and inflammatory stress) leading to β cell dysfunction and demise. The figure is intended to be descriptive of the events observed in models in vitro and in vivo, and is not intended to suggest that those mechanisms depicted are the only mechanisms that occur. FFA, free fatty acid. | 2016-05-04T20:20:58.661Z | 2010-03-25T00:00:00.000 | {
"year": 2010,
"sha1": "8e5feb7bebaf7dc0a17a2ae635750a0b4c6e545f",
"oa_license": null,
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.2040-1124.2010.00021.x",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "8e5feb7bebaf7dc0a17a2ae635750a0b4c6e545f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59543653 | pes2o/s2orc | v3-fos-license | Personalisation services and care: The case of England
Personalization services are developing in England as a social policy response to user demands for more tailored, effective and flexible forms of health and social care support. This process is being implemented under the personalization which is also seen as a vehicle for promoting service user rights through increasing participation, empowerment and control while also promoting self-surveillance by having users manage the costs of their health and social care. There has been an accelerating interest in the implementation of personalisation policies relying upon a relentless political campaign to legitimise an enforced obligation to care, i.e., UK Prime Minister Cameron’s notion of a “Big Society”. The use of personalisation that focuses on self-assessment and inspection, can, in this policy and austere climate, become a means of self-surveillance. It is argued that Michel Foucault offers a set of strategies [1] for understanding how the discourses on personalisation construct service user’s experiences and their identities, as constructed subjects and objects of social policy and managerial knowledge. Correspondence to: Jason L Powell, Ph.D, Department of Social and Political Science, University of Chester, UK, E-mail: j.powell@chester.ac.uk Received: May 10, 2016; Accepted: May 23, 2016; Published: May 25, 2016 Introduction The thesis interrogated in this article is that an increasing interest in personalisation and care is central to understanding recent personal care policy in England. It will be argued that personalisation legitimates practice in which the state monitors and co-ordinates but does not intervene. This has led to a social situation that has transformed social care practice of its traditional rationale as ‘caregiver’. One consequence of these policies has been to transfer the financial and emotional responsibilities for care to service users and informal carers under the aegis of ‘personalisation’ [2]. The price to be paid, however, is that the relationship between the State and older people has been reduced to one of surveillance and the enforcement of a notion of what community obligation might entail. As with other forms of implied control, generic methods of surveillance are presented as ‘concern’ models [3]. This act of observation confers a uniformity that emphasises the ‘protective’ role of the professional rather than the substantive requirements of service users. The rise of personalisation and care Personalization in social care is linked to both the principle and process that every adult who receives support, whether provided by statutory services or funded by them, will have choice and control over the shape of that support in all care settings. This adult social care policy agenda is firmly focused on the development of Personalization of support. Powell and Chamberlain [2] state that this has been repeatedly stated in key policy documents including Improving the Life Chances of Disabled People (published by former UK Prime Minister’s Tony Blair’s Strategy Unit in 2005), and the British 2006 Community Services White Paper, Our Health, Our Care, Our Say, which announced the piloting of Personal Budgets [4,5]. Personalization had its early beginnings in Direct Payments (introduced in 1997 when New Labour came to power), whereby people who are eligible for social care can choose to receive ‘cash for care’ in lieu of services [2]. 
Introduction
The thesis interrogated in this article is that an increasing interest in personalisation and care is central to understanding recent personal care policy in England. It will be argued that personalisation legitimates practice in which the state monitors and co-ordinates but does not intervene. This has led to a social situation that has stripped social care practice of its traditional rationale as 'caregiver'. One consequence of these policies has been to transfer the financial and emotional responsibilities for care to service users and informal carers under the aegis of 'personalisation' [2]. The price to be paid, however, is that the relationship between the State and older people has been reduced to one of surveillance and the enforcement of a notion of what community obligation might entail. As with other forms of implied control, generic methods of surveillance are presented as 'concern' models [3]. This act of observation confers a uniformity that emphasises the 'protective' role of the professional rather than the substantive requirements of service users.
The rise of personalisation and care
Personalization in social care is linked to both the principle and the process that every adult who receives support, whether provided by statutory services or funded by them, will have choice and control over the shape of that support in all care settings. This adult social care policy agenda is firmly focused on the development of personalization of support. Powell and Chamberlain [2] note that this commitment has been repeatedly stated in key policy documents, including Improving the Life Chances of Disabled People (published by former UK Prime Minister Tony Blair's Strategy Unit in 2005) and the British 2006 Community Services White Paper, Our Health, Our Care, Our Say, which announced the piloting of Personal Budgets [4,5]. Personalization had its early beginnings in Direct Payments (introduced in 1997 when New Labour came to power), whereby people who are eligible for social care can choose to receive 'cash for care' in lieu of services [2].
Despite repeated efforts to encourage take-up, and extension of the legislation to include further groups of people within eligibility, direct payment expenditure still accounts for only 1% of local authority spending on social care. Personal Budgets are being piloted across English localities. Personal Budgets bring together a range of different funding streams, in addition to social care expenditure, to support independent living. The model for personal budgets was largely derived from work developed by In Control, which instigated self-directed support for people with learning disabilities and is engaged in supporting personalization developments in more than 90 local authorities [6]. Personal Budgets are central to the aim of 'modernising' social care policy and practice in England. They build on the experiences of direct payments and In Control and are intended to offer new opportunities for personalised social care [7]. The overall aim is for social care service users to have control over how money allocated to their care is spent. Personalization includes within its remit direct payments, Personal Budgets, user-led services and self-directed support. Self-assessment is a cornerstone of personalization: it gives service users the opportunity to assess their own care and support needs and decide how their Personal Budgets are spent, a process that is transforming social care. At the same time, the Cameron-Clegg coalition government of 2010-2015 spoke of personalization in budget devolvement as the most important issue in social care, a position continued under the subsequent Cameron government, which at the time of writing was extending personal budgets to pregnant women.
Indeed, public services in communities also face new demand-side challenges in a global economic recession. At the same time, individuals and populations in western culture have expectations that the State will meet their health and care needs by providing resources and services for support. These increased expectations are strongly felt in public services and challenge the traditional relationship between the State and vulnerable groups in modern societies, such as older people, the physically, mentally and intellectually challenged, and people who are frail and sick [7]. Significantly, the personalization agenda dissolved all these traditional user groups, and their corresponding local authority specialist provider structure, into a single entity of 'adult social care'.
The traditional focus for social care has been the role of the state and its effectiveness in the redistribution of wealth and the promotion of social justice for individuals and groups [8]. However, the latter part of the twentieth century and the first part of the twenty-first century have seen a re-casting of that relationship away from state-directed resource allocation to user-controlled support, with the UK borrowing from North American consumer-led schemes such as "Cash and Counseling". Consequently, personalization and consumer-led support have become entrenched as a new language of responsibility in western culture regarding social welfare [7], prompting new debates about how best to achieve the balance between civil liberties and self-constraint. Put simply, personalization, using the language of sustainability (namely the effective use of resources, empowerment, participation, control, choice and human rights [9]), re-casts the focus for health and social support onto the individual and away from the State. Users of welfare services are now reinvented as welfare citizens with responsibility for meeting their own needs from a 'personalized' individual budget, while parallel processes of risk management and safeguarding protect the state from unnecessary exposure [6]. Using the UK as a case study, this paper will shed light on wider contemporary trends in social policy in general, and personal support in particular, in western society.
But is this too simplistic a conceptualization? Why and how is personalization relevant to social policy and modern society? How is it researched? How is personalization reconciled with a formidable structural climate of decreasing public resources and the globalization of health and social care provision? This is not just a global economic recession but one that affects all nation states. Many of these questions can be connected to why personalization services are needed, what is provided and how it is coordinated. The personalization agenda offers an opportunity to make social care (and other services) more responsive and flexible, so that it actually does what people who use budgets and services want and need, rather than being constrained by rigid task and time specifications [7].
Personalization is inextricably linked to the principle that every person who receives support, whether provided by statutory services or funded by themselves, will have choice and control over the shape of that support in all care settings (Individual Budgets Evaluation Report [IBSEN report] [10]). Carr [11] suggests its overall aim is for social care service users to have control over how money allocated to their care is spent. It includes within its remit direct payments, individual budgets, personal budgets, user-led services and self-directed support [10]. Self-assessment is the cornerstone of personalization. It gives service users the opportunity to assess their own care and support needs and decide how their individual budgets are spent, while at the same time providing the dynamic for transforming social care [11]. In circumstances where the service user has limited capacity either to engage in self-assessment or to direct their support, a range of possibilities arises, such as family and friends, community-based organizations, community-based advocacy groups, brokers and agency staff [12]. These, in turn, highlight a set of relationships compatible with the new UK administration's focus on the 'Big Society'. However, it is also prudent to note the persistence of a moral undertone, as people with substance and alcohol issues tend to be excluded from using individual budgets, as are those leaving custody.
In order to explore the conceptual, policy and research literature on personalization, this paper attempts to set out in more detail what personalization is, what it will mean and how it may work, with the aim of exploring to what extent the objectives are likely to be realised. It considers the opportunities these changes are presenting to service users and illuminates the key research findings from the IBSEN report [10], which provides a series of research benchmarks to measure how pilots of personalization and individual budgets are being experienced.
A word of caution, however: overall, it is fair to say that the evidence base in relation to the critical success factors of personalization is extremely scarce [10,11]. This also means that it is very difficult to bring evidence together in any cumulative sense to gain an impression of the overall or aggregate impact of personalization. A key point to state is that the available literature concerns what the implications would be rather than what the implications evidentially are. Samuel [13] makes the cogent point that there has been considerable political enthusiasm for individual budgets from both the New Labour and Conservative parties; however, such enthusiasm has run ahead of the evidence, with government adopting a whole new personalization approach to social care policy while investing at least £500,000,000 in making it happen before even its own research findings were available to offer an adequate evidence base [13]. Hence, greater methodological interrogation of experiences is required in tapping the narrative and experiential contours of personalization and Individual Budgets (IBs). There have been scarce longitudinal research designs [10], in which interventions and their beneficial or detrimental effects on IBs can be studied over time [6,11], or evaluation designs, for example where ostensibly similar interventions or the work of comparable agencies are undertaken in different settings, as the process is only starting to unfold [10]. Nevertheless, it is easy to see the attractions of personalization in policy terms as governments look to distance themselves from decisions over the shape of welfare, how it should be delivered, who delivers it and at what quality.
'Taking Aim' at personalization
In the UK, the Brown administration (2007-2010) identified personalization as a mechanism to promote individual rights and as a vehicle to transform the shape of adult health and social care services. Following the principle that the relationship between the service user and the State is one where citizens are encouraged and enabled to take control of their needs, the service user has a budget through which they can purchase goods and services to meet a range of self-assessed needs in ways they choose [14]. In the process, social care will transform from a system where people have had to accept what is offered and professionally driven definitions of need, to one where people have greater control not only over the type of support offered, but also over how and when it is offered, how it is paid for and how it helps them achieve the outcomes that are important to them [15].
The effect of service users participating to meet their own needs will be the transformation of social care. Indeed, Leadbeater [16] suggests that in order to understand personalization we must locate it in its broad political context of 'participation', as service users become actively involved in selecting and shaping the services they receive. According to Carr [11], personalization has the potential to reorganize the way we create public goods and deliver public services. Leadbeater [16], in a report for the democracy think tank Demos, suggests that personalization, by engaging the tradition of participation, makes the connection between the individual and the collective, connecting the public and the private spheres of life by allowing users a more direct, informed and creative say in 'rewriting the script' by which the services they use are designed, planned, delivered and evaluated. Leadbeater [16] identifies a number of over-arching principles related to personalization that link to sustainability, and in particular to the level to which the state impinges on individual autonomy. Furthermore, he raises the cogent point that service users should be supported and enabled by professionals rather than be dependent on their judgements. They should be able to question, challenge and deliberate, while also making suggestions about, and demands for, more appropriate forms of support. Nor are users merely consumers choosing between different packages offered to them, a point that exposes the paradox between discourses of consumerism and participation at the heart of the policy. Rather, service users should be intimately involved in shaping and 'coproducing' the service they want. The question this raises is what this actually means. The answer is fivefold: (i) finding new collaborative ways of working and developing local partnerships, which (co)produce a range of services for people to choose from and opportunities for social inclusion; (ii) tailoring support to people's individual needs; (iii) recognising and supporting carers in their role, while enabling them to maintain a life beyond their caring responsibilities [17]; (iv) access to universal community services and resources, a 'total system' response; and (v) early intervention and prevention so that people are supported early on and in a way that is right for them.
It will be argued that these social policy initiatives have a number of common threads, which establish a shift in services away from care and support and toward the self-surveillance of those being cared for. The form that this shift has taken varies depending upon the site of interaction and the subsequent power relations between professional workers and service users. For mental health services, surveillance is directly aimed at the nominated 'consumer' or 'patient'.
Increased surveillance is often presented in social policy as a tactical response to crises at the margins of personalisation policy, the accidental accretion of responses to unintended consequences. The argument pursued here, however, will suggest that increased surveillance is part of a strategic agenda concerning wider questions of morality and control. It is not that personalisation has raised awareness of the fragmented variants of social care, but that personalisation gives meaning to care; before its advent, technologies such as 'care assessment' were the welfare equivalent of a solution looking for a problem. Personalisation, in particular, fills a vacuum at the centre of social care policy, giving it an ideological legitimisation function it had previously not had: a policy flag for Cameron to hide behind in terms of ideology and cuts in public services.
Self-surveillance, personalisation and care
This article will explore personal care issues in a number of ways. First, the methodological 'box of tools' drawn from the work of Michel Foucault [1] will be used to expand upon discontinuities between personalisation policy and its consequences. Two themes will then be expanded: first, questions of morality, to highlight change and the social policy technology available to execute it, namely care management; second, the relationship between overt concerns and covert consequences will be analysed in order to examine how benevolent intentions, without critical analysis, can result in negative outcomes for the recipients of state intervention.
Foucault's main interest is in the ways in which individuals are constructed as social subjects, knowable through disciplines and discourses. The aim of Foucault's work has been to 'create a history of the different modes by which, in our culture, human beings are made subjects' [18]. In Madness and Civilisation [19], Foucault traces changes in the ways in which physical and mental illness was spoken about. Foucault employs a distinctive methodology for these studies, archaeology, which aims to provide a 'history of statements that claim the status of truth' [20]. Foucault's later work, Discipline and Punish, focuses on the techniques of power that operate within an institution and which simultaneously create 'a whole domain of knowledge and type of power' [1]. This work is characterised as genealogy and sets out to examine the 'political regime of the production of truth' [20]. Both archaeology and genealogy are concerned with the limits and conditions of discourses, but the latter takes into account political and economic concerns relevant to personalisation policy.
Indeed, the work of Foucault has engendered an awareness that modern institutions operate according to logics that are often at considerable variance with the humanist visions embedded in policy analysis [21]. In other words, the overt meanings given to a certain policy or activity may not correspond to their consequences. Whether these outcomes are intended or accidental was less important to Foucault than the analysis of power. As Smart [22] points out, Foucauldian analysis asks of power, first, 'how is it exercised; by what means?' and, second, 'what are the effects of the exercise of power?' Within those strategies, investigation would need to be centred on the mechanisms, the 'technologies' employed, and the consequences of any social momentum for change.
An example of the discordance between social policy, the philosophy that overtly drove a certain initiative, and its effects comes from Foucault's analysis of utilitarianism. Indeed, a pervasive theme of Foucault's [1] work is the way in which the panopticon technique 'would make it possible for a single gaze to see everything perfectly' [1]. Foucault describes how panopticism (based on the design of Jeremy Bentham) becomes a process whereby certain mechanisms permeate social systems beyond actual, physical institutions. Techniques are thus 'broken down into flexible methods of control, which may be transferred and adapted ... (as) ... centres of observation disseminated throughout society' [1].
The mechanisms used to extend the reach of centres of power will vary depending upon the ground upon which they are required to operate. Their function is to evoke and sustain moral interpretations of particular social behaviours through intermittent observation, such that their objects come to internalise their own surveillance.
One important facet of Foucauldian analysis is the author's preoccupation with historical periods in which conventional values are in flux, as in the case of madness, discipline and sexuality [1,19,22], and with how the emergence of professional discourses interpenetrates the evolution of new commonsensical understandings of 'normality'. There are, in other words, periods in which particular sites of control, for example institutional care, family relations and intimate relationships, are subject to novel mechanisms and technologies in order to facilitate the transition from one state of affairs to another. These technologies may be overtly applied during periods of flux until new moral relations have been accepted, and, during the process of their application, they both modify and are modified by the professional groupings charged with their implementation. Whilst Foucault does not impose any sense of causality on the development of such discourses, it is possible to discern the need for both an explicit moral reason and a method of operation, shaped to whatever new contexts are appropriate. Government morality would act as permission for activities such as surveillance. A professional technology would provide a means of implementation depending upon the site (for example, in institutions of the state) of the targeted activity.
As Rouse [23] has pointed out, an examination of the relationship between power and knowledge is central to interpreting and understanding social phenomena through a Foucauldian gaze. This is particularly apposite where there is an attempt to disaggregate a stated policy and its mechanisms in order to discover what is thereby hidden or obscured. One of the consequences of power and knowledge is that, rather than focusing on the explicit use of a particular technique of knowledge by someone in power to cause a certain effect, attention is drawn to the reflexive relationship between both elements. There is a concern, then, 'with the epistemic context within which those bodies of knowledge become intelligible and authoritative. How statements were organised thematically, which of those statements counted as serious, who was empowered to speak seriously, and what questions and procedures were relevant to assess the credibility of those statements that were taken seriously. ... The types of objects in their domains were not already demarcated, but came into existence only contemporaneous with the discursive formations that made it possible to talk about them' [23]. Returning to an earlier theme, the process by which a particular domain is established may not be the same as the reasoning given to explain what events take place and their effects. Indeed, as his understanding of this relationship developed, Foucault [18] indicated that 'power is tolerable only on condition that it mask a substantial part of itself. Its success is proportional to its ability to hide its own mechanisms'.
Furthermore, in personalisation policy there is an open intention to 'empower' by allowing older people to live in their own communities, yet the monitoring of support may have become a means of policing informal care and, through that, the conduct of older people. Throughout the past 15 years, community care policy has drawn upon a number of sources of flux to achieve momentum. These have included a concern over familial obligation to care; changing social work practices, from a traditional providing role to that of managing and purchasing services; and movements from directed care to personal budgets. However, the 'no-cost' option of a social policy reliant on personal budgets comes to look increasingly fragile. It is therefore in need of a shift to the moral ground of obligation and personalisation, ground into which the Cameron administration continually attempts to tap through the 'Big Society' [24].
However, another factor complicating an 'obligation'-based personalisation and social policy arises from the idea that informal care is at root a voluntary activity. It is not, therefore, bound by any formal code of social practice, as would be the case for paid workers. Hence, there is no formal reason for intervention should a policy of informal care meet resistance. Thus, a social policy exists that combines fiscal policy and morality and makes informal care a legitimate responsibility. The threat of 'personalisation' provides the excuse for this invasion of the private sphere, a shift from 'consent' to 'coercion' and from 'support' to 'surveillance'. However, to be fully effective, a technology would also need to be found that would implement the logic of that policy.
The technologies of care management to facilitate personalisation?
The core technology by which community care can be implemented exists in the role of care management. It can be conceptualized as the co-ordination of services into a 'package of care' in order to maintain 'clients' in community settings. The managerial technology is indirect in three ways. First, the pivotal function of the care manager is seen as being the management of a package that draws on services made available through a 'mixed economy of welfare'. Second, there is a shift toward supporting informal carers rather than directly working with the nominated client. Third, there is the emphasis on assessment and monitoring of provision that is supplied by service providers.
This quality of indirectness 'makes sense' as a means of managing a 'mixed economy of welfare', which requires that those who purchase care, or their agents, are separated from those who provide it. With the intensification of marketization, this separation limits the development of cartels and allows purchasers to choose between competing alternatives, thus placing them in the role of 'honest brokers' who assess need, supply information on the alternatives and then co-ordinate purchases. It does not, however, make sense in terms of direct care, intervention or interaction between older people and social workers, other than as a sort of 'professional travel agency' advising clientele on the options, best deals and cash options. Care assessment and monitoring have now become an integral feature of social work practice and reflect a trend toward justifying welfare activities in terms of quality assurance [24]. By replacing direct intervention with management systems, the technology fails to provide guiding theoretical principles for interpreting and acting on conflict in social relationships. 'Techniques of resistance' [1] by older people to managerial techniques were found by Powell [25], who claimed older people 'were particularly adamant that they did not want to be 'cases' and no-one needed to 'manage' their lives'.
However, despite this resistance, the introduction of the 'mixed economy of welfare' in the U.K. has consequences for the surveillance of older people. The mixed economy reflects political rationalities and technologies of government. Welfare pluralism is used to mobilise the use of resources -and thereby embody power relations -and to supply an economic vocabulary to legitimise the allocation of those resources and the associated schemes of inspection and surveillance of services for older people. Chua [26] notes that 'social actors', such as care managers, try to translate values into their own terms, to provide standards for their own actions and, in so doing, facilitate 'rule at a distance'. A mixed economy of welfare is a means of doing this; it fabricates representations of 'empowerment' for older people. As Chua [26] points out, not dissimilar to the social construction of health care accounting software, services become devices which transform real relations. In a sense, 'older people' become 'consumers', 'social workers' become 'managers', 'social service departments' become 'purchasers', all crystallised by the formation of community care policies [27]. In this case, services provide schemas for the 'conduct of conduct' [28] dominated by power/knowledge and characterised by the discretionary autonomy of care managers. It is within this disciplinary matrix of policy, practice and autonomy that power operates on older people, ultimately reinforcing the fragmentation that surveillance engenders in the psyches of older people at the centre of the professionals' gaze. This form of surveillance 'clearly indicates the appearance of a new modality of power in which each individual receives as his status his own individuality, and in which he is linked by his status to …'. Hence, the older client is marked out for perpetual surveillance throughout the remainder of his or her care service. Carers and professionals also come under scrutiny as part of the continuous review of the client's needs through monitoring of personalised budgets. All are caught by a gaze that is 'always receptive' [1] to older people, and this provides a further rationale for surveillance of the 'elderly population'.
The panoptic culture
Why should personalisation, which is arguably empty of interpersonal meaning, be 'legitimised' by the accretion of surveillance? The answer lies in the fact that it was not created as a philanthropic metaphor but as a mechanism for engineering the cost and structure of social welfare. Personalisation has been part of a strategy to reduce the costs of state welfare by adopting market principles [29]. Attempts at cost reduction have taken two forms. First, there is the active encouragement of a private welfare economy in order to depress wages and related costs. Second, there is a hollowing out of the local state, through mechanisms such as care management and inspection, so that the primary role of social service departments has become that of monitoring and supporting direct care rather than provision itself. These trends may not simply reflect a flow-through from market ideology but also wider pressure on the nation state as a consequence of globalization [24].
Awareness that the welfare state can be understood not so much as a series of social service institutions and neo-liberal responses to social problems, but as an instrument of wider state power and governance, is not new [30,31]. What is perhaps striking is the extent to which the techniques used by welfare workers have been drained of creative and radical meaning concerning resistance with marginalised groups, and have drawn workers into the day-to-day management of scarce resources in personal budgets [32].
Until the advent of a panoptic culture, work with personal budgets for older people lacked a convincing unifying metaphor for its activity. With its instigation, a previously inchoate accretion of initiatives around 'community care' achieves harmony and force. Once the vigilance advocated by the Department of Health's guidelines on personalisation is added to the indirect functioning of care management technology and the moral backdrop of obligation, the discourse of community care acquires a coherence of power/knowledge [33]. It is, however, a power/knowledge to be deployed against older people's voices rather than for their emancipation.
Indeed, once older people are established as a socially significant object of power/knowledge, managerial techniques deem it necessary to find the 'truth' about their care needs; to analyse, describe and understand. The move towards personalisation takes place within a wider process in which attention is being directed towards individual bodies and the control of 'ageing populations'.
Conclusion
This paper has explored a number of factors in personalisation and care. It has been argued that the delivery of personalisation has lent coherence to a number of nascent tendencies in this policy that reinforce each other. These tendencies include an increased morality toward informal care and a move toward indirect monitoring of the locative sites of such care. The development of a surveillance culture helps stabilise community care policy at a time of considerable underlying uncertainty. Such uncertainty has arisen from the changing structure of informal care and of specific services.
The neo-liberal strategy, to socialise care, has become an extension of the techniques of observation, monitoring and control into community settings. A new system for the surveillance of informal carers has replaced the idealistic dream of freedom with an extension of constraint [34].
Indeed, the shift in the focus of assessment contains a number of alignments. First, assessment decisions seem to be taking place within an existing discourse on abuse rather than user need. Whilst 'need' is given recognition, the dominant decisions to be made would seem to concern risk of personalisation. Second, the focus of monitoring seems to have moved from the performance of elements of the purchased package of care to the 'conduct of conduct' [28] of older people and informal carers. Third, parallels with child protection are clearly alluded to through at-risk registers and the value of records as evidence.
Following Foucault's [1,28] analysis of the relationship between power and knowledge, this change can be seen as the development of a matrix in which, to speak seriously about the support of informal care, discourses of surveillance and abuse would have to be employed. Personalisation serves to reconfigure power relations during a period of flux and 'makes sense' of a previously disjointed care policy formulation. How it plays out over the next five years, only time will tell. | 2019-02-03T14:13:34.336Z | 2016-05-01T00:00:00.000 | {
"year": 2016,
"sha1": "7da57293887b520caf4d42714ce0400e5f2ca6b0",
"oa_license": "CCBY",
"oa_url": "https://www.oatext.com/pdf/NPC-1-118.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f400c5c3c70abc38492764b6634935425b47ca59",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
49564799 | pes2o/s2orc | v3-fos-license | Educating speech-language pathologists working in early intervention on environmental health
Background The goals of this study were (1) to determine early intervention (EI) Speech-Language Pathologists’ (SLPs) level of training and knowledge on environmental toxicants and their effect on infant and child development; and (2) to examine the effectiveness of a continuing education (CE) event designed to enhance the knowledge of EI SLPs on environmental toxicants and their effects on child development. Methods A survey was launched via Qualtrics and posted on the American Speech-Language Hearing Association’s Early Intervention Community page to assess environmental health knowledge of SLPs. Results from this survey were used to create an environmental health CE event targeted towards EI SLPs. Attendees were given a pre- and post-test to assess the effectiveness of our program. Results One hundred and fifty-eight participants completed the online survey and a majority (61%, n = 97) of participants reported some level of dissatisfaction with their previous training in regards to environmental exposures. Fifty-six percent (n = 89) of the participants also reported feeling unprepared to be a health advocate regarding environmental exposure concerns within their community. Forty-eight people (26 SLPs and 22 SLP master’s students) attended the CE event. Paired t-tests revealed significant improvements from the pre- to the post- test results among all attendees. Conclusions These findings suggest that SLPs who work in EI feel undertrained and unprepared to advocate for environmental health to the families they serve. This study reveals that CE is one way by which to increase the knowledge base of SLPs on environmental health. Electronic supplementary material The online version of this article (10.1186/s12909-018-1266-3) contains supplementary material, which is available to authorized users.
Background
Environmental health factors play a large role in children's health due in part to the fact that children are more susceptible to these factors than adults [1]. Childhood is a period of rapid growth and development characterized by changes in organ system functioning, metabolic capabilities, physical size, and child behaviors (e.g., hand-to-mouth and hand-to-toy behaviors), which can all be modified by toxicant exposures. Put simply, children are at increased risk because they breathe more air, drink more water, eat more food per kilogram than adults, and play closer to the ground where many contaminants are found. Childhood exposures begin prenatally and extend through early adolescence. Different developmental phases, known as "windows of vulnerability," can result in different susceptibilities to the effects of toxicants or manufactured environmental toxicants (METs) [2,3]. Recently, the public media and the scientific community alike have begun to examine the connections between exposures to METs in relation to the unexplained rise in complex disorders with multifactorial origins, such as Autism Spectrum Disorders, developmental delays, attention deficit hyperactivity disorders (ADHD), asthma, learning disabilities, cancer, endocrine pathology, and autoimmune disorders [4,5].
The economic and neurological consequences of environmental exposures are high. The annual costs of environmentally attributable diseases such as lead poisoning, asthma, childhood cancer, and neurobehavioral disorders are about $54.9 billion across American children [5]. These financial costs fall not only on the families of affected children, but also on government programs, such as early intervention and public schooling. Additionally, the neurodevelopmental and disease costs can be devastating for both children and their families. According to the World Health Organization (WHO), more than 30% of the global burden of disease evident in children is due to environmental factors [6]. Lead, mercury, polychlorinated biphenyls (PCBs), flame-retardants, and pesticides have all been shown to result in intellectual deficits in children [4,7]. In fact, Bellinger compared common children's health problems (birth defects, preterm birth, ADHD, Autism, brain injuries) to lead, organophosphate pesticides, and methylmercury [8] and found that the three environmental exposures together would decrease population-wide children's IQ by 40 million points compared to the 34 million IQ points for preterm birth, 17 million IQ points for ADHD, and 7 million IQ points for Autism. This study suggests that parents and health care providers should be aware of common environmental exposures and how to prevent them, especially given their relationship to developmental delays. A review by Dzwilewski and Schantz suggests that these overall reductions in intellectual function likely hinder language development as well; therefore, it is important for clinicians and researchers in communication sciences and disorders to be aware of these findings [9].
While pediatricians and other medical professionals can help educate families on environmental health, early intervention (EI) Speech Language Pathologists (SLPs) can play a major role, as they often spend their sessions in the family's home or childcare facility, where the child could potentially be exposed to dangerous toxicants on a daily basis. Additionally, the environmental toxicants could account for some of the origin of the delays that are evident in the child that they are assessing or treating. Therefore, SLPs are uniquely suited to both educate caregivers on the importance of environmental health and to treat some of the neurodevelopmental outcomes that are the result of exposures to various toxicants; however, it remains unclear if and to what extent SLPs are trained in or knowledgeable about environmental health issues. Therefore, this study had two aims: 1) to determine EI SLPs' training and knowledge on environmental toxicants and their effect on infant and child development (Phase 1); and 2) to examine the effectiveness of a continuing education event designed to enhance the knowledge of SLPs working in EI on environmental toxicants and their effects on infant and child development (Phase 2).
Methods
Methods phase 1: EI SLPs' knowledge of environmental health
A Qualtrics® survey, software version 2016 of Qualtrics (Provo, Utah), tailored to SLPs working in EI was designed to assess overall knowledge of environmental health. There were fourteen questions in our survey. The first three questions asked participants about their SLP job characteristics. Questions 4-13 focused on environmental health training and knowledge. Question 14 asked what specific areas of environmental health issues EI SLPs would like to learn more about. See Additional file 1: Qualtrics Survey Questions for the complete survey. Once the survey page was designed, an initial review of the survey was completed internally in the lab, where the appropriateness of each question was assessed. Next, the link was sent to colleagues to ensure that the survey was functional and fully operational without any system errors and to confirm that the data could be accurately exported. After preliminary testing, the survey link was posted on the American Speech-Language-Hearing Association's (ASHA) Early Intervention Community (2.5 k members) webpage for approximately one month. The survey was designed to be brief (5 min) and targeted to attract the maximum number of respondents. Participants in the online survey were notified that the participation was optional and that the results would be published; therefore, completion of the survey indicated implied consent.
Methods phase 2: SLP continuing education event
Based on the survey results from Phase 1, a continuing education (CE) event in the Northeast of the United States, entitled "An Early Intervention Speech-Language Pathologist CE Event", was created (see Fig. 1). EI SLPs in the greater Boston area were invited to the event and master's students in our Communication Sciences & Disorders program were encouraged to attend. The two-hour event featured the following one-hour presentations: "Environmental Health Exposures: What Early Intervention Speech-Language Pathologists Need to Know" and "Developments in Play for Infants and Toddlers with Delay: Implications for Intervention" - both topics of interest to SLPs working in EI. The learning outcomes of this event specific to the environmental health presentation were as follows: 1. To determine the effect of common environmental exposures on infant and child development. 2. To examine how environmental exposures affect speech and language development. 3. To determine the role of early intervention SLPs in relation to environmental health in homes.
In order to examine the efficacy of the event, an environmental health pre- and post-test was created (see Additional file 2: Environmental Exposure Pre-Test and Additional file 3: Environmental Exposure Post-Test, respectively). Participants in the CE event were told that the participation in the pre- and post-testing was optional and that their results would be published; therefore, completion of these documents implied consent. The pre- and post-test consisted of the same 14 questions designed to ask specific questions about environmental health. The pre-test was taken prior to the beginning of the presentation and was designed to assess the participant's prior environmental health knowledge. The post-test was distributed and taken in the last five minutes of the presentation and was designed to activate retrieval of newly acquired environmental health information and to determine if the learner outcomes were attained. Participants also completed a 7-question program evaluation for each presentation, which was scored using a Likert scale of 1-5 (5 being the highest score). The 26 SLPs who attended the event were all female and on average 36 years old (± 10.83), had 12.20 years (± 10.43) of experience working as an SLP, and 5.12 years (± 6.62) working in EI. The 22 SLP master's students who attended the event were on average 23 years old (± 3.82) and 21 were female.
Results phase 1: EI SLPs' knowledge of environmental health
One hundred and fifty-eight participants completed the survey. A majority (60%; n = 95) reported that they had not received specific training regarding the effect of environmental exposures on child development. Regardless of training, 78% (n = 124) of participants reported that the role of environmental health on child development is very important; however, only 24% (n = 38) reported always considering environmental health factors during EI. The SLPs surveyed reported talking to parents/caregivers about diet/food choices, housing/home environment, school/childcare environment, and drugs. Participants were asked "are there any specific environmental issues (that you are aware of ) that impact the population that you treat? (You may select more than one)" and they responded as follows: "air pollution (including secondhand smoke exposure)" (26%, n = 42), "lead exposure" (24%, n = 38), "pesticide exposure" (8%, n = 13), "drug/alcohol exposure" (4%, n = 7), and "diet" (3%, n = 5). Seventeen percent (n = 27) of participants reported "none" and 35% (n = 56) reported other issues. A majority of participants (61%, n = 97) reported some level of dissatisfaction with their level of training in regards to environmental exposures and their effect on child development. In fact, 56% (n = 89) reported feeling unprepared to be a health advocate about environmental exposure concerns within their community. When asked what specific areas of environmental health they would like to learn more about, participants reported poor nutrition, pesticides, drug use, household cleaners and chemicals, air pollution, and genetically modified organisms (GMOs).
Results phase 2: SLP continuing education event
Twenty-six SLPs and 22 SLP master's students attended the CE event (48 total). Paired t-tests revealed there to be significant improvements from the pre- to the post-test scores for both the SLPs and the master's students (Fig. 2).
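For readers who wish to reproduce this kind of pre/post comparison, a minimal sketch is given below. The scores are hypothetical placeholders (the study's raw test scores are not reproduced here); only the form of the analysis, a paired t-test per group, mirrors what is reported.

```python
# Paired t-test on hypothetical pre/post CE-event scores (illustrative only).
from scipy import stats

pre_scores = [6, 5, 7, 4, 6, 5, 8, 6, 5, 7]          # made-up pre-test scores
post_scores = [10, 9, 11, 8, 12, 9, 13, 11, 9, 12]   # made-up post-test scores

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```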
Discussion
This is the first study to examine the knowledge of EI SLPs on environmental health. The outcomes from Phase 1 of the study were extremely informative in that they showed that SLPs reported having very little education on environmental toxicants and being dissatisfied with the amount of training they have received. This information is disconcerting, especially considering that EI SLPs have the unique opportunity to go into client's homes and interact with their families, giving them the ability to act not only as clinicians, but also as educators. Furthermore, the survey indicated an interest on the part of SLPs to receive more education on this topic, which led to the creation of the environmental health CE event in Phase 2.
The pre- and post-surveys from the CE event revealed that all participants significantly increased their knowledge of environmental health. It is important to note that the pre- and post-tests were designed to be challenging for the participants and looked at very specific details regarding environmental toxicants and their effect on infant and child development. Therefore, the fact that all participants improved on this measure indicates that the event was effective at educating SLPs and students alike on both broad and specific environmental health issues. It is important to note that we only sampled short-term learning outcomes and did not follow our cohort over time. While the short-term gains may fade, we hope that our CE event fostered an interest in areas of environmental health that will spur SLPs and students to further their learning and engagement in this area. Subsequent studies should examine the long-term effects of these types of CE events as well.
Fig. 2 The pre- (dark gray) and post- (light gray) test results from the CE program for SLPs and master's students.
The program evaluation scores of the CE event were high. Participants reported that the case studies and environmental health examples were very helpful components of the presentation. Case study presentations led to a discussion regarding the role of the SLPs in environmental health and leveraged the learning theories of activating response organization as well as emphasizing features for selective perception, which have been shown to improve learning gains [10,11]. Within ASHA's statement on the role of an EI SLP, there are four guiding principles that reflect the current consensus on best practices for providing effective EI services [12]. Specifically, services should be (a) family-centered and culturally responsive; (b) developmentally supportive and promote children's participation in their natural environments; (c) comprehensive, coordinated, and team-based; and (d) based on the highest quality internal and external evidence that is available [12]. Environmental health knowledge clearly fits within all of these guiding principles.
Taken together, the results from Phases 1 and 2 indicated the need for further education of SLPs working in EI, and in other areas of practice, regarding environmental toxicants and their effects. Though we began by surveying EI SLPs, as they are the most likely to provide services in clients' homes, the results from the Phase 1 survey revealed that all SLPs would benefit from more environmental health training. There are several ways by which to disseminate environmental health knowledge to SLP clinicians and students, including: 1) offering a course or lecture series in graduate school, 2) hosting continuing education events, and 3) providing practicing SLPs with resources. We asked 13 SLPs who attended our poster session presenting these data at the 2016 ASHA National Conference about the best way to disseminate this knowledge. Sixty-nine percent of participants reported preferring dissemination through continuing education, 23% preferred teaching this subject in graduate school, and 8% reported that SLPs could seek out this information independently with no formal mechanism necessary.
Guidelines should be developed to specify what knowledge and training SLPs and other medical professionals should have in regards to environmental health. These should include the following topics: common environmental exposures in the home/school/childcare environment, food choices, air pollution, and other chemical exposure mixtures. SLPs should also be prepared to discuss environmental topics in the news. This might include, for example, lead levels in water and/or where families can have their child tested for lead exposures. University education programs and the accrediting agency would need to take responsibility for providing such education, as we would not expect SLPs to seek it individually.
There were several limitations to this study. While the Phase 1 survey was circulated nationally, the 158 respondents represented only 6.30% of the ASHA Early Intervention Community on which the survey was posted. Thus, these results may not generalize to all SLPs and should be interpreted with caution. In addition, 158 is a relatively small sample size, especially when considering that the ASHA community we posted the survey on has 2500 members. Future studies may collect a larger sample size, perhaps including SLPs practicing in other areas of the field (i.e., geriatric home health).
Conclusion
Taken together, results from this study suggest that SLPs who work in EI feel undertrained and unprepared to advocate for environmental health to the families they serve. Our study reveals that CE is one way by which to increase the knowledge base of SLPs on environmental health. The next steps towards educating SLPs on environmental health will be to expand our current CE event to include SLPs working in various settings (e.g., home health, rehab, hospitals) and other medical professionals (physicians, nurses, occupational therapists, physical therapists, etc.) across the country and internationally.
Additional files
Additional file 1: Qualtrics Survey Questions. The fourteen questions in our Qualtrics survey were tailored to speech-language pathologists working in early intervention. The survey assessed overall knowledge of environmental health. Questions ranged from multiple choice to short-answer. (DOCX 19 kb)
Fig. 3 SLPs' evaluation of the CE event on environmental health across seven categories, using a Likert scale rating with five being the highest possible score. | 2018-07-04T02:30:39.923Z | 2018-07-03T00:00:00.000 | {
"year": 2018,
"sha1": "00facb81035e26e3f7cd42f1e92b1d4034cdc24e",
"oa_license": "CCBY",
"oa_url": "https://bmcmededuc.biomedcentral.com/track/pdf/10.1186/s12909-018-1266-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f21e9f1e0aa37da995d220a0d3f45ef42e37552c",
"s2fieldsofstudy": [
"Environmental Science",
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
6850443 | pes2o/s2orc | v3-fos-license | Are "Bondi-Hoyle Wakes" detectable in clusters of galaxies?
In clusters of galaxies, the reaction of the intracluster medium (ICM) to the motion of the co-existing galaxies in the cluster triggers the formation of unique features, which trace their position and motion. Galactic wakes, for example, are an apparent result of the ICM/galaxy interactions, and they constitute an important tool for deciphering the motion of the cluster galaxies. In this paper we investigate whether Bondi-Hoyle accretion can create galactic wakes by focusing the ICM behind moving galaxies. The solution of the equations that describe this physical problem provides us with observable quantities along the wake at any time of its lifetime. We also investigate which environmental conditions are best for the detectability of such structures in the X-ray images of clusters of galaxies. We find that significant Bondi-Hoyle wakes can only be formed in low temperature clusters, and that they are more pronounced behind slow-moving, relatively massive galaxies. The scale length of these elongated structures is not very large: in the most favourable conditions a Bondi-Hoyle wake in a cluster at the redshift of z=0.05 is 12 arcsec long. However, the wake's X-ray emission is noticeably strong: the X-ray flux can reach ~30 times the flux of the surrounding medium. Such features will be easily detectable in Chandra's and XMM-Newton's X-ray images of nearby, relatively poor clusters of galaxies.
INTRODUCTION
In clusters of galaxies, the interactions of the intracluster medium (ICM) with a moving cluster galaxy are expected to modify both the local properties of the surrounding medium and the galaxy itself. One of the manifestations of such interactions is Bondi-Hoyle (B-H) accretion (Bondi & Hoyle 1944). This physical process can be pictured as follows: as the galaxy moves through the cluster, ICM particles are deflected by the galaxy's gravity and concentrate behind it, in the galactic wake.
Intuitively, one might think that B-H accretion creates overdense and cool regions of enhanced x-ray emission behind the galaxies. As a consequence, the hot interstellar media (ISM) of these galaxies look as if they have been disfigured: instead of being azimuthally symmetric they appear elongated, or as if they have a 'plume' of x-ray emission attached to them. Such elongated features have now been identified in the x-ray images of clusters of galaxies (e.g., around NGC 1404 in the Fornax cluster of galaxies; Jones et al. 1997). An up-to-date list of wake candidates can be found in Stevens, Acreman & Ponman (1999).
Bondi-Hoyle accretion is not the only manifestation of the ISM/ICM interactions which creates elongated features behind moving galaxies. Ram pressure stripping can also shape the ISM in such a way that it appears elongated. The difference in the nature of the two elongated structures is apparent: a B-H wake consists of ICM material, while a ram pressure-induced wake contains galactic material (ISM).
Unfortunately, instrumentation prior to Chandra and XMM-Newton has not generally allowed the separation of the pure galactic and the wake components, either spatially or spectroscopically. Only the elliptical galaxy M86 has offered us the opportunity to gain more insight into the nature of its wake (Rangarajan 1995). The observational fact that the metallicity of M86's wake is higher than the metallicity in the surrounding medium has been used to assign it a galactic origin. It seems most probable that in the case of M86, stripping of its ISM is currently in action, and that its wake consists mostly of galactic material. No other well studied example is currently known. The danger arising from the inability to separate the wake's emission from the galaxy itself is that the analysis of the galaxy's X-ray data would lead to false conclusions about the characteristics of its X-ray halo. The dilution of ISM gas by the ICM gas in the B-H wake could, for example, lead to confusing results for the metal abundances in elliptical galaxies.
It should be understood that under certain conditions, B-H accretion and ram pressure stripping may occur simultaneously, creating wakes which contain a mixture of both galactic and intracluster material. Recent numerical simulations by Stevens et al. (1999) have shown that this picture can indeed be correct. However, it is not yet clear what the effects of each separate process are, and how the dominance of one process over the other depends on the environmental parameters. It is expected, though, that dense ICMs and high galactic speeds favour ram pressure stripping. However, the results of B-H accretion cannot be foreseen so easily. If we want to understand the action of B-H accretion and be able to find the environmental dependences, we have to study this process from first principles.
The aim of this paper is to disentangle the two physical processes by studying the action and results of the B-H accretion. We address questions such as: under which conditions B-H accretion occurs in clusters of galaxies; which clusters are the best candidates for detecting B-H wakes; and whether x-ray observations with the Chandra and XMM-Newton observatories can reveal B-H wakes.
The remainder of this paper is organized as follows: in §2 we present the methodology we followed to calculate the properties of B-H wakes; §3 discusses the constraints and input parameters imposed by the problem itself. The results of the simulations are presented in §4. Finally, in §5 our results are compared with available observations and with simulations of similar physical processes.
CREATION AND EVOLUTION OF A B-H WAKE
Consider a test volume behind a moving galaxy. As the galaxy travels in the cluster at the speed v_gal, particles of the ICM are deflected by the galactic gravitational potential and directed into this volume. The ICM particles that are influenced by the galaxy's attraction, and modify their direction of motion, are the ones contained within a cylinder of radius equal to the accretion radius (R_acc):

R_acc = 2 G M_gal / (v_gal^2 + c_s^2)     (1)

(Bondi 1952), where M_gal and c_s are the mass of the galaxy and the local speed of sound respectively. In time dt, the particles that enter the wake at a position x along the accretion axis are the ICM particles which were initially in a shell of width db and length ds = v_gal dt, and had impact parameter b (see Fig. 1); their number, d^2 z_acc(x), is given by eq. (2), where n_ICM is the number density of the surrounding medium. The particles that start with an impact parameter b land in the wake along the accretion axis at a distance x from the galaxy. Assuming that the ballistic approximation is valid (see §3.1.2), the position x is given by eq. (3). The velocity v_in(x) at the position x on the accretion axis is related to the initial velocity of the ICM particles (v_gal) by eq. (4). The effect of these incoming ICM particles is (i) the increase of the internal energy [E_int(x, t)] of the wake, and (ii) its confinement by the pressure [P_acc(x, t)], given by eq. (5), where d^2 z_acc(x) is given by eq. (2) and R_w(x, t) is the radius of the wake at the position x (Fig. 1). In eq. (5), v'_in(x, t) is the velocity of the incoming particles in the wake's frame of reference, given by eq. (6). As the galaxy travels through the ICM, the constant replenishment of the wake with particles and energy causes a continuous change of its properties. The evolution of the wake is governed by two equations: the conservation of energy (eq. 7), and the conservation of momentum, p (eq. 8), where σ is the surface containing the wake.
In the above two equations, P_w(x, t) and P_ICM are the pressures of the wake and the surrounding ICM respectively. In eq. (7), the first term [L_bol(x, t)] represents the energy radiated away via thermal bremsstrahlung, and it is calculated in Appendix A. The second term is the work done by the wake on the surrounding medium. The third term is the energy added to the wake by the accreted particles (eq. 9), where it was assumed that the incoming particles have enough time to thermalize with the particles already in the wake, and reach a Maxwellian distribution at the temperature kT_w(x, t) (see §3).
SIMULATIONS
Equations (7) and (8) were solved numerically to predict the temperature [kT_w(x, t)] and the number density [n_w(x, t)] of the wake at any position x along the accretion axis. The time step of the integration process was constant and chosen so as to allow the gas in the wake to reach a Maxwellian equilibrium. Knowing the temperature and density at any time t and position x, the luminosity [L_bol(x, t)] can be found from equation (A1). The central surface brightness along the accretion axis [Σ_E1-E2(0)] and the wake's surface brightness distribution are calculated using eqs. (A5) and (A4) respectively. The initial conditions for the simulations were chosen to comply with the conditions associated with this problem (Table 1) and are derived in the next sections. The chosen values for the parameters of the ICM and the galaxy velocities represent the conditions found in poor and moderately rich clusters of galaxies, which, as will become apparent in the next sections, are the fertile environments for B-H wake creation.
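As an illustration of this kind of time-stepping (and only as an illustration: the paper's eqs. 7-8 are not reproduced above, and the sketch below ignores the work done on the surrounding medium and the confining pressure terms), one can evolve the thermal energy of a fixed-volume wake segment that is fed by accreted particles and cools by optically thin bremsstrahlung. The mean molecular weight, the accretion flux and the inflow speed used here are assumed, representative values, not the paper's inputs.

```python
import numpy as np

KEV = 1.602e-9    # erg per keV
K_B = 1.381e-16   # erg per K
M_P = 1.673e-24   # g
MU  = 0.6         # assumed mean molecular weight
YR  = 3.156e7     # s

def brems_loss(n, kT_keV):
    """Bremsstrahlung volume loss rate (erg cm^-3 s^-1), using the standard
    approximation ~1.4e-27 sqrt(T[K]) n_e n_i with n_e ~ n_i ~ n."""
    return 1.4e-27 * np.sqrt(kT_keV * KEV / K_B) * n * n

def evolve_segment(n, kT_keV, acc_flux, v_rel, t_end, dt):
    """Forward-Euler evolution of a fixed-volume wake segment.
    acc_flux: accreted particles per cm^3 per s; v_rel: their speed in cm/s."""
    for _ in range(int(t_end / dt)):
        e_th = 1.5 * n * kT_keV * KEV                # thermal energy density
        heat = acc_flux * 0.5 * MU * M_P * v_rel**2  # thermalized kinetic energy
        e_th += (heat - brems_loss(n, kT_keV)) * dt
        n += acc_flux * dt                           # newly accreted particles
        kT_keV = e_th / (1.5 * n * KEV)              # share energy over all particles
    return n, kT_keV

# Segment starting at ICM conditions (n = 1e-3 cm^-3, kT = 1 keV), fed at a rate
# that adds ~1e-3 cm^-3 per 10^7 yr by particles arriving at ~500 km/s.
print(evolve_segment(1e-3, 1.0, 1e-3 / (1e7 * YR), 5.0e7, 1e7 * YR, 1e4 * YR))
```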
The mass of the galaxy
The accretion radius (eq. 1) defines the size of the region around the cluster galaxy in which the galactic gravitational potential is strong enough to change the direction of motion of all the ICM particles which happen to be inside that region, and direct them into the wake. Particles approaching the galaxy with impact parameters less than R_acc are deflected into the wake, while particles with b > R_acc continue their motion past the galaxy unaffected. Therefore, an apparent, and simple, condition for a galaxy to have a B-H wake is that its accretion radius should be larger than its size.
As is obvious from eq. (1), the potential for a wake to form behind a cluster galaxy depends on the velocity and the mass of the galaxy. If we assume an average velocity for the galaxies in a cluster of v_gal = √3 σ, where the velocity dispersion σ is given by the σ-T relation for clusters of galaxies (eq. 10; White et al. 1997; Wu, Fang & Xu 1998), and a sound speed c_s given by eq. (11), we find that the condition R_acc ≥ R_gal gives

M_gal / g ≥ 5.6 × 10^22 (kT_ICM / keV) (R_gal / cm).     (12)

In Fig. 2 we plot the permitted ranges of galactic masses and radii for a range of temperatures of the ICM. Galaxies which lie in the range between the plotted lines and the y-axis will produce large-scale B-H wakes. Figure 2 demonstrates that in high temperature environments, only very compact objects can have galactic wakes produced by B-H accretion. On the other hand, cool environments are more likely to host galactic wakes.
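A rough numerical check of this condition is sketched below. It assumes the Bondi-Hoyle form of eq. (1), R_acc = 2GM_gal/(v_gal^2 + c_s^2), an ICM with γ = 5/3 and μ = 0.6 (values that reproduce the sound speeds of 516, 730 and 894 km/s quoted later for 1, 2 and 3 keV), and a representative σ-T relation σ ≈ 400 (kT/keV)^0.6 km/s; the exact relation adopted in the paper may differ.

```python
import numpy as np

G   = 6.674e-8     # cm^3 g^-1 s^-2
M_P = 1.673e-24    # g
KEV = 1.602e-9     # erg
MU, GAMMA = 0.6, 5.0 / 3.0

def sound_speed(kT_keV):
    """Adiabatic sound speed of the ICM (cm/s)."""
    return np.sqrt(GAMMA * kT_keV * KEV / (MU * M_P))

def galaxy_speed(kT_keV):
    """Average 3D galaxy speed v_gal = sqrt(3) * sigma (cm/s), with an
    assumed sigma-T relation sigma ~ 400 (kT/keV)^0.6 km/s."""
    return np.sqrt(3.0) * 400e5 * kT_keV**0.6

for kT in (1.0, 2.0, 3.0):
    cs, v = sound_speed(kT), galaxy_speed(kT)
    # R_acc >= R_gal  <=>  M_gal >= R_gal * (v^2 + cs^2) / (2 G)
    mass_per_radius = (v**2 + cs**2) / (2.0 * G)     # grams per cm of galaxy radius
    print(f"kT = {kT:.0f} keV: c_s = {cs/1e5:4.0f} km/s, "
          f"v_gal = {v/1e5:4.0f} km/s, M_gal/R_gal >= {mass_per_radius:.1e} g/cm")
```

For a 1 keV cluster this gives M_gal/R_gal of about 5-6 × 10^22 g cm^-1, consistent with eq. (12); with an illustrative galaxy radius of ~10 kpc the threshold mass is of order 10^12 M_sun, comparable to the mass adopted for NGC 1404 in §5.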
The velocity of the galaxy
To derive the condition of eq. (12), an average galaxy velocity was assumed, which corresponds to the velocity dispersion σ. In reality, the galaxy velocities are distributed around this value, so that in a cluster there are galaxies which move at lower and at larger velocities than σ. The accretion radius of fast moving galaxies becomes very small, and they are not expected to show B-H wakes. It is only the slow moving galaxies which can have large-scale wakes. Comparing eqs. (10) and (11), it can be concluded that the subsonic velocity regime provides the most favourable conditions for B-H wakes.
For the calculations of §2 the ballistic approximation was used. The question that arises is whether such an approximation is valid in the clusters considered. The ballistic approximation assumes that there is no interaction between the deflected particles as they stream past the galaxy on their way to the wake. This assumption is valid only if the kinetic energy of the deflected particles at any distance r from the galaxy is larger than their thermal energy; this condition translates to eq. (13). Substituting the distance r in eq. (13) by eq. (1), we find that the ballistic approximation is valid only when the condition of eq. (14) is met. For any galaxy velocity that obeys eq. (14), the ballistic approximation is correct. Clearly, the lowest limit that eq. (14) defines for v_gal is always lower than the average velocity of a cluster galaxy (compare eqs. 14 and 10). From the above discussion it is clear that the special conditions of the problem indicate that only slow moving galaxies have wakes made of ICM material. We therefore have to restrict the simulations to the subsonic regime, which means that virialized cluster galaxies should not show the leading bow-shock which is a characteristic of a B-H wake when the galaxy moves supersonically. We decided to use galaxy velocities equal to the local speed of sound and to 0.632 c_s, as the condition of eq. (14) requires. An additional advantage of the subsonic velocities is that no shock waves are formed, as stated above, which would otherwise require a different treatment from the one presented here.
RESULTS
In the following sections we report on the results of the simulation runs (see Table 1). Figure 3 presents the evolution of the temperature, number density, and central surface brightness of the wake for the simulation No. 1. The results are shown for a distance of 0.5 kpc (solid line) and 15 kpc (dotted line) from the centre of the galaxy along the accretion axis.
The temperature of the wake
As expected, independently of the environment, the trend in temperature is a continuous cooling with time. The change of the temperature of the wake at a distance of 5 kpc from the galaxy in the three different environments studied is shown in fig. 4.
The signature of the environment is in the rate at which the wake is cooling. In lower temperature clusters the wake cools down more rapidly than in hotter environments: in a 3 keV cluster, kT_w reaches 10 per cent of the temperature of the surrounding medium in 3 × 10^7 yr, while a wake in a 1 keV cluster needs approximately half that time to reach the same level. This difference is understood because in low temperature environments the external pressure is not as large as in hotter clusters: the pressure exerted by the accreted particles (eq. 5) and P_ICM are lower in cooler clusters. As a result the overdense wake expands more rapidly, and cools quicker. The temperature of the wake was also found to depend on the velocity of the galaxy. Figure 5 shows that the lower v_gal is, the quicker the wake cools. This finding can again be understood in terms of the external pressure: the lower v_gal is, the lower P_acc is.
Along the accretion axis we find a decrease in kT_w. At any time, and in any environment, the temperature at the extremes of the wake is lower than in regions closer to the galaxy (by 10-30 per cent; see fig. 3). Such temperature variations might be measurable with the new x-ray satellites Chandra and XMM-Newton.
Figure 5. Dependence of the temperature of the wake on the velocity of the galaxy, in a cluster of kT_ICM = 1 keV and n_ICM = 1 × 10^-3 cm^-3. The temperature is measured at a distance of 0.5 kpc from the galactic centre, along the accretion axis, and at a time of 1 × 10^7 yr.
The density of the wake
Not surprisingly, the density of the wake increases constantly: in any environment the increase is approximately one order of magnitude in ~10^7-10^8 yr. As fig. 6 shows, in higher temperature clusters the density of the wake is lower than in cooler ones. Although the number of ICM particles accreted into the wake per unit time (d^2 z_acc) is larger in high kT_ICM environments (because the galaxy velocity is larger, according to eq. 2), the number density of the wake in poorer environments is larger. This unexpected result can be explained because it is the higher flux of particles (d^2 z_acc / dσ) that is responsible for the higher number density in lower temperature clusters.
Although the effect is not dramatic, in any environment the wake's number density has its highest value close to the galaxy, and decreases with distance from the galaxy along the accretion axis. This result can be understood in terms of a stronger gravitational potential closer to the galactic centre. Finally, Figure 7 shows the dependence of the wake's number density on the velocity of the galaxy.
Figure 7. Dependence of the density of the wake on the velocity of the galaxy, in a cluster of kT_ICM = 1 keV and n_ICM = 1 × 10^-3 cm^-3. The number density is measured at a distance of 0.5 kpc from the galactic centre, along the accretion axis, and at a time of 1 × 10^7 yr.
Surface brightness
Figure 8 compares the evolution of the surface brightness Σ_(0.1-10) keV(0) in 1 keV, 2 keV, and 3 keV clusters. The way it was calculated is demonstrated in Appendix A; the conversion to any other energy range can be performed by applying eq. (A3). As fig. 8 shows, the (0.1-10) keV central surface brightness in any environment increases with time until it reaches 10-30 times the brightness of the surrounding medium. Afterwards, the wake's emission starts declining until it is immersed in the background, and the wake becomes undetectable. The time at which wakes 'disappear' from the x-ray images depends on the richness of the cluster, with higher temperature clusters being able to retain the wake signatures for a longer time. Additionally, regions which are further away from the galactic centre fade away quicker than regions closer to the galaxy (fig. 3). The consequence is that a wake becomes shorter with the course of time. As can be inferred from fig. 8, wakes in richer clusters live longer.
Figure 8. In all cases the galaxy is moving at the local speed of sound (516, 730, and 894 km s^-1 respectively), and the profiles presented in this plot correspond to a distance of 5 kpc from the core of the galaxy.
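The 10-30 times enhancement is roughly what a simple bremsstrahlung scaling suggests: the bolometric emissivity goes as n^2 sqrt(T), so a dense, cool wake outshines the ambient ICM per unit volume. The sketch below uses representative wake overdensities and temperature drops taken from the trends described above; they are illustrative values, not the simulated profiles, and the observed surface-brightness contrast will be lower than the emissivity contrast because the line of sight through the wake is much shorter than through the cluster.

```python
import numpy as np

def emissivity_ratio(overdensity, temperature_ratio):
    """Ratio of bolometric bremsstrahlung emissivities (eps ~ n^2 sqrt(T))
    between wake gas and the ambient ICM."""
    return overdensity**2 * np.sqrt(temperature_ratio)

# representative values: density enhanced 5-10x, temperature 10-30% of the ICM value
for delta, tau in [(5, 0.3), (10, 0.3), (10, 0.1)]:
    print(f"n_w/n_ICM = {delta:2d}, T_w/T_ICM = {tau:.1f} -> "
          f"emissivity ratio ~ {emissivity_ratio(delta, tau):.0f}")
```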
The length of the wake
The maximum length that a wake can reach is simply defined by the ballistic theory as the distance from the galaxy (along the accretion axis) which corresponds to an impact parameter equal to the accretion radius R_acc. However, as was shown earlier, the surface brightness of the wake reaches the background level at times that depend on the richness of the environment. Therefore, it is apparent that the length of the wake depends on the time at which it is observed and on the parameters of the environment it is in. Figure 9 shows for how long the maximum length of a wake is measurable before the wake is immersed in the background emission. Although wakes in richer environments are shorter, they are detectable for longer periods of time.
Comparison with observations
It has been demonstrated that Bondi-Hoyle accretion alone can give rise to asymmetries in the x-ray images of normal galaxies in relatively rich clusters of galaxies. However, the scale of these asymmetries is small when compared to the size of the galaxies: in normal conditions the length of a B-H wake cannot exceed ~20 kpc. In a cluster at a redshift of z=0.05 this length corresponds to ~12 arcsec, just above the resolution of the ROSAT HRI detector. This fact explains why wakes have been elusive, and why there are only a few examples reported in the literature. Even with the new x-ray satellites Chandra and XMM-Newton, wakes will be detectable only in nearby low temperature clusters.
The best known candidate for a B-H wake might be NGC 1404 in the Fornax cluster of galaxies. As seen in the ROSAT PSPC image, its wake points away from NGC 1399, which is the central galaxy in the cluster. In the PSPC data its length appears to be ~25 kpc. From Rangarajan et al. (1995) we find that at the distance of NGC 1404 the temperature and number density of the Fornax cluster ICM are kT_ICM ~ 1.1 keV and n_ICM ~ 0.6 × 10^-3 cm^-3. Simulating a galaxy of M_gal = 2.5 × 10^12 M_sun moving in an environment defined by the properties of the ICM around NGC 1404, we find that the length of the wake could be ~17 kpc if the galaxy is moving on the plane of the sky at a speed of 300 km s^-1, and it cannot exceed 20 kpc if the galaxy is moving at 500 km s^-1. A first comparison of the PSPC data with the results of the simulation may suggest that B-H accretion cannot create a wake as long as the one observed. However, this inconsistency may indicate that the galaxy is moving at larger speeds than the ones assumed here, owing to recent infall into the cluster. Such speculations cannot be verified, and stringent conclusions cannot be reached before higher spatial and spectral resolution data of NGC 1404 become available, to accurately constrain the parameters of the wake from the x-ray observations.
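As a side check that NGC 1404 can gravitationally focus the Fornax ICM at all, the accretion radius of eq. (1) can be evaluated for the quoted parameters. The sketch below again assumes the Bondi-Hoyle form of eq. (1) and an ICM with μ = 0.6 and γ = 5/3.

```python
import numpy as np

G, M_SUN, KPC = 6.674e-8, 1.989e33, 3.086e21      # cgs
KEV, M_P = 1.602e-9, 1.673e-24
MU, GAMMA = 0.6, 5.0 / 3.0

M_gal = 2.5e12 * M_SUN                            # mass adopted for NGC 1404
cs = np.sqrt(GAMMA * 1.1 * KEV / (MU * M_P))      # Fornax ICM at kT ~ 1.1 keV

for v_kms in (300.0, 500.0):
    v = v_kms * 1e5
    r_acc = 2.0 * G * M_gal / (v**2 + cs**2)      # eq. (1), Bondi-Hoyle form
    print(f"v_gal = {v_kms:.0f} km/s -> R_acc ~ {r_acc / KPC:.0f} kpc")
```

Both values (a few tens of kpc) comfortably exceed the optical extent of the galaxy, so the wake-formation condition of §3 is satisfied; note that R_acc is not the wake length itself, which also depends on how long each part of the wake stays brighter than the background.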
Comparison with other simulations
There have been several studies of B-H accretion since it was first explored by Bondi & Hoyle (1944) in the context of stellar accretion. Hunt (1971) performed quantitative calculations of subsonic and supersonic accretion flows for the case of a point source moving in an adiabatic gas. Sophisticated 3-dimensional simulations have been performed recently (Ruffert 1996 and references therein). However, most of these simulations were not designed to represent the conditions encountered in clusters of galaxies. They explored either temperature and density regimes which were not representative of clusters of galaxies, or galaxy velocities which were too high. Additionally, the results of these studies cannot be easily converted to observables, making a comparison with x-ray observations not straightforward. However, these simulations provide a first picture of the results presented here. The simulations of a galaxy moving subsonically presented by Ruffert (1994) show that a downstream overdensity is created. The offset from the centre of the accretor is small compared to the supersonic examples. Additionally, as is clear from the results of Ruffert (1994), in the subsonic case there is no bow-shock in front of the galaxy. On the other hand, the bow-shock is the most prominent structure generated by B-H accretion onto a supersonically moving body. As stated by Ruffert (1994), a B-H wake in the subsonic case "is the result of the superposition of the slow radial flow into the accretor and the relative motion between the accretor and the medium".
Recently, more in depth predictions of the ICM/ISM interactions in clusters of galaxies have been presented by Stevens et al. (1999). Their simulations allow both dynamical processes to take place: ICM is deflected and concentrated behind the moving galaxy, and ram pressure strips the galactic ISM. The result is the creation of downstream density enhancements, which consist of both galactic and intracluster material. Unfortunately, this study does not assess the fraction of each component expected in the wake for a variety of environmental conditions. As a result, one cannot confidently estimate whether the wakes we observe consist of galactic or ICM material.
However, these simulations give the first indication that B-H accretion might dominate in low temperature clusters, in agreement with our results, and that ram pressure creates wakes in richer clusters. If one were to compare the results of the present study with the simulations of Stevens et al., the natural choice, for the sake of consistency, would be their simulation No. 1b. Indeed, simulation No. 1b presents the results of a galaxy/ICM interaction in a cluster with temperature kT_ICM = 1 keV and density n_ICM ~ 4 × 10^-4 cm^-3. These parameters of the ICM correspond well to the conditions studied here. Generally, there is a good agreement between both studies: they both predict an increase of approximately two orders of magnitude in n_w, with a simultaneous temperature drop by two orders of magnitude, at 5 kpc along the accretion axis and after 5 × 10^8 yr (see fig. 3 in Stevens et al.). The agreement between the two studies may lead to the false conclusion that both simulations describe the same physical process successfully. However, special care must be taken, because the galaxy velocity in Stevens et al. is supersonic, with v_gal = 960 km s^-1. As was shown in the previous sections, such velocity regimes cannot produce large-scale B-H wakes, because the accretion radius is too small. Nevertheless, the lack of striking differences leads us to conclude that B-H accretion dominates in simulation No. 1b of Stevens et al.
SUMMARY AND CONCLUSIONS
The motion of a body through a gaseous medium results in the creation of an overdense, cool wake: the gravitational attraction of the body deflects the surrounding medium's particles, which end up being concentrated behind it. This process (Bondi-Hoyle accretion) was first explored by Bondi & Hoyle (1944) in the context of stellar accretion. The aim of this paper was to investigate if this process is at work in clusters of galaxies, which clusters are the best hosts of B-H wakes, and if such features can be seen in the x-ray data of clusters of galaxies.
The main results of our calculations can be summarized as follows: • In clusters of galaxies, the ICM can be concentrated behind the subsonically moving galaxies, although the results of B-H accretion are expected to be more dramatic when the galaxies move supersonically.
• Large-scale B-H wakes can be found only in low temperature clusters, behind slow moving, and massive galaxies.
• A stable situation is not reached within reasonable timescales: the wake is created, its temperature decreases and its density increases constantly.
• The lower the temperature of the ICM, the longer the wake is. However, wakes in richer clusters live longer and are brighter. Such features will be easily detectable with the Chandra and XMM-Newton satellites in nearby, poor clusters of galaxies.
• The central, x-ray surface brightness of the wake can reach 10-30 times the brightness of the surrounding medium.
• The properties of a B-H wake depend on the velocity of the galaxy. Fast moving galaxies, for example, have hotter wakes. On the other hand, galaxies with pronounced B-H wakes should not have bow shocks, because their motion is subsonic. | 2014-10-01T00:00:00.000Z | 2000-07-10T00:00:00.000 | {
"year": 2000,
"sha1": "33481c8aa9071bd4faba8b85371c8fccae57d8ed",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/318/4/1164/2823452/318-4-1164.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "33481c8aa9071bd4faba8b85371c8fccae57d8ed",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
145635956 | pes2o/s2orc | v3-fos-license | Scaffolding and interventions between students and teachers in a Learning Design Sequence
The aims of this paper are to develop knowledge about scaffolding when students in Swedish schools use digital educational material, and to investigate what the main focus of teachers' interventions is during a Learning Design Sequence (LDS), based on a socio-cultural perspective. The results indicate that scaffolding was most common in the primary transformation unit and that the most frequent type was procedural scaffolding, although all types of scaffolds (conceptual, metacognitive, procedural, strategic, affective and technical) occurred in all parts of a learning design sequence. In this study most of the teachers and students think that using digital educational material requires more, and other forms of, scaffolding. Concerning teacher interventions, teachers interact both supportively and restrictively in relation to students' learning processes; reasons for this are connected to the content of the intervention and to whether teachers intervene together with the students or not.
Introduction
This paper is written within the research project "Digital Teaching Aids and Learning Design Sequence in Swedish Schools - Users' Perspective". The study is in the research field of ICT and is based at the Stockholm Institute of Education. The project's purpose is to deepen the understanding of how digital media are used as a resource for learning in education. The project runs for three years, from 2004 to 2007, is led by Professor Staffan Selander, and is financed by the Swedish Knowledge Foundation.
Ten schools were selected for their active use of ICT. Students were from 6 to 19 years old and they were observed in different subjects. Different researchers have different research questions concerning, for example, communication, interaction, the digital tool and subject integration with ICT. In this paper we only present two researchers' different questions: on one hand scaffolding and on the other hand teachers' interventions. The two groups of questions were not linked together from the beginning; we just use the same material in one case. Three schools in the suburbs of Stockholm were chosen from the material. The subjects included in the Learning Design Sequence in these three schools were Swedish language, Music, ICT, Home economics and Social science. The students were 8, 13, 14 and 17 years old.
The main questions for scaffolding in this study are:
• What kinds of scaffolding occur when students use digital educational material and where in a learning design sequence do they occur?
• Do students understand the same phenomena of scaffolding as teachers?
• Do students and teachers believe that schoolwork with digital educational material demands more, less or other kinds of scaffolding for learning?
The main questions for teachers' interventions in this study are:
• What is the main focus of the teacher intervention?
• Are teacher interventions supportive or restrictive for students' learning process?
Theoretical framework
The theoretical framework is primarily based on a socio-cultural perspective. Vygotsky's socio-cultural theory of learning points out that human intelligence stems from the culture we are living in. Human cognition occurs in the first place on a social level, in interaction with other human beings, and thereafter inside the individual (Vygotsky, 1978). Learning is a process of engagement and activity together with other people where actions and thinking are situated. The process, form and content are all merged within activities, where communication is important. These activities are interdisciplinary (Lave & Wenger, 2003; Säljö, 2005).
In the socio-cultural perspective the artefact is central. Pupils' thinking is thought to be intimately connected to the artefacts they are using. This is shown in the interaction between pupil and artefact, where the pupil for example often can manage complicated actions without being able to verbalize them, which is a common scenario in the material. Säljö means that it is useless trying to understand what goes on in one pupil's head - instead we try to understand learning in the interaction between pupils, teachers and artefacts (Säljö, 2005).
Scaffolding
When learning is shaped by the social environment, every person has a larger potential for learning than the definite capacity of the individual, when learning is facilitated by someone with greater knowledge (Wertsch, 1991). This range of a person's potential is called the zone of proximal development and is essential according to Vygotsky's ideas. Learning in the zone of proximal development is a combined activity in which the teacher simultaneously keeps an eye on the goals of the Learning Design Sequence and on what the student is capable of doing with assistance.
Scaffolding is a strategy that teachers use to move learning forward in the zone of proximal development. It is a collaborative process. It involves negotiation of meaning between the teacher and the student about expectations and how to improve the learning process in the best way (Shepard, 2005). Examples of scaffolding could be when the teacher provides the student with different kinds of support, e.g. hints, encouragement, cognitive structures and reminders during the learning process through an LDS (Wood, Bruner & Ross, 1976). One way of categorizing scaffolding is Hill and Hannafin's (2001) four types of scaffolds: conceptual, metacognitive, procedural and strategic scaffolds. Conceptual scaffolds could be maps, outlines and clarifying examples which support the student in making choices about the selection or in prioritizing what is important information. Metacognitive scaffolds may include reminders to reflect on the goal or a cognitive model, which helps the student to focus on the target or to estimate what he/she knows and what to do next in the learning process. Procedural scaffolds could be textual charts, graphic representations, site navigation maps or instructions about the working procedure which help the student to value resources and at the same time reduce the cognitive load in the procedure of navigation. Strategic scaffolds may include suggestions for alternative approaches to tackle a task, which help the student to develop an alternative perspective on an issue, for example. Two categories which Hill and Hannafin don't include are the affective and the technical scaffolds, which are found in Masters' and Yelland's research (2002). The affective scaffolding can consist of encouragement and praise. The technical scaffolding includes technical instruction and technical recovery in the form of prompts or guiding questions to recover from a technical mistake.
Learning theories and factors that influence teacher interventions
How do teachers intervene when pupils in Swedish schools use digital educational material? To answer this, we need a background to how we think learning occurs. According to Säljö (2005), there are three main learning traditions: the cognitive tradition, within which processes are more central than content, the subject didactic tradition, where content is of central importance, and the socio-cultural tradition, where people appropriate social experiences from situated activities. In this paper, we focus on the socio-cultural description of learning.
In the Swedish curricula of 1994 (Skolverket, 1998) working methods must be democratic, with the students participating in the planning of activities and the teacher having a personal responsibility for the students' learning. The teacher should encourage independent as well as social work, including active learning, problem solving, communication and discussion. In Lpo 1994 (Skolverket, 1998) student learning is at the centre of school activities, and this in turn impacts classroom strategies. Teaching strategies and daily classroom life are the most important points for student learning (Jedeskog, 2005). Interactive teaching is one way of teaching today. It includes active learning such as collaboration, communication and creation of meaning and understanding. Traditionally, in non-interactive teaching, the teacher is talking most of the time in the classroom. Interactive teaching, on the other hand, often supports social constructivist or socio-cultural learning. This includes being open to creative thinking, using challenging and open-ended questions, encouraging group discussions and initiating more interaction, both between students and between teachers and students (English, Hargreaves and Hislam, 2002).
Teacher interventions are situated in a social context with deep historical and cultural traditions. Säljö (2005) means that institutions are units of social practice, having their own cultural tools. This includes communicative patterns and activities within special institutional frameworks. Different assumptions about the nature of learning are found within these institutions. We can talk about different physical, cognitive, communicative and historical contexts. Social structures have an impact on individual actions and vice versa. People in the same social setting use common artefacts and common ways of communicating and acting.
We are also involved in a number of communities of practice. A community is defined by three dimensions: what it is about, how it functions and what capability it has produced. The members share some common resources (such as routines, vocabulary and artefacts) which accumulate knowledge within the community. All members are involved in relationships which are important for learning (Lave and Wenger, 2003). Depending on the academic subject, there are differences in how teachers plan and teach courses and approach ICT. According to Waggoner (1994), each subject has its own structure with given ontological and epistemological assumptions. We tend to teach as we are taught, and, since our teacher training varies between subjects, so does our teaching style. Santee and Siskin (1994) find that teachers have a deep identification with their subject. The teachers' perspectives on how they teach differ, and values are sometimes in conflict between subjects. In general, teachers within a subject share the same faith in who they are, what they are doing and how they would like to do it.
All of this (communities of practice, institutions, cultural tools and traditions, differences between teachers' subjects and their identification with their subject) has an impact on how teachers intervene in the classroom.
Method
The method and analysis we have employed are qualitative. In line with our aim and research questions, different methods of data collection are used, such as descriptive field observation, video recordings and interviews (formal and informal) with students as well as teachers. The material is gathered in classrooms when students use digital educational material in their daily work. It can for example consist of the Internet, different software such as Word, PowerPoint and Illustrator, Learning Management Systems, digital cameras or scanners. The main part of the material consists of videotaped film. Not all material is transcribed; instead we transcribe critical incidents (Tripp, 1993) which we choose according to the questions at issue. We have transcribed and analyzed different sequences according to our research questions. In accordance with scientific practice we take responsibility for the research process being governed by ethical views, carried out in accordance with the ethical rules of the Swedish Research Council (http://www.codex.vr.se/codex_eng/codex/index.html) for research in social science, including the information requirement, approval requirement, confidentiality requirement and usage requirement.
Scaffolding
Concerning scaffolding, the material is collected from three different schools and the students are 8, 14 and 17 years old. We have analyzed the video material according to the LDS model (see below) and Hill and Hannafin's four types of scaffolds (conceptual, metacognitive, procedural and strategic) as well as Masters' and Yelland's (2002) two scaffolds: the affective and the technical. An example of a conceptual scaffold was a clarifying example from the teacher, a metacognitive scaffold could be a reminder from the teacher to reflect on the goal, graphic representations were examples of procedural scaffolds, a strategic scaffold could consist of a question from the teacher which gave the student a different perspective on an issue, the affective scaffolding consisted mostly of encouragement and the technical scaffolding included technical instructions. The theme and subject content of the three LDSs were, for the 8-year-olds, "The history of my life" in Swedish language, for the 14-year-olds, "The travel through Europe" in Home economics, Social science, Music and ICT, and for the upper secondary class, "History of literature" in Swedish language. Regarding scaffolding, we also took our point of departure from the interviews.
Teachers' interventions
When it comes to teacher interventions the material is collected from two schools, in two different classes with 13- and 14-year-old students. The theme and subject content of the LDS for the 13-year-olds was "Human existence, fears and children's rights", comprising Social science, Music and ICT. For the 14-year-old students it is the same material as for scaffolding above, with the theme "The travel to Europe" in Home economics, Social science, Music and ICT. The selection of sequences was delimited by the teacher coming to a group of students and intervening with them, and it lasts until the teacher has departed. The material only allows us to analyze these sequences, not what has happened after that. The units of analysis were speech, activity, display, digital learning resource, the way the teacher comes to the group and the content of the intervention. We would like to point out that the analyzed sequences very rarely coincide with the sequences analyzed for scaffolding.
The LDS model
The analysis is also made according to the LDS model below. Within our research group we have constructed a model which we refer to as an LDS - Learning Design Sequence. An LDS contains everything from the start of a learning sequence until the end, when students' work is finished, including presentations and assessment. It can be a two-hour lesson in physics but also a long sequence in social studies which reaches over a whole semester. An LDS is framed by the institutional norms among the teachers and students, the intentions in the curriculum and the resources for learning which are available (left). It is also framed by the interest, group climate and patterns of social interactions (above). The focus in the model is how knowledge is created and reshaped in the process of learning. In the "Primary information unit", the focus is on how students search for and process information, how they collaborate and interact with each other, with the teacher and with resources for learning. In this cycle of information the students transform and form their knowledge of this information. What kind of media is used and which kind of information is shown in different modes. It is also interesting how and in what way the group is a resource for learning. During this phase the teacher intervenes in many ways. Then we have the "Second information unit", with its focus on the students' representations and presenting. The students' discussion and reflection over their learning process and results are important in this phase. As can be seen in the figure (below), teachers intervene during this unit too. Signs of learning are traced in the students' representations and in their process with new explanations and skills. The arrows inside the circle of the primary and secondary transformation unit symbolize a back-and-forth movement between transforming and forming knowledge in the primary unit, as well as representing and presenting knowledge in the secondary unit. During the primary and secondary transformation units teachers intervene in many ways. One kind of intervention is assessment (below). In the primary transformation unit there is a focus on formative assessment and in the secondary a summative assessment (Selander, Engström, & Åkerfeldt, 2007).
A teacher intervention can be observed before and during the primary transformation unit and through the secondary transformation unit of a Learning Design Sequence. The teacher decides the content of an LDS, sometimes in collaboration with the students. There are many underlying processes which have an impact on learning in an LDS, such as intentions, curriculum, traditions, group climate and students' opportunities to reflect on and represent the LDS. Scaffolding and teacher interventions occur in different ways along the whole LDS, from the start to the end. We have documented these sequences with a digital video camera and field observations. Interviews with both teachers and students take place after the Learning Design Sequence. Students' representations are gathered, sometimes as a CD, caught with our video cameras or in printed form.
In the research about scaffolding we videotaped one LDS each from three different schools in the suburbs of Stockholm. The students were 8, 14 and 17 years old. All the classes worked for about two months with their LDS. The video documentation consists of about 10 hours of video recordings from each school. The LDS for the 8-year-old students was called "History of my life" and included the school subject Swedish language. The students produced a PowerPoint slideshow about their life including text, pictures, photos and sound. One class, the 14-year-olds, worked with the same LDS as described below as "The travel to Europe". The content of the third class' LDS was history of literature in an upper secondary school. The students and teachers used the Internet and an LMS system, and software like PowerPoint, Illustrator and InDesign. According to the LDS model the units of analysis were: setting, primary transformation unit (transforming information into forming learning and knowledge) and secondary transformation unit (presenting and representing knowledge).
In the research about intervention we followed one LDS from each of two different schools. The students were 13 and 14 years old. Both of the LDSs which we followed involved thematic work with teachers from different subjects, lasting for almost a whole semester. One class had teachers in social science, music, ICT and home economics. The Learning Design Sequence was called "The travel to Europe". It was a storyline theme where the students learned about geography (climate, populations, industries, economy), cooking (inspired by a country in Europe), music (European composers) and science (European scientists), etc. The students pretended to be employed, with the goal of registering differences and similarities between European countries. They worked with computers (laptops and PCs) with various software (PowerPoint, Word, Internet) and digital cameras. A PowerPoint production was presented as a result of the thematic work. The other class involved teachers in social science, ICT and music. Students worked with the Learning Design Sequence "Human existence, fears and children's rights", using video cameras, digital photos and Apple computers with iMovie and GarageBand. A movie about bullying and related topics was produced by the students, based on interviews, filmed episodes and facts found on the Internet. As we can see in the LDS model, teachers' interventions occur throughout the whole LDS, even before the primary transformation unit and secondary transformation unit. All the units in the LDS affect the teacher intervention, and this can be seen in the analysis of the results.
Results and Discussion
Scaffolding
Regarding all research questions about scaffolding the material is chosen from three different schools and three LDSs. The students are 8, 14 and 17 years old. In all three LDSs the teachers had organized the students' learning process collaboratively in the social environment (Wertsch, 1991). The content and forms of learning were merged within collaborative thematic work where communication was important in the interdisciplinary activities (Lave & Wenger, 2003; Säljö 2005). In all the three LDSs we found the learning process organized first on a social level (Vygotsky, 1978) and then on a more individual level, continuing to finish on a social level again. The results indicate that scaffolding was a strategy used by all the teachers to move learning forward in the zone of proximal development. We found forms such as hints, encouragement and cognitive reminders in all three classrooms, both on the social and the individual level (Wood, Bruner & Ross, 1976). All types of scaffolds (Hill & Hannafin, 2001; Masters & Yelland, 2002) occurred in every part (setting, primary and secondary transformation unit) of an LDS. The most common type of scaffold was the procedural type (Hill & Hannafin, 2001) in all the three LDSs. An example of that is when the teacher had made an instruction sheet in advance to help some of the students to value different resources on the Internet (below). She didn't use it for all students, some didn't need it, and this was one example for us to know that she was aware of the zone of proximal development of her students.
The sheet consisted of the text below:
"There is a lot of information to get on the Internet, some is very good but some websites are not.Don't trust everything you read, see and listen to on the Internet!Be critical!Choose your material from the Internet by using these questions below: Who has made the website?Is it an authority, organization, a company or a private person?Do they know a lot about the subject at issue?What purpose has the website?
To inform, to present facts, to advocate for something, to sell something or to entertain?
What do the website look like?Is there any contact information?
Does the text look serious to you? Do the hyperlinks work?Is there a date on the site?Do they refer to other resources?Can you get information from other resources?
The library or trusted school sites?Have you compared this site with other websites?" The most common phase where the scaffolds occurred was in the primary transformation unit when the students worked on their own or in group and the teacher's role was more of a guide than an instructor, an example of a collaborative process between the teacher and students where negotiation of meaning took place (Shepard, 2005).The technical scaffolds (Master & Yelland, 2002) were more common in the setting and in the primary transformation unit than in the secondary in all three LDS's and they were more frequent among the younger students.In the upper secondary school different types of structures, such as how a story is structured or how to analyze a movie, were a common form of scaffolding.In this class we recognized that the students created their own structures in the working process and we interpret that scaffolding worked as a mediating resource for learning.
Concerning the second research question, "Do students understand the same phenomena of scaffolding as teachers?", the results differ a lot between the schools. In one of them the teacher's view of scaffolding is quite similar to the students'. In another school the answers differ totally between the teacher and the students. For example, in one case the teacher shows the learning goals to some pupils in order to provide a metacognitive scaffold, a reminder to reflect on the goal and not to continue on a sidetrack (Hill & Hannafin, 2001), but it is not at all comprehended as scaffolding by the pupils, as it was meant to be. The pupils we interviewed in this class considered scaffolding to be instructions on how to do things or direct answers from the teacher, answers which solved the problem the pupils themselves were set to solve. They didn't have the same view of how to learn as their teacher. In the third school the answers and the descriptions of scaffolding are very similar between the teacher and the students.
Regarding the third research question, "Do students and teachers believe that schoolwork with digital educational material demands more, less or other kinds of scaffolding for learning?", we have used the interview material for the analysis as well. Among the younger students the teacher and the students agreed that schoolwork with digital educational material demands more technical scaffolding (Masters & Yelland, 2002) to handle the digital artefact. The teacher also pointed out in the interviews that the pupils' thinking about the content is very much connected to the digital artefact (Säljö, 2005) in the interaction with it, and the technical scaffold is therefore needed not only for technical matters. In another school the students express their increased need of conceptual scaffolding (Hill & Hannafin, 2001) in the searching phase of the LDS. They express that it is more difficult to understand the information they get on the Internet compared with traditional educational material. The teacher in this school also thinks that more scaffolding is necessary to deepen the understanding for the students, because it takes a lot of time to handle the artefact and get the technique working smoothly. The teacher in the third school thought more scaffolding is needed in the form of metacognitive as well as procedural scaffolding (Hill & Hannafin, 2001), since the information on the Internet is far more comprehensive than traditional educational material, which requires more preparation from the teacher. The students were also of the opinion that working with the Internet as a resource requires more cognitive scaffolds to help them to focus on the target.
To sum up, most of the teachers and students in this study think that using digital educational material requires both more and other forms of scaffolding compared to other forms of educational material.
Teachers' interventions
The main questions for teachers' interventions in this study dealt with the focus and the content of the intervention, and whether it supports or restricts the students' learning process. The analysis of the intervention was limited to the teacher coming to the students and lasted until the departure of the teacher. The results are strictly limited to these sequences. The results indicated that the focus of the teachers' interventions was very important during both the "Primary information unit" and the "Second information unit" (Selander, Engström & Åkerfeldt, 2007). The intention with the examples is to show how the focus of the intervention affects the learning process and to make the intervention transparent. Every LDS is framed by the institutional norms and patterns of social interactions (Selander, Engström & Åkerfeldt, 2007), which became obvious in our observations. When it comes to teachers' interventions, some of them occurred focusing directly towards the digital resources.
We are inside an eighth-grade classroom. The mathematics and ICT teacher Andrew walks around in the classroom, carrying a laptop. One of the boys, Daniel, calls for his attention and he steps forward to the boy's group that works around a table. They work with three laptops. Daniel: "Andrew, come now Andrew … I get crazy on this computer." … "this strip, why is there empty space created when I try to paste copied material" … Then Andrew puts down his computer, leans forward, takes over the mouse from Daniel and looks into the screen. He starts working with Daniel's computer, and continues the work for a long time without involving Daniel at all. Daniel walks away, out in the classroom, and starts talking with other students… At last Daniel comes back and sits down, but he doesn't participate in Andrew's work with Daniel's computer… Andrew finishes, saying "Well, now it works. That's it! Don't ask me what I did but now it works." Then the teacher grabs his own portable computer and walks away. This example shows a student calling for help and a male teacher coming. The digital learning resource is in focus and the teacher takes over. As a result, the student doesn't participate in problem solving and doesn't know what has happened, neither on his screen nor with his work. He is not told how to solve similar problems in the future. On the positive side, the student can resume his work after the teacher's support. This teacher intervenes as a technician, having a user support role.
When teachers teach ICT they often guide and instruct the students. The teacher intervention is focused on the ICT and a "how-to-do-it" perspective. The student is watching, being told what to do, and then follows the instructions. The teacher is the guide that leads the student forward, which can be helpful in a learning situation. Two girls, Eve and Phoebe, work with one Macintosh computer each. An information and communications technology teacher gets to them and grabs the mouse. She demonstrates how to proceed and says: "Do like this … iMovie works as follows, when you start a … the movie appears … that you need to put on your desktop". The student Eve says: "should we start then". The teacher is standing behind the two girls, helping one of them. When they start working she gives instructions by pointing at the screen. In this example the teacher is the guide who leads the students forward. She is supporting them to continue working. The teacher is both a technician and an educator in this intervention. In the socio-cultural perspective the artefact is central, and the students' thinking is thought to be intimately connected to the artefacts they are using. The interaction between teachers, students and artefacts is closely connected to learning (Säljö, 2005). How teachers teach and interact with digital resources and students will have an impact on the learning process.
There are some examples of restricting the learning process when the teacher interrupts an ongoing activity. An example is a teacher distracting students when the student group is deeply involved in writing a PowerPoint presentation. Their task is inspired by storyline and their work proceeds well, when a teacher comes and gives them test results from a test they did a couple of weeks earlier. Pat, social science teacher of an eighth-grade class, steps forward to Adam and says: "Adam, I have corrected your essay about XXX here". She opens the essay, gives it to Adam and reads her judgment. "Now I have corrected it and I actually liked it very well… you describe how he experiences all these events but … it would have required more analysis of causes and consequences …" She puts down the essay in front of Adam and walks away. The student group leaves their writing and starts to talk about the test, shouting and asking other students in the classroom. This teacher action is obstructive since the students' work is interrupted. The result is that students lose interest in their work, discuss other things and become distracted. After this, the students can't find their way back to the working flow again, not even when the teacher has left.
We have examples where the contents of the work and the interaction with the students are in focus, while the artefact is of secondary importance. In this example the teacher looks at the students more than at the computer screen. Daniel calls for Susan. "Susan! Can you read this?" The social science teacher gets around the table to Daniel, leans down and watches the screen… She reads through the contents. At the same time another student, Brad, finds something on the web. Susan to Daniel: "…what does lost searching mean?" Daniel: "It means that one walks around, out of the city..." Brad: "Ahh, check here… watch, the number of inhabitants in cities is 62%." Susan to Brad: "Well, fine, then you got an answer to that question, that site is very good." She looks into the students' eyes while communicating with them. Susan points at Daniel's screen. Susan: "Have you found this at the same site too …" Susan talks with Daniel and points at Brad. Brad takes part in the conversation and Susan laughs. She turns to everyone in the group. Susan: "You have chosen the black and red. Is that for any particular reason or?" Daniel: "No, it's just…" Susan talks and watches both Brad and Daniel. In this example, the teacher communicates with the whole group of students, asking questions and giving advice. The teacher is working as a facilitator engaging active learners. The teacher is encouraging group discussion and initiates more interaction to support learning. As a result, the students think together and collaborate, their views are widened and they are given different perspectives, everything situated. Within the activities they communicate, which emphasizes learning (Lave & Wenger, 2003).
We have discovered a lot of interventions that focus on subject contents. When teachers intervene with a group, it is their subject skills that are in focus. The students will ask the social science teacher questions about social science, the ICT teacher ICT-related questions and so on. An example of an intervention where the teacher has a subject-oriented perspective is when a social science teacher comes to a student and discusses what GNP is, from her point of view the content of the work. Peter: "Kathy, hear me". Peter calls for Kathy, the social science teacher, who comes forward to him. She bends down, points at the screen and reads loudly: "… this number exceeds 100 per cent… since more children than expected started school…" Peter wonders: "Should I just write 100 per cent?" Kathy: "Yes, you can do that… but have you copied this, will you have some sort of reasoning about it?" … Peter: "What does GNP mean?" Kathy: "It means Gross National Product … then you will get a definition, the sum of all products and services…" During the whole conversation Peter goes back and forth between a PowerPoint presentation and the web. In this example, when the teachers reach a group, it is the teachers' subject skills that are in focus. Even though the activity is interdisciplinary, the interventions are not. The students ask the social science teacher questions about social science, the ICT teacher ICT-related questions and so on. The ICT teacher never comments on thematic contents, nor does the math and science teacher comment on social science work. Each teacher uses his or her particular subject skills. With this approach, each activity is defined within a certain subject or discipline. This teacher is a tutor who solves problems related to her subject matter. She is collaborating with the students and discussing problems. Challenging and open-ended questions are used and the teacher is primarily an educator and facilitator for learning, using her subject authority to scaffold the student. The results indicate that the subject on one hand and the identification of the teacher's role with the subject on the other hand impact the behavior in the classroom. Teachers are formed and socialized into a community of practice with relationships and different teaching styles. Their identification with their subjects is strong (Lave & Wenger, 2003; Säljö, 2005; Waggoner, 1994; Santee & Siskin, 1994). This was obvious in our video recordings, where we saw how the teachers intervened with the students in a subject-oriented way.
ICT can also be taught from a subject-oriented perspective. Lynn calls for the ICT teacher, called Anne, and says: "Anne, we have a small problem… we don't want to play the slide show so fast…" The speed of the slide show is considered as a problem. The teacher sits down by the computer, pointing at the screen. Changes of slides are discussed and the teacher informs all the time about what is happening on the screen. Anne teaches about the changeover: "… when a change of slides has been completed, the film cut is locked and you can't work with it…" Anne points at the screen and gesticulates, and then she looks at all three, one by one. She continues: "… is here at the rear side, for you to pull out… you may also double-click on this paper …" Lynn answers with an "Aha!" Anne says "…I think you may elaborate a bit…" Anne points and instructs how to proceed. Lynn has the mouse and clicks where the teacher points at the screen. The teacher is interacting with the students and discussing the ICT content that is in focus in the discussion. The teacher is the link between the students and the computer and is both a technician and an educator in this intervention, trying to solve a technological educational issue. In these interventions we can see that teachers concentrate on ICT. For interventions concerning ICT there is a lot of guidance, with instructions leading the students forward.
Conclusions
The use of the LDS model has helped us in the research group to see an LDS from various views that deepen and give a wider perspective on a learning process. In this paper there have been two different but related views that complement each other, and the results can be fruitful for the understanding of teachers' work when digital resources are used in a learning environment.
Our conclusions about scaffolding are that it is used as a strategy by the teacher to move learning forward during all parts of an LDS when using digital educational material in school, but that it is needed even more (Shepard, 2005), especially technical scaffolding (Masters & Yelland, 2002) among the early users. In the beginning there is a huge need of scaffolding to handle and learn by using the artefact (Säljö, 2000), so that the students can be more independent and focus on the content and the task instead of technical matters. There is also a need of extended conceptual scaffolds (Hill & Hannafin, 2001) among older pupils using content from the Internet. Maybe the students use insufficient strategies to interpret and process information on the Internet? It is then even more important to provide more and different types of scaffolding in all phases of the learning process when they are working with the Internet and digital educational material. We also found that the comprehension of scaffolding depends on the view of learning and knowledge. An example of that is when the teacher had prepared different types of scaffolding but the students didn't comprehend them at all as scaffolds, because they considered learning as instructions or knowledge you get from the teacher. When the teacher didn't tell them exactly what to do, or didn't give them a direct answer to a question, they didn't consider it as scaffolding.
We have given some examples of teacher interventions during a Learning Design Sequence, where the teacher is either restrictive or supportive for the students' learning. In what directions is the teacher intervening? Is the focus on the students, the artefact or the subject? Sometimes ICT becomes more of an object than a tool for teachers when software or hardware problems occur. Then they have to focus on ICT rather than encouraging more student thinking related to their subject (Lim & Hang, 2003). Teachers' consciousness of how an intervention influences learning is important, as is planning interdisciplinary work according to the results regarding the strong didactical influences when teachers intervene. We want to continue the discussion of how teachers intervene with digital artefacts and with the students.
Our conclusions lead us to further questions. The teachers' role and how they intervene is very important for learning, but could the digital educational material be designed in another way, including more scaffolds to complement the teacher's scaffolds? We could see that the teachers intervened in a way strongly influenced by the didactical tradition of teacher education for secondary teachers. Is that the reason why procedural scaffolds were the most common type of scaffold in all the LDSs? Do teachers need to teach more adequate literacy strategies for learning when working with digital educational material? Do teachers need more competence development in the area of digital literacy? | 2018-12-03T16:34:04.472Z | 2007-12-01T00:00:00.000 | {
"year": 2007,
"sha1": "b59594251d705567519fc307b794f60e94a19101",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/pee/a/sFjfvJjWKbdV8TBFMdNQT5m/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "b59594251d705567519fc307b794f60e94a19101",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
117018796 | pes2o/s2orc | v3-fos-license | Coupled-channel analysis of the possible $D^{(*)}D^{(*)}$, $\bar{B}^{(*)}\bar{B}^{(*)}$ and $D^{(*)}\bar{B}^{(*)}$ molecular states
We perform a coupled-channel study of the possible deuteron-like molecules with two heavy flavor quarks, including the systems of $D^{(*)}D^{(*)}$ with double charm, $\bar{B}^{(*)}\bar{B}^{(*)}$ with double bottom and $D^{(*)}\bar{B}^{(*)}$ with both charm and bottom, within the one-boson-exchange model. In our study, we take into account the S-D mixing which plays an important role in the formation of the loosely bound deuteron, and particularly, the coupled-channel effect in the flavor space. According to our calculation, the states $D^{(*)}D^{(*)}[I(J^P)=0(1^+)]$ and $(D^{(*)}D^{(*)})_s[J^P=1^+]$ with double charm, the states $\bar{B}^{(*)}\bar{B}^{(*)}[I(J^P)=0(1^+),0(2^+),1(0^+),1(1^+),1(2^+)]$, $(\bar{B}^{(*)}\bar{B}^{(*)})_s[J^P=0^+,1^+,2^+]$ and $(\bar{B}^{(*)}\bar{B}^{(*)})_{ss}[J^P=0^+,1^+,2^+]$ with double bottom, and the states $D^{(*)}\bar{B}^{(*)}[I(J^P)=0(0^+),0(1^+)]$ and $(D^{(*)}\bar{B}^{(*)})_s[J^P=0^+,1^+]$ with both charm and bottom are good molecule candidates. However, the existence of the states $D^{(*)}D^{(*)}[I(J^P)=0(2^+)]$ with double charm and $D^{(*)}\bar{B}^{(*)}[I(J^P)=1(1^+)]$ with both charm and bottom is ruled out.
In the past decade there has been abundant literature on the study of heavy flavor molecular states.
The concept of a molecular state with hidden charm was first proposed by Voloshin and Okun thirty years ago, when they studied the interaction between charmed and anti-charmed mesons [24]. Later, De Rujula, Georgi and Glashow suggested that the observed ψ(4040) is a $D^*\bar{D}^*$ molecule [25]. Using the quark-pion interaction model, Törnqvist investigated the possible deuteron-like two-meson bound states with $B\bar{B}^*$ or $B^*\bar{B}^*$ components [26,27]. At present, carrying out the phenomenological study of heavy flavor molecular states is still a hot research topic of hadron physics. Here q and Q denote the light (u, d, s) and heavy (c, b) quarks, respectively. Among the conventional baryon states, the baryons with double charm or double bottom are of the QQq configuration. The SELEX Collaboration reported the first observation of a doubly charmed baryon $\Xi_{cc}^{+}$ in its charged decay mode $\Xi_{cc}^{+}\to\Lambda_c^+ K^-\pi^+$ [28] and confirmed it in the decay mode $\Xi_{cc}^{+}\to p D^+ K^-$ [29]. However, later the BABAR Collaboration searched for $\Xi_{cc}^{+}$ in the final states $\Lambda_c^+ K^-\pi^+$ and $\Xi_c^0\pi^+$, and for $\Xi_{cc}^{++}$ in the final states $\Lambda_c^+ K^-\pi^+\pi^-$ and $\Xi_c^0\pi^+\pi^+$, and found no evidence for the production of the doubly charmed baryons [30]. The Belle Collaboration reported no evidence for the doubly charmed baryons in the final state $\Lambda_c^+ K^-\pi^+$ either [31]. Although these doubly charmed baryons were not confirmed by BABAR and Belle, it is still an interesting research topic to search for such doubly charmed baryons experimentally.
Besides the doubly heavy flavor baryons, it is also very interesting to study other systems with two heavy flavor quarks. A heavy flavor molecular state with two charm quarks provides another approach to investigate the hadron states with double charm. For this kind of hadron, its typical configuration is $[c\bar{q}][c\bar{q}]$. To answer whether such heavy flavor molecular states with double charm exist or not, in this paper we apply the one-boson-exchange (OBE) model to perform a dynamical calculation of their mass spectroscopy. This study is not only a natural extension of the previous work on heavy flavor molecular states with hidden charm, but also provides new insight into exploring the hadron states with double charm. Besides the hadron states with double charm, we also investigate the hadron states with double bottom and the hadron states with both charm and bottom. This paper is organized as follows. After the introduction, we present the derivation of the effective potentials in Section II. We summarize our numerical results and perform some analysis in Section III and draw some conclusions in Section IV. We also give some useful formulas in the Appendix.
In the above, $f_\pi$ = 132 MeV is the pion decay constant. The coupling constant g was studied by many theoretical approaches, such as the quark model [33] and QCD sum rules [36,37]. In our study, we take the experimental result of the CLEO Collaboration, g = 0.59 ± 0.07 ± 0.01, which was extracted from the full width of the $D^{*+}$ [38]. For the coupling constants related to the vector meson exchange, we adopt the values $g_v$ = 5.8 and β = 0.9, which were determined by the vector meson dominance mechanism, and λ = 0.56 GeV$^{-1}$, which was obtained by matching the form factor predicted by the effective theory approach with that obtained by the light cone sum rule and the lattice QCD simulation [39,40]. The coupling constant for the scalar meson exchange is $g_s = g_\pi/(2\sqrt{6})$ [10] with $g_\pi$ = 3.73. We take the masses of the heavy mesons and the exchanged light mesons from the PDG [41] and summarize them in Table III.
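For reference, the numerical value of the scalar coupling implied by this relation (a simple evaluation of the quoted formula, not an additional input of the paper) is

g_s = \frac{g_\pi}{2\sqrt{6}} = \frac{3.73}{2\times 2.449} \approx 0.76 .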
B. The derivation of the effective potentials
Using the Lagrangians given in Eqs. (8)-(14), one can easily deduce the effective potentials in momentum space. Taking into account the structure effect of the heavy mesons, we introduce a monopole form factor at each vertex. Here, Λ is the cutoff parameter and $m_{ex}$ is the mass of the exchanged meson.
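The monopole form commonly adopted in one-boson-exchange models, consistent with the variables just named, reads (this is the standard convention, quoted as an assumption rather than taken from the equations of this paper):

\mathcal{F}(q^2) = \frac{\Lambda^2 - m_{ex}^2}{\Lambda^2 - q^2} .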
We need to emphasize that there is an alternative approach to deduce the effective potential, shown in Refs. [42,43], where the authors refrain from introducing the form factor due to the lack of knowledge of form factors. In that approach, one can also regularize the divergence of the potential at short distance; for detailed information on this renormalization approach, the interested reader can refer to Refs. [42,43], where coordinate-space renormalization, i.e., boundary conditions, is adopted.
TABLE I: The different channels for the $D^{(*)}D^{(*)}$ systems, and similarly, for the $\bar{B}^{(*)}\bar{B}^{(*)}$. S is the strangeness while I stands for the isospin of the systems. "***" means the corresponding state does not exist due to the symmetry. For simplicity, we adopt the shorthand notations given in the table.
Making a Fourier transformation, one can obtain the effective potentials in coordinate space. In Eqs. (18)-(31) we list the specific expressions of the effective subpotentials, which are flavor-independent. The effective potential used in our calculation is the product of the flavor-independent subpotentials and the isospin-dependent coefficients which are summarized in Appendix VI. The flavor-independent subpotentials correspond to the processes PP → PP, PP → PP*, PP → P*P*, PP* → PP*, PP* → P*P, PP* → P*P* and P*P* → P*P*. To obtain the effective potentials $V_g(r)$ for the process P*P → P*P*, one just needs to make the corresponding changes in Eqs. (27)-(28). Functions $H_0$, $H_1$, $H_3$, $M_1$ and $M_3$ are given in the Appendix. The operator C, the generalized tensor operator T and the spin-spin operator S are defined in the text. Due to the large mass gap between the mesons $D$ ($D^0$, $D^+$), $D_s^+$, $D^*$ ($D^{*0}$, $D^{*+}$) and $D_s^{*+}$ (similarly, in the bottom sector), it is necessary to adopt a nonzero time component of the transferred momentum for some scattering processes. We present the $q_0$'s used in our calculation in the Appendix. Notice that $m_{D^*} - m_D > m_\pi$ leads to a complex potential for the scattering process $DD^* \to D^*D$, and we take its real part, which has an oscillatory form, see Eq. (24).
III. NUMERICAL RESULTS
Using the potentials given in subsection II B, we solve the coupled-channel Schrödinger equation and summarize the numerical results, which include the binding energy (B.E.), the system mass (M), the root-mean-square radius ($r_{rms}$) and the probability of the individual channels ($P_i$), in Tables IV, V, VII, VIII, IX and X.
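As a rough illustration of the kind of bound-state problem being solved, the Python sketch below finds the ground state of a single-channel S-wave radial Schrödinger equation for a DD* pair in an attractive Yukawa-type potential. It is only a toy version of the calculation: the actual computation is coupled-channel, includes tensor forces and form factors, and the strength C used here is an arbitrary illustrative number, not a parameter of the paper; the meson masses are approximate isospin-averaged values.

import numpy as np
from scipy.linalg import eigh_tridiagonal

hbarc = 197.327                      # MeV fm
m_D, m_Dst = 1867.2, 2008.6          # approximate isospin-averaged masses, MeV
mu = m_D * m_Dst / (m_D + m_Dst)     # reduced mass of the D D* pair, MeV
m_pi = 137.0                         # exchanged-meson mass, MeV
C = 30.0                             # assumed potential strength, MeV fm (illustrative only)

def V(r):
    """Attractive Yukawa potential in MeV, with r in fm."""
    return -C * np.exp(-m_pi * r / hbarc) / r

# Radial grid for u(r), with u(0) = u(r_max) = 0
N, r_max = 4000, 40.0
r = np.linspace(r_max / N, r_max, N)
dr = r[1] - r[0]

# Finite-difference Hamiltonian: -hbar^2/(2 mu) u'' + V(r) u = E u
kin = hbarc**2 / (2.0 * mu * dr**2)  # MeV
diag = 2.0 * kin + V(r)
off = -kin * np.ones(N - 1)

E, u = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0))
print("ground-state energy: %.3f MeV" % E[0])   # negative value => bound state

With a strength above the critical value for a Yukawa well, the printed energy comes out negative by a few MeV, mimicking the loosely bound states discussed below; the realistic answer of course depends on the full OBE potential and the cutoff.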
In our study, only the cutoff is a free parameter. However, due to the lack of experimental data, one cannot determine the cutoff exactly. Thus, it is very difficult to draw definite conclusions. Luckily, the one-boson-exchange potential model is very successful in describing the deuteron with the cutoff in the range 0.8 < Λ < 1.5 GeV. Following the study of the deuteron with the same formalism and taking into account the mass difference between the heavy meson and the nucleon, we take the range of the cutoff to be 0.9 GeV < Λ < 2.5 GeV. However, this choice is somewhat arbitrary. We sincerely hope that in the near future there will be enough experimental data with which one can determine the cutoff exactly.
Besides, we also consider the stability of the results when we draw our conclusions. For the systems with strangeness S = 0, in order to highlight the role of the long-range pion exchange in the formation of the loosely bound state, we first give the numerical results with the pion-exchange potential alone, which are marked with OPE, and then with the heavier eta, sigma, rho and omega exchanges as well as the pion exchange, which are marked with OBE, see Tables IV, V and VI. The state $D^{(*)}D^{(*)}$ [$I(J^P) = 0(0^+)$] is forbidden because the present boson system should satisfy Bose-Einstein statistics. However, the state $D^{(*)}D^{(*)}$ [$I(J^P) = 0(1^+)$] is very interesting. Using the long-range pion-exchange potential, we obtain a loosely bound state with a reasonable cutoff. For our present $D^{(*)}D^{(*)}$ [$I(J^P) = 0(1^+)$] state, with the cutoff parameter fixed larger than 1.05 GeV, the long-range pion exchange is strong enough to form the loosely bound state. If we set the cutoff parameter to be 1.05 GeV, the binding energy relative to the $DD^*$ threshold is 1.24 MeV and the corresponding root-mean-square radius is 3.11 fm, which is comparable to the size of the deuteron (about 2.0 fm). The dominant channel is $[DD^*]_-$($^3S_1$), with a probability of 96.39%. With such a large mass gap (about 140 MeV) between the threshold of $DD^*$ and that of $D^*D^*$, the contribution of the state $D^*D^*$($^3S_1$) is 2.79%. However, the probability of the D-wave is around 1%. When we tune the cutoff to be 1.20 GeV, the binding energy is 20.98 MeV and the root-mean-square radius changes into 0.84 fm. When we use the one-boson-exchange potential, we notice that the binding becomes deeper. For example, if the cutoff is fixed at 1.10 GeV, the binding energy is 4.63 MeV with the OPE potential. However, it changes into 42.82 MeV with the OBE potential for the same cutoff, see Table IV. We also plot the potentials in Fig. 1. From the potentials, one can see that the heavier rho and omega exchanges cancel each other significantly, which can be easily understood since for the isospin-zero system the isospin factor of ρ is −3 while that of ω is 1. From the potentials $V_{11}$, $V_{22}$, $V_{33}$ and $V_{44}$ of Fig. 1, one can also see clearly that the total potential is below the π-exchange potential and the contributions of the η and σ exchanges are very small. This implies that the total potential of the ρ and ω exchanges is helpful in strengthening the binding. We note that the OBE potentials deduced by introducing the form factor generate spurious deeply bound states [42]. In order to address this problem, we also plot the wave function in Fig. 2, from which one can see that there is no node except at the origin. In other words, it is really a ground state.
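The quoted isospin factors follow from the expectation value of $\vec{\tau}_1\cdot\vec{\tau}_2$ for two isospin-1/2 mesons; the following one-line check is standard isospin algebra rather than anything specific to this paper:

\langle \vec{\tau}_1\cdot\vec{\tau}_2 \rangle = 2\left[I(I+1) - \tfrac{3}{4} - \tfrac{3}{4}\right] = -3 \ \ (I=0), \qquad +1 \ \ (I=1),

so the isovector ρ exchange flips sign between the two isospin channels while the isoscalar ω exchange does not.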
To see the effect of the σ, ρ and ω exchanges, we turn off the contributions of the π and η exchanges and do the calculation again. We obtain a loosely bound state with a binding energy of 0.78 MeV and a root-mean-square radius of 3.74 fm when the cutoff parameter is fixed to be 1.44 GeV, which is much larger than the 1.05 GeV used in the one-pion-exchange case with almost the same binding energy. Again, this means that the contribution of the long-range pion exchange is larger than that of the heavier vector meson exchange in the formation of the loosely bound $D^{(*)}D^{(*)}$ [$I(J^P) = 0(1^+)$] state. This is different from the conclusion of the paper [44], in which the authors studied the charmed meson-charmed anti-meson systems with an effective field theory. In their power counting, the leading order contribution arises from the four-meson contact interaction and the one-pion-exchange interaction is perturbative. The interested reader can refer to the paper [44] for detailed information. With the numerical results and the analysis above, the state $D^{(*)}D^{(*)}$ [$I(J^P) = 0(1^+)$] might be a good molecule candidate.
We should mention that in the calculation of the X(3872) one also obtained a bound $D\bar{D}^*$ state with quantum numbers $I(J^{PC}) = 0(1^{++})$ using the OPE potential [45,47]. One may be confused, since the difference between the potential of the $DD^*$ system and that of the $D\bar{D}^*$ system is the G-parity of the exchanged meson, while the pion has an odd G-parity. Actually, the iso-singlet $D\bar{D}^*$ system has two C-parity states, one with even C-parity (C = +) and the other with odd C-parity (C = −). And the interaction of our present $DD^*$ system relates to that of the odd C-parity, but not the even C-parity, $D\bar{D}^*$ state via the G-parity rule.
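For completeness, the G-parity rule invoked here is usually stated as follows (the standard relation with the familiar G-parities of the exchanged mesons, quoted as background and not taken from the equations of this paper):

V^{DD^*} = \sum_{i} \mathcal{G}_i \, V_i^{D\bar{D}^*}, \qquad \mathcal{G}_\pi = \mathcal{G}_\omega = -1, \quad \mathcal{G}_\eta = \mathcal{G}_\sigma = \mathcal{G}_\rho = +1,

so in particular the one-pion-exchange contribution changes sign when going from the hidden-charm to the double-charm system.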
We obtain no binding solutions for the state $D^{(*)}D^{(*)}$ [$I(J^P) = 0(2^+)$] even if we tune the cutoff parameter as high as 3.0 GeV. It seems that the present meson-exchange model does not support the state $D^{(*)}D^{(*)}$ [$I(J^P) = 0(2^+)$] to be a molecule.
For the state $D^{(*)}D^{(*)}$ [$I(J^P) = 1(0^+)$], with the cutoff less than 3.0 GeV, the long-range pion exchange is not sufficient to form the bound state. However, when we add the heavier eta, sigma, rho and omega exchanges and tune the cutoff to be 2.64 GeV, a bound state with mass 3730.17 MeV appears. The binding energy relative to the DD threshold is 2.64 MeV and the corresponding root-mean-square radius is 1.38 fm. The channel DD($^1S_0$), with a probability of 91.47%, dominates this state. The probability of the channel $D^*D^*$($^5D_0$) is very small, only 0.07%.
Just as in the $J^P = 0^+$ case, with the pion-exchange potential alone we obtain no binding solutions for the quantum numbers $I(J^P) = 1(2^+)$. If we consider all the five channels, with the OBE potential we obtain a bound state with the cutoff parameter fixed to be 2.84 GeV. The binding energy relative to the DD threshold is 12.93 MeV. Surprisingly, the corresponding root-mean-square radius is as small as 0.22 fm. The dominant channel is $D^*D^*$($^5S_2$), with a probability of 99.5%. However, the probability of the channel DD($^1D_2$) is as small as 0.04%, which tells us that this is not a loosely bound DD state but a deeply bound $D^*D^*$ state. With so tight a bound state, the present meson-exchange model does not work. Therefore, we omit the channels DD($^1D_2$) and $[DD^*]_+$($^3D_2$) and keep the three $D^*D^*$ channels. With the pion-exchange potential we fail to obtain a bound state with the cutoff parameter less than 3.0 GeV. However, when we use the OBE potential and tune the cutoff to be 2.48 GeV, we obtain a bound state with mass 4014.29 MeV. The binding energy is 2.95 MeV and the corresponding root-mean-square radius is 1.61 fm. The channel $D^*D^*$($^5S_2$), with a probability of 99.93%, dominates this state. The three states $D^{(*)}D^{(*)}$ [$I(J^P) = 1(0^+), 1(1^+), 1(2^+)$] do not form molecules due to the large cutoff required in the present meson-exchange model.
2. $\bar{B}^{(*)}\bar{B}^{(*)}$
With the heavy quark flavor symmetry, the potentials for the $\bar{B}^{(*)}\bar{B}^{(*)}$ system are similar to those for the $D^{(*)}D^{(*)}$ system. The main difference between the two systems is that the reduced mass of the $\bar{B}^{(*)}\bar{B}^{(*)}$ system is much larger than that of the $D^{(*)}D^{(*)}$ system. We summarize our numerical results for the $\bar{B}^{(*)}\bar{B}^{(*)}$ system in Table V. For the state $\bar{B}^{(*)}\bar{B}^{(*)}$ [$I(J^P) = 0(1^+)$], the dominant channel carries a probability of 91.83%. The probability of the D-wave is 6.96%, see Table V. When we use the OBE potential, the binding becomes tighter as expected, which is similar to its charmed partner. The numerical results suggest that the state $\bar{B}^{(*)}\bar{B}^{(*)}$ [$I(J^P) = 0(1^+)$] seems to be a good molecule candidate. Different from its charmed partner, $\bar{B}^{(*)}\bar{B}^{(*)}$ [$I(J^P) = 0(2^+)$] can form a bound state with the pion-exchange potential if the cutoff is tuned larger than 2.88 GeV. When we set the cutoff to be 2.88 GeV, the binding energy is 2.75 MeV, and correspondingly, the root-mean-square radius is 0.72 fm. When we use the OBE potential, we obtain binding solutions with a smaller but more reasonable cutoff. Unfortunately, the binding solutions depend very sensitively on the cutoff parameter. When we tune the cutoff from 1.66 GeV to 1.72 GeV, the binding energy changes from 8.30 MeV to 73.89 MeV. Despite the reasonable cutoff, we cannot draw a definite conclusion about the state $\bar{B}^{(*)}\bar{B}^{(*)}$ [$I(J^P) = 0(2^+)$] because of the strong dependence of the results on the cutoff.
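The reduced-mass difference invoked at the start of this comparison is easy to quantify with approximate isospin-averaged masses (rounded PDG-type values, used only for illustration here):

\mu_{DD^*} = \frac{m_D m_{D^*}}{m_D + m_{D^*}} \approx \frac{1867\times 2009}{3876}\ \text{MeV} \approx 968\ \text{MeV}, \qquad \mu_{\bar{B}\bar{B}^*} \approx \frac{5279\times 5325}{10604}\ \text{MeV} \approx 2651\ \text{MeV},

i.e. the bottom system has a reduced mass nearly three times larger, which suppresses the kinetic energy and favors binding for a comparable potential.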
The pion exchange alone is also sufficient to form the loosely bound $\bar{B}^{(*)}\bar{B}^{(*)}$ [$I(J^P) = 1(0^+)$] state with the cutoff larger than 1.70 GeV. When we tune the cutoff from 1.70 GeV to 1.90 GeV, the binding energy increases from 1.05 MeV to 11.29 MeV, and correspondingly, the root-mean-square radius decreases from 2.07 fm to 0.75 fm. The dominant channel is $\bar{B}\bar{B}$($^1S_0$), with a probability of 95.35% ∼ 90.29%. And the probability of the channel $\bar{B}^*\bar{B}^*$($^5D_0$) is 1.79% ∼ 4.69%. Actually, two pseudoscalar mesons cannot interact with each other via exchanging a pion. Therefore, the binding solutions totally come from the coupled-channel effect, just as in the $\Lambda_Q\Lambda_Q$ case [48,49]. When we add the contributions of the heavier eta, sigma, rho and omega exchanges, the results change little, which implies that the eta, sigma, rho and omega exchanges cancel each other significantly. Although the results depend a little sensitively on the cutoff, the state $\bar{B}^{(*)}\bar{B}^{(*)}$ [$I(J^P) = 1(0^+)$] might also be a molecule candidate.
The state $\bar{B}^{(*)}\bar{B}^{(*)}$ [$I(J^P) = 1(1^+)$] is also an interesting one. When we tune the cutoff between 1.40 GeV and 1.70 GeV, we also obtain a loosely bound state with the OPE potential. The binding energy is 0.83 ∼ 11.80 MeV and the corresponding root-mean-square radius is 2.36 ∼ 0.80 fm. The dominant channel is $[\bar{B}\bar{B}^*]_+$($^3S_1$), with a probability of 96.59% ∼ 90.63%. Different from the isospin singlet case, when we use the OBE potential the binding becomes shallower. For example, with the OPE potential the binding energy is 11.80 MeV if the cutoff is set to be 1.70 GeV, while with the OBE potential it is 8.29 MeV for the same cutoff, see Table V.
For the state $D^{(*)}\bar{B}^{(*)}$ [$I(J^P) = 1(1^+)$], when we tune the cutoff parameter larger than 2.55 GeV, we obtain binding solutions with the OPE potential. If we set the cutoff parameter to be 2.60 GeV, the binding energy relative to the $D\bar{B}^*$ threshold is 2.43 MeV and the corresponding root-mean-square radius is 1.83 fm. However, when we add the heavier eta, sigma, rho and omega exchanges, we obtain no binding solutions with the cutoff parameter less than 3.0 GeV. It seems that the present meson-exchange approach does not support the state $D^{(*)}\bar{B}^{(*)}$ [$I(J^P) = 1(1^+)$] to be a molecule.
For the state D(*)B̄(*) [I(J^P) = 1(2+)], when we tune the cutoff parameter as large as 2.80 GeV, we obtain binding solutions with the OPE potential. If we set the cutoff parameter to 2.90 GeV, the binding energy is 2.00 MeV and the corresponding root-mean-square radius is 1.95 fm. The dominant channel is D*B̄*(^5S_2), with a probability of 97.56%; the probability of the D-wave is 2.44%. When we use the OBE potential, we obtain binding solutions with a smaller cutoff, see Table VI. If we tune the cutoff to 2.10 GeV, the binding energy is 0.44 MeV. Similar to the I(J^P) = 1(0+) case, the state D(*)B̄(*) [I(J^P) = 1(2+)] might also be a molecule.

B. The Results for The Systems with Strangeness S = 1

For the systems with strangeness S = 1, the long-range pion exchange does not exist, but there are heavier eta, sigma, K and K* exchanges. We summarize our numerical results in Table VII for (D(*)D(*))s and (B̄(*)B̄(*))s and in Table VIII for (D(*)B̄(*))s. The binding solutions for the state (D(*)D(*))s [J^P = 0+] (see also Ref. [46]) depend sensitively on the cutoff. It seems that the present meson-exchange approach does not support the state (D(*)D(*))s [J^P = 0+] as a molecule candidate.
In the corresponding bottom sector, we also obtain a bound state of (B̄(*)B̄(*))s [J^P = 0+] with a smaller and more reasonable cutoff, 1.82 ∼ 1.88 GeV. This can be easily understood, since the masses of the bottom mesons are much larger than those of the charmed mesons while the effective potentials for the two systems are similar. When we tune the cutoff to 1.82 GeV, the binding energy is 0.56 MeV and the root-mean-square radius is 2.28 fm, which is comparable to that of the deuteron (about 2.0 fm), see Table VII. For the state (D(*)B̄(*))s [J^P = 0+], if we set the cutoff parameter to 1.28 GeV, the binding energy is 6.72 MeV and, correspondingly, the root-mean-square radius is 0.92 fm. The probability of the channel DB̄_s(^1S_0) is 50.10% while that of the channel D_sB̄(^1S_0) is 25.66%. However, if the binding energy is 68.73 MeV, these two channels provide comparable contributions, 25.30% for DB̄_s(^1S_0) and 23.02% for D_sB̄(^1S_0). Given that the mass gap between DB̄_s and D_sB̄ is around 14.5 MeV, for a binding energy comparable to this value the mass gap plays an important role in the formation of the loosely bound state. However, if the binding energy is as large as 68.73 MeV, much larger than the mass gap, the important effect of the mass gap disappears, similar to the case of the X(3872) [47]. Despite the reasonable cutoff, we cannot draw a definite conclusion about the state (D(*)B̄(*))s [J^P = 0+] due to the strong dependence of the results on the cutoff.
It is necessary to mention that there exist fourteen channels for the state (D(*)B̄(*))s [J^P = 1+], and it is very hard to solve a 14×14 matrix Schrödinger equation. Due to the large mass gap between the threshold of D*B̄*_s (or D*_sB̄*) and that of DB̄*_s, and the strong repulsive interaction coming from the D-wave, we expect the channels D*B̄*_s(^3D_1), D*B̄*_s(^5D_1), D*_sB̄*(^3D_1) and D*_sB̄*(^5D_1) to provide negligible contributions, which can be clearly seen from the previous calculations. Therefore, we omit these four channels in our calculation. We obtain a bound state of (D(*)B̄(*))s [J^P = 1+] with the cutoff larger than 1.24 GeV, and the results are similar to those of (D(*)B̄(*))s [J^P = 0+], see Table VIII.
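The channel-truncation argument can be made concrete with a two-channel toy model in which the lower channel carries no diagonal attraction and binding can only be generated by an off-diagonal coupling to a channel shifted up by a threshold gap Δ. All parameter values below are invented for illustration; the actual calculation uses the full OBE potentials and up to fourteen channels.

```python
import numpy as np

hbarc = 197.327  # MeV*fm

def lowest_level_2ch(mu, delta, v12, rng=1.0, rmax=20.0, n=500):
    """Lowest eigenvalue (MeV) of a two-channel radial problem in which
    binding can arise only from the off-diagonal coupling v12."""
    r = np.linspace(rmax / n, rmax, n)
    dr = r[1] - r[0]
    kin = hbarc**2 / (2.0 * mu * dr**2)
    T = 2.0 * kin * np.eye(n) - kin * (np.eye(n, k=1) + np.eye(n, k=-1))
    V12 = np.diag(-v12 * np.exp(-(r / rng) ** 2))   # Gaussian coupling, MeV
    H = np.block([[T, V12], [V12, T + delta * np.eye(n)]])
    return np.linalg.eigvalsh(H)[0]

# A large threshold gap `delta` suppresses the admixture of the upper channel,
# which is why high-lying D-wave channels can be dropped.
for delta in (50.0, 500.0, 5000.0):
    print(delta, "->", round(lowest_level_2ch(mu=970.0, delta=delta, v12=300.0), 2))
```

As Δ grows, the induced attraction scales roughly like |V12|²/Δ, which is why channels lying far above the relevant threshold, such as the omitted D-wave channels, contribute negligibly.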
For the state (D(*)B̄(*))s [J^P = 2+], if the cutoff parameter is fixed at 2.20 GeV, the binding energy relative to the D*B̄*_s threshold is 1.23 MeV and the root-mean-square radius is 2.25 fm. The dominant channel is D*B̄*_s(^5S_2), with a probability of 88.81%. The other S-wave channel, D*_sB̄*(^5S_2), provides the second largest contribution, 10.90%, and the total contribution of the D-wave is 0.3%. It seems that the present meson-exchange approach supports (D(*)B̄(*))s [J^P = 2+] as a molecule candidate, but not a good one.
C. The Results for The Systems with Strangeness S = 2
For the systems with strangeness S = 2, the long-range pion exchange does not exist, but there are medium-range sigma and eta exchanges and the short-range phi exchange. We summarize the numerical results for the systems (D(*)D(*))ss and (B̄(*)B̄(*))ss in Table IX and for the system (D(*)B̄(*))ss in Table X. If we fix the cutoff parameter between 2.76 GeV and 2.82 GeV, we obtain a bound state of (D(*)D(*))ss [J^P = 0+] with binding energy 2.43 ∼ 28.53 MeV and root-mean-square radius 1.92 ∼ 0.55 fm. The dominant channel is D_sD_s(^1S_0), with a probability of 94.69% ∼ 87.27%. The contribution of the D-wave channel D*_sD*_s(^5D_0) is very small, as expected, less than 0.1%. In the bottom sector, we obtain binding solutions with a smaller but more reasonable cutoff of 1.90 ∼ 1.96 GeV. As one can read off from Table IX, when we set the cutoff to 1.90 GeV, the binding energy is 2.27 MeV and the root-mean-square radius is 1.17 fm. For the J^P = 1+ case in the charmed sector, the dominant channel is [D_sD*_s]_+(^3S_1), with a probability of 99.98% ∼ 99.96%. In the corresponding bottom case, we obtain a loosely bound state of (B̄B̄)ss [J^P = 1+] with binding energy 0.83 ∼ 11.98 MeV and root-mean-square radius 1.95 ∼ 0.54 fm when we set the cutoff parameter between 1.82 GeV and 1.88 GeV. Similar to the charmed case, the dominant channel is [B̄_sB̄*_s]_+(^3S_1), providing a contribution of 99.41% ∼ 99.29%. Similar to the J^P = 0+ case, it seems that the state (B̄(*)B̄(*))ss [J^P = 1+] might be a molecule whereas the state (D(*)D(*))ss [J^P = 1+] might not be.
For the J^P = 2+ case, the results are very similar to those of the J^P = 1+ case, see Table IX.
For the state (D(*)B̄(*))ss [J^P = 1+], when we fix the cutoff parameter between 2.34 GeV and 2.40 GeV, we obtain a binding energy of 1.47 ∼ 23.41 MeV and a corresponding root-mean-square radius of 2.02 ∼ 0.50 fm. The dominant channel is D_sB̄*_s(^3S_1), with a probability of 93.37% ∼ 83.13%, while the total contribution of the D-wave is very small, less than 0.1%, see Table X.
Very similar to the J^P = 0+ case, we obtain a bound state of (D(*)B̄(*))ss [J^P = 2+] with binding energy 2.98 ∼ 22.29 MeV and root-mean-square radius 1.35 ∼ 0.51 fm. The dominant channel is D*_sB̄*_s(^5S_2), with a probability of 99.98%. The numerical results indicate that the present meson-exchange approach seems to support all three states as molecule candidates, but not good ones, since the results depend somewhat sensitively on the cutoff.
IV. CONCLUSION
In the present paper, we investigate the possible molecular states composed of two heavy flavor mesons, including D(*)D(*), B̄(*)B̄(*) and D(*)B̄(*) with strangeness S = 0, 1 and 2. In our study, we take into account the S-D mixing, which plays an important role in the formation of the loosely bound deuteron, and, in particular, the coupled-channel effect in flavor space.
To clarify the role of the long-range pion exchange in the formation of the loosely bound states, we give the numerical results with the one-pion-exchange potential for the systems with strangeness S = 0, as well as the numerical results with the one-boson-exchange potential.
In our study, we notice that for some systems the probability of the D-wave is very small. What is more, the contribution of D-wave channels with higher thresholds is almost negligible for systems with a large mass gap among the thresholds of the different channels. We also notice that when the binding energy is comparable to or even smaller than the mass gap, the effect of the mass gap is magnified by the small binding energy, which is similar to the X(3872) case [47].
In the sector with strangeness S = 0, our results favor the states D(*)D(*) and B̄(*)B̄(*) with I(J^P) = 0(1+) as molecule candidates. Several of the other bound states discussed above also seem to be molecule candidates, but not good ones, because the results depend somewhat sensitively on the cutoff. For the same reason as in the S = 0 case, we cannot draw definite conclusions on the states (D(*)B̄(*))s [J^P = 0+, 1+]. However, the present meson-exchange approach does not seem to support the states (D(*)D(*))s [J^P = 0+, 2+] being molecules.
For the S = 2 systems, our results suggest that the states (B̄(*)B̄(*))ss [J^P = 1+, 2+] might be good molecule candidates, whereas the states (D(*)D(*))ss [J^P = 0+, 1+, 2+] might not be molecules. Although the results depend somewhat sensitively on the cutoff, the states (B̄(*)B̄(*))ss [J^P = 0+] and (D(*)B̄(*))ss [J^P = 0+, 1+, 2+] might also be molecules. We summarize our conclusions in Table XI. In Reference [50], the authors also studied the doubly charmed systems within the hidden gauge formalism in a coupled-channel unitary approach. For the D*D* systems with C = 2, S = 0 and I = 0, they only obtained a bound state with quantum numbers I(J^P) = 0(1+), which is similar to our result. However, their pole appeared at 3969 MeV, which is about 100 MeV larger than our result, 3870 MeV. This is because we consider the coupled-channel effect of DD* on D*D*; actually, what we obtain is a DD* bound state, not a D*D* bound state. For the systems with C = 2, S = 0 and I = 1, both their results and ours indicate that there are no molecule candidates. For the systems with C = 2, S = 1 and I = 1/2, they only obtained a bound state with quantum numbers I(J^P) = 1/2(1+). Our results indicate that only the state with I(J^P) = 1/2(1+) might be a molecule candidate, but not an ideal one, because the result depends somewhat sensitively on the cutoff parameter. For the systems with C = 2, S = 2 and I = 0, they obtained no bound states, and neither do we, see Table XI. Our results and those in Reference [50] are consistent with each other, although the two theoretical frameworks are quite different.
It is very interesting to search for the predicted exotic hadronic molecular states experimentally. These molecular candidates cannot directly fall apart into their corresponding components due to the absence of phase space. The molecular states with double charm also cannot decay into a doubly charmed baryon plus a light baryon: the masses of the lightest doubly charmed baryon and the lightest light baryon are 3518 MeV and 938 MeV, respectively, corresponding to the Ξ_cc^+ and the proton listed in the PDG [41]. The mass of the molecular state is around 3850 MeV, much smaller than the sum of the masses of a doubly charmed baryon and a light baryon; therefore, such a decay is kinematically forbidden.
Table XI: Summary of our conclusions. "***" means the corresponding state does not exist due to symmetry. "√" ("×") means the corresponding state might (might not) be a molecule; a qualified "√" denotes a state that might be a molecule candidate, but not a good one, because the results depend somewhat sensitively on the cutoff; "?" means we cannot draw a definite conclusion with the present meson-exchange approach.
Table caption: The isospin-dependent coefficients for the (D(*)D(*))ss and (D(*)B̄(*))ss systems with strangeness S = 2. The coefficients of the (B̄(*)B̄(*))ss systems, which are not shown, are similar to those of the (D(*)D(*))ss systems. | 2013-12-05T08:38:46.000Z | 2012-11-21T00:00:00.000 | {
"year": 2012,
"sha1": "5c15be5d2c9be33ada26faeb030ee875c72e0850",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1211.5007",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5c15be5d2c9be33ada26faeb030ee875c72e0850",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
4555281 | pes2o/s2orc | v3-fos-license | The Evolutionary unZIPping of a Dimerization Motif—A Comparison of ZIP and PrP Architectures
The cellular prion protein, notorious for its causative role in a range of fatal neurodegenerative diseases, evolved from a Zrt-/Irt-like Protein (ZIP) zinc transporter approximately 500 million years ago. Whilst atomic structures for recombinant prion protein (PrP) from various species have been available for some time, and are believed to represent the structure of PrPC, the first structure of a ZIP zinc transporter ectodomain was reported only recently. Here, we compare this ectodomain structure to structures of recombinant PrP. A shared feature of both is a membrane-adjacent helix-turn-helix fold that is coded by a separate exon in the respective ZIP transporters and is stabilized by a disulfide bridge. A 'CPALL' amino acid motif within this cysteine-flanked core domain appears to be critical for dimerization and has undergone stepwise regression in fish and mammalian prion proteins. These insights are intriguing in the context of repeated observations of PrP dimers. Other structural elements of ZIP transporters and PrP are discussed with a view to distilling shared versus divergent biological functions.
Introduction
Prion proteins are notorious for their central role in fatal neurodegenerative diseases in a subset of mammalian species, including humans [1][2][3]. In prion diseases, the cellular prion protein (PrPC) undergoes structural rearrangements to a β-sheet-rich conformer termed PrPSc (named after scrapie in sheep, the first known prion disease). This essential role of PrP in these disorders was demonstrated by showing that knockout of the prion protein gene (Prnp) renders mice refractory to acquiring the disease [4]. A side-product of mouse Prnp knockout studies undertaken concomitantly in several laboratories was the identification of a paralog of the prion gene, termed Doppel (Dpl). Dpl maps to a genomic region C-terminal to Prnp and, consequently, was determined to have arisen from a gene duplication event [5,6]. Further genomic sequence analyses revealed that the Shadoo (Sho) gene coded for an additional prion protein paralog [7,8]. The ancestry of this small gene family was enigmatic until 2009, when PrP was first observed to interact with a subset of zinc transporters of the Zrt-/Irt-like Protein (ZIP) family [9], and subsequent bioinformatic analyses revealed that PrP and ZIP transporters meet several criteria that establish common ancestry [10]. The evolutionary relationship was particularly apparent in comparisons of PrP and ZIP ectodomain sequences in fish genomes, which exhibit a degree of sequence similarity and identity previously reported in pair-wise sequence comparisons of PrP and Dpl, or PrP and Sho. In contrast to ZIPs, which are multi-spanning transmembrane proteins, the prion protein is anchored in the membrane by a glycosylphosphatidylinositol (GPI) anchor, a shift in topology also observed in other protein families [11,12]. Consistent with the view that the prion protein founder gene represented a truncated ZIP gene, such a shift in topology can be experimentally induced when a gene coding for a transmembrane protein is truncated at the 3′ end of its first transmembrane domain [13].
Whereas the biology of the prion protein in health and disease has been extensively studied and reviewed [3,14,15], considerably less is known about ZIP transporters, which are coded by members of the solute carrier 39a (Slc39a) gene family. In humans and mice, this family comprises 14 genes, whose gene products appear to be tasked with the import of zinc and other divalent cations into the cytosol, either from the extracellular space or from intracellular compartments. Autosomal recessive mutations in the ZIP4 and ZIP13 genes have been linked to acrodermatitis enteropathica, a rare zinc deficiency syndrome [16], and to a form of Ehlers-Danlos syndrome characterized by a skeletal dysplasia that mainly affects the spine and also causes developmental deformations of the hands [17]. The latter symptoms speak to an emerging pattern of ZIP-dependent phenotypes that point toward roles of these proteins in specific morphogenetic programs. In particular, members of the so-called LIV1 subfamily of ZIPs, featuring ectodomains with homology to PrP [10], stand out in this way (Figure 1A). For instance, ZIP6 and ZIP10, the ZIP transporters most closely related to PrP, were shown to contribute to the mammalian oocyte-to-egg transition [18]. Moreover, the morpholino-based knockdown of ZIP6 or ZIP10 caused an embryonic arrest in zebrafish with characteristics of an impaired epithelial-to-mesenchymal transition (EMT) [19,20], a phenotype reminiscent of that observed following the knockdown of PrP in the same paradigm [21]. The striking overlap in ZIP6- and ZIP10-dependent phenotypes was recently resolved by data which clarified that these proteins form a functional heteromer [19,22]. The ability of these ZIPs to interact directly may also account for their original appearance amongst a short list of PrP-interacting proteins [9]. This interpretation assumes that PrP inherited the structural features responsible for interactions amongst these ZIPs from a common ancestor. It is currently unclear whether direct or third-party interactions underlie the shared links of PrP and ZIPs to EMT. Recent work in NMuMG cells, a mammalian EMT model, put a spotlight on the neural cell adhesion molecule (NCAM1) by showing that not just PrP [23][24][25] but also the ZIP6-ZIP10 heteromer [22] predominantly interacts with this cell adhesion molecule. However, although both PrP and ZIP6-ZIP10 affect post-translational modifications on NCAM1 during its involvement in EMT, they do so in different and perhaps complementary ways. More specifically, whereas signaling downstream of PrP was shown to control the transcription of the sialyltransferase (ST8SIA2) that mediates NCAM1 polysialylation [26], the ZIP6-ZIP10 heteromer appeared to control NCAM1 phosphorylation at a specific cluster of cytosolic phosphoacceptor sites through its recruitment of GSK3 [22].
What are the structural features that govern these similarities and differences between PrP and its closest ZIP family members? Until recently, high-resolution structural data were only available for recombinantly expressed PrP [27][28][29], but not for ZIPs. According to these data, prion proteins from various species are composed of a disordered N-terminal domain and a folded C-terminal domain characterized by three α-helices and a short two-stranded β-sheet. The two most C-terminal α-helices form a conserved helix-turn-helix fold that is stabilized by an internal disulfide bridge and can be post-translationally modified by up to two complex N-glycans. Attempts to solve the structure of a fish prion protein, which presumably would be more closely related to ZIP ectodomain structures, were not met with success [30]. Finally, the first high-resolution crystal structure of a ZIP ectodomain [31,32] became available in the past couple of years, followed by a separate structure of a prokaryotic ZIP containing the transmembrane domain conserved in the ZIP family [32]. Although the ZIP ectodomain structure is from ZIP4, a relatively distant PrP relative [10], the sequence of its membrane-adjacent domain is sufficiently similar to ZIP6 and ZIP10 to be of interest in this context. Here we describe this structure, compare it to PrP, and discuss its significance for understanding the biology and evolution of mammalian prion proteins. (Figure 1 caption, panels B and C: Homologous α-helices are colored in cyan. For reference, the rendering also depicts the position of the side-chain of methionine-129 within the first short β-strand in PrPC. (C) Comparison of the CFC domains of bat ZIP4 (PDB: 4x82) and human PrP (PDB: 1qm0); as the available bat ZIP4 ectodomain structure does not contain residues C-terminal to the CFC, its structure ends on the respective cysteine, whereas the PrP structure extends to the C-terminus.)
Comparison of Molecular Architectures of ZIP and Prion Proteins
The ZIP4 ectodomain (from Pteropus alecto, the fruit bat) was solved at a resolution of 2.85 Å and was shown to be composed of two structurally independent subdomains. Its N-terminal subdomain of 156 amino acids, bearing no sequence homology to PrP, folds into a globular cluster of nine α-helices, termed the helix-rich domain (HRD). The HRD is connected through a flexible linker to a folded C-terminal domain comprising amino acid residues 192-322. The latter features two helix-turn-helix folds, composed of helices α10-α11 and α13-α14, that are connected to each other by a disordered histidine-rich segment (residues 232-255) and a short α12 helix (Figure 1B). A striking feature of the ZIP4 ectodomain is that it crystallized as a dimer, held together by extensive interactions among hydrophobic residues on a large interface, including the 'PAL' sequence motifs present in the middle of helix α14 of the interacting protomers. Hence, it was suggested to refer to the C-terminal subdomain as the PAL-motif containing domain (PCD). When comparing the ZIP4 PCD to PrP, the most apparent shared characteristic is the C-terminal helix-turn-helix fold, which in PrP comprises its helices α2 and α3 (often referred to as helices B and C). However, helix α2 in PrP is longer than α13 in ZIP4, thereby spanning homologous sequence segments that encompass ZIP4 PCD helices α12 plus α13 (Figure 1C). More N-terminal sequence elements present in mammalian prion proteins, including helix α1 (helix A) and the short two-stranded β-sheet observed in this sequence region, cannot be aligned to structural elements within ZIP4. While these elements are well conserved within mammalian PrP, they fall within a sequence segment that is highly divergent not only in ZIPs but also in fish prion proteins. The diversity of this region can be attributed to the presence of repetitive elements that are prone to contract and expand throughout evolution, but it may also indicate that this region is not essential for a shared core function of members of this protein family. In the ZIP4 PCD, a relatively short and disordered histidine-rich fragment (31 amino acids) is present between the pair of helix-turn-helix folds. The corresponding region is expanded in the subbranch of ZIPs more closely related to PrP, encompassing 89 and 232 residues in human ZIP6 and ZIP10, respectively. As the respective N-terminal domain of mammalian PrPC is known to be highly disordered, one may speculate that the N-terminal regions of related ZIPs might also not acquire a particular fold. This may indeed be the case for ZIPs 6 and 10, whose histidine-rich fragments preceding the second helix-turn-helix fold (α13 and α14 in the ZIP4 PCD) are predicted to be unstructured, and which have no sequences that correspond to the HRD in ZIP4.
Disappearance of CPAL Motif in the Prion Protein Subbranch of ZIP-PrP Protein Family
The ZIP4 ectodomain structure draws attention to the membrane-adjacent helix-turn-helix fold, which appears to be a core element shared between homologous members of the ZIP/PrP protein family (Figure 1C). Extensive sequence comparisons of ZIP genes across a multitude of organisms revealed that this sequence segment evolved approximately 750 million years ago in a ZIP ancestral gene, around the time when metazoans evolved [33]. The approximate timing of this event can be inferred from the presence of homologous exons in the genomes of basal metazoan organisms, including Trichoplax adhaerens, a marine organism with a body plan lacking organs. Whereas the exon-intron structure of ZIP genes is generally quite diverse, the genomic segment coding for this core region is invariably flanked by introns in metazoans (Figure 2A). Consequently, the absence of these flanking introns in all PrP sequences represented one of the strongest pieces of evidence in support of the conclusion that the prion protein founder gene must have evolved by retroinsertion of a spliced transcript of an ancient ZIP gene [33]. Additional elements by which this segment is recognizable include a pair of highly conserved cysteines (hence its designation as the cysteine-flanked core (CFC) domain) and the aforementioned PAL sequence. Close comparison of PrP and ZIP ortholog sequences bearing the PAL motif establishes that only the proline in this motif is highly conserved; the second position is very often occupied by a small amino acid (A/G), and the third position by a hydrophobic one (mostly L/I/V). The motif is often preceded by a cysteine residue and is followed by another hydrophobic amino acid (mostly L/I/V). A regression of this extended CPALL motif appears to have occurred throughout PrP evolution, i.e., fish prion proteins lack the cysteine residue that is present in their closest ZIP relatives, and a full replacement of the PAL motif with a sequence stretch characterized by polar amino acids occurred in mammalian PrP genes (Figure 2B). The CFC also frequently encompasses Nx(T/S) glycan acceptor sites, which, in a subset of prion proteins [34] and close ZIP relatives [35], were shown to be glycan-occupied, but these nonetheless cannot be considered a core feature of the CFC domain. There is another variation to the CFC that can be observed in a subset of members of the ZIP/PrP family, namely a relatively large sequence insertion at a site predicted to form the 'turn' within the helix-turn-helix fold. Such insertions are found in ZIP orthologs of some insects (e.g., the mosquito Aedes aegypti) and a subset of fish prion proteins (e.g., PrP2 from the pufferfish Takifugu rubripes). Consistent with the interpretation that the respective region in the native folds of these proteins must be able to accommodate relatively large additional structures, the second N-glycan acceptor site in mammalian PrP also maps to this 'turn'. One may speculate that the lack of the PAL motif in mammalian PrP is linked to the insertion of this second N-glycan acceptor site. For example, it may either functionally replace the PAL motif (see below) or offer some other, unknown evolutionary fitness adaptation.
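To make the consensus described above explicit, a minimal motif scan can be written as a regular expression: an optional cysteine, the conserved proline, a small residue (A/G), and two hydrophobic residues (L/I/V). The sketch below uses invented toy fragments, not real database sequences, purely to illustrate the stepwise regression of the motif.

```python
import re

# Extended CPALL consensus: optional Cys, conserved Pro, a small residue
# (A/G), then two hydrophobic residues (L/I/V), per the text's description.
CPALL = re.compile(r"C?P[AG][LIV][LIV]")

# Hypothetical toy fragments (not real sequence entries) for illustration:
fragments = {
    "ZIP-like":       "HSHDHSHCPALLAGHE",  # full CPALL-type motif
    "fish-PrP-like":  "NSGYYPALIGDKS",     # cysteine lost, PAL retained
    "mammal-PrP-like": "NNQNTFVHDCVNI",    # PAL motif fully replaced
}
for name, seq in fragments.items():
    hit = CPALL.search(seq)
    print(name, "->", hit.group(0) if hit else "no CPALL-type motif")
```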
The CPAL Motif Represents a Dimerization Interface
In the context reviewed here, perhaps the single most interesting insight revealed by the ZIP4 ectodomain structure is the discovery of the ZIP4 dimer interface as being centered on the hydrophobic PAL motif (Figure 3A). Several lines of evidence have suggested for some time that ZIP proteins comprising a CFC domain can assemble into dimers in vitro and may also exist as dimers in vivo. For example, recombinant ZIP5 and ZIP13 ectodomains were observed to purify as homodimers [36,37]. Additional combinatorial complexity of ZIP protein structures may exist due to the formation of heterodimers. The aforementioned ZIP6-ZIP10 heteromer is the first example of this phenomenon (note that current data are consistent with the latter complex representing a heterodimer; however, a rigorous validation of the stoichiometry of this interaction has not been made, hence its tentative designation as a heteromer [19,22]). As all of the above proteins share a homologous PAL motif, it is to be expected that this motif will also be central to their dimerization interfaces. When the recombinant ZIP4 construct was truncated C-terminal to its helix α11, the expressed protein no longer eluted as a dimer on size exclusion chromatography, indicating that the CFC is indeed essential for dimerization [31]. Moreover, a serine-to-cysteine replacement of the amino acid that precedes the ZIP4 PAL motif, thereby mimicking the predicted dimerization interface of the ZIPs that naturally harbor a cysteine in this position, caused the mutant to migrate in a Western blot as an SDS-resistant dimer (Figure 3B). This finding is intriguing, as it suggests the enhanced stability of the mutant dimer is due to the respective cysteine residues in the interacting monomers forming a disulfide bridge in trans. Consistent with this interpretation, the protein shifted to monomeric molecular weight when the same samples were subjected to disulfide bridge-reducing conditions prior to their SDS-PAGE separation. Additional evidence supporting this dimerization model came from the biochemical characterization of the ectodomain of ZIP14 (from Pteropus alecto), which naturally possesses a cysteine residue immediately preceding the conserved proline residue. Again, as for the aforementioned artificial ZIP4 mutant, a disulfide bond-mediated dimer was observed in the SDS-PAGE analysis, and a cysteine-to-serine substitution led to a monomeric species under non-reducing conditions. These results corroborate the conclusion that PAL motif-centered dimerization represents a universal model for ZIPs containing this motif in their ectodomains.
Whereas there seems to be good agreement on the predominant existence of CFC domain-containing ZIPs as dimers, the same cannot be said for PrP, although PrP dimers have repeatedly been reported. For instance, a 60 kDa PrP dimer was initially observed in murine neuroblastoma cells expressing hamster PrP [38]. Extensive subsequent recombinant work with various PrP constructs, which most often relied on in vitro refolding steps, revealed the protein to give rise to monomeric NMR structures [28]. The folding repertoire of PrP is more complex, though, as the protein could be observed to crystallize as a monomer or dimer [39]. Curiously, subtle differences in the protein sequence (e.g., single amino acid mutations or the presence of the M129V polymorphism) were critical for whether the protein crystallized as a non-swapped or swapped dimer, and seemed to determine the specific architecture of the dimer interface [38,40]. Although the aforementioned studies corroborated the notion that PrP can adopt a surprising diversity of conformational states, the in vivo relevance of these states is difficult to gauge. Cell-based studies with N-glycosylated wild-type PrPC observed a monomer-dimer equilibrium on the basis of crosslinking experiments, size exclusion chromatography, and enzyme-linked immunosorbent assay analyses [41]. Similarly, expression of cattle, hamster and human PrP in baculovirus-transduced insect cells led to dimeric PrP [42]. It is currently not understood whether differences in the folding behavior of PrP in bacterial versus eukaryotic cells merely reflect a need for a eukaryotic cellular protein folding environment, indicate that the attachment of N-glycans is critical for dimer formation, or have other causes. That dimerization might play a critical role in PrP's biogenesis and trafficking was corroborated by fusing it to the dimerization domain of the FK506 binding protein (Fv). When dimerization was induced by the addition of an Fv dimerization ligand, the authors observed profound increases in the levels of PrP that reached the cell surface [43,44]. Currently available data on dimerization motifs for mammalian PrP indicate that these are non-homologous to the ZIP4 dimer interface discussed above and instead rely on structural features not shared with this ZIP transporter. It remains to be seen to what extent the structures of more closely related ZIPs will be informative for understanding this aspect of the biology of mammalian PrP.
To date, the smallest extensively studied PrP deletion construct known to convert and transmit prion-like disease in mice comprised 106 amino acids (Δ23-88, Δ141-176) [45]. This expression product comprised the entire CFC domain, which in human PrP begins at amino acid 179. However, although the CFC is a critical component of disease-associated prion proteins, and appears to be retained in their proteinase-resistant core of 27 to 30 kDa (PrP27-30), the study of ZIPs may not help us elucidate their structures (the reader is reminded that neither fish PrP nor the mammalian paralogs Dpl and Sho have been shown to undergo prion-like conversion). Rather, we anticipate that comparative analyses of PrP and ZIPs may continue to help elucidate the physiological function of PrPC, and may hold a key to understanding the molecular mechanisms by which PrP affects its next neighbors.
Based on the striking conservation of the PAL motif in fish PrP sequences, our prediction is that the latter not only exist naturally as dimers but also adopt the interface seen in ZIP4. Dramatic effects of point mutations on the ability of ZIP4 to fold and reach the cell surface [16,31] and a strong interaction of the ZIP6-ZIP10 complex with calreticulin (an ER-resident chaperone) suggest that the assembly of ZIPs might be intricate. It then may not come as a surprise that fish PrP was observed to be refractory to recombinant protein refolding protocols that repeatedly produced high-quality NMR structures for mammalian prion proteins [30]. It is tempting to speculate that a move to a eukaryotic expression system, possibly augmented by a tyrosine-to-cysteine replacement of the tyrosine preceding the 'PAL' motif present in fish prion sequences, will lead to better-behaved fish PrP expression products [30].
Conclusions
Studying the physiological function of homologous segments within ZIPs may hold a key to understanding elusive aspects of the biology of PrP. Nonetheless, because ZIP4 is a relatively distant PrP relative, there are limits to the extent to which insights into its biology will be informative for understanding PrP. However, this overall approach should become more valuable once structures for the ZIP6-ZIP10 heteromer become available. Already, this general comparative approach has precipitated research that revealed a role of PrP in EMT [26]. It was also instrumental for understanding that the PrP-NCAM1 interaction is unlikely to have evolved following the split of PrP sequences from their ZIP relatives, but was most likely inherited by PrP and the ZIP6-ZIP10 complex from a common ancestor [22]. Currently missing are insights into the molecular workings of PrP and the molecular mechanisms by which its presence influences its interactors. We anticipate that major advances in this direction will be made once the ZIP6-ZIP10 complex can be functionally interrogated to dissect how structural elements within this complex affect its function.
Abbreviations: Pa, Pteropus alecto; PCD, PAL-motif containing domain; PL, prion-like; PrPC, cellular prion protein; SLC, solute carrier; Ta, Trichoplax adhaerens; TM, transmembrane; Tr, Takifugu rubripes; Ts, Trachemys scripta. | 2018-05-31T23:33:09.726Z | 2017-12-29T00:00:00.000 | {
"year": 2017,
"sha1": "a3e4d938e8f159831c65c196551c4298b0fe6f0b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0817/7/1/4/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3e4d938e8f159831c65c196551c4298b0fe6f0b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
221236606 | pes2o/s2orc | v3-fos-license | RNA-seq analysis of gene expression profiles in isolated stria vascularis from wild-type and Alport mice reveals key pathways underlying Alport strial pathogenesis
Previous work demonstrates that the hearing loss in Alport mice is caused by defects in the stria vascularis. As the animals age, progressive thickening of the strial capillary basement membranes (SCBMs) occurs, associated with elevated levels of extracellular matrix expression and of hypoxia-related gene and protein expression. These conditions render the animals susceptible to noise-induced hearing loss. In an effort to develop a more comprehensive understanding of how the underlying mutation in the COL4A3 gene influences homeostasis in the stria vascularis, we performed vascular permeability studies combined with RNA-seq analysis using isolated stria vascularis from 7-week-old wild-type and Alport mice on the 129 Sv background. Alport SCBMs were found to be less permeable than those of wild-type littermates. RNA-seq and bioinformatics analysis revealed that 68 genes were induced and 61 genes suppressed in the stria of Alport mice relative to wild-type using a cut-off of 2-fold. These included pathways involving transcription factors associated with the regulation of pro-inflammatory responses, as well as up- or down-regulated cytokines, chemokines, and chemokine receptors. Canonical pathways included modulation of genes associated with glucose and glucose-1-PO4 degradation, NAD biosynthesis, histidine degradation, calcium signaling, and glutamate receptor signaling (among others). In all, the data point to the Alport stria being in an inflammatory state, with disruption in numerous metabolic pathways indicative of metabolic stress, a likely cause of the susceptibility of Alport mice to noise-induced hearing loss under conditions that do not cause permanent hearing loss in age/strain-matched wild-type mice. The work lays the foundation for studies aimed at understanding the nature of strial pathology in Alport mice. The modulation of these genes under conditions of therapeutic intervention may provide important pre-clinical data to justify trials in humans afflicted with the disease.
Introduction
The stria vascularis is a highly specialized tissue lining the lateral wall, composed of marginal cells at the luminal surface that form basolateral infolds with intermediate cells, and of basal cells that are attached to the spiral ligament fibrocytes. Specialized pericytes and endothelial cells, along with the strial capillary basement membranes, maintain a tight fluid barrier in the strial capillaries [1]. The stria functions through various channels and transporters to maintain the endocochlear potential (EP) of the endolymph in the scala media of the cochlear duct. This EP is what drives the depolarization of cochlear hair cells when the mechanotransduction channels in the hair cell stereocilia open in response to movement of the basilar membrane. Thus, defects in strial function result in hearing loss, and Alport mice are used as a model to study strial dysfunction. Alport syndrome is characterized by delayed-onset glomerular disease associated with progressive hearing loss [2]. The syndrome results from mutations in the basement membrane type IV collagen genes COL4A3 and COL4A4 (autosomal recessive Alport syndrome [3,4]) and COL4A5 (X-linked Alport syndrome [5]). In the glomerulus of the kidney, the glomerular basement membranes (GBMs), which are initially thinner than normal GBMs owing to the lack of the type IV collagen α3/4/5 network, become progressively and irregularly thickened and thinned, with multiple laminations of electron-dense areas. This diagnostic phenotype, which is linked to the onset and progression of proteinuria, ultimately culminates in focal segmental glomerulosclerosis associated with renal failure. In the inner ear, we previously showed, based on electron microscopy analysis of 7-9-week-old autosomal Alport mice, that basement membrane thickening occurs specifically in the strial capillary basement membranes (SCBMs) [6]. This was associated with the accumulation of basement membrane proteins, including laminins, entactin, and type IV collagen, and with elevated expression of hypoxia-associated genes and matrix metalloproteinases [7,8].
Alport mice showed permanent ABR threshold shifts after a noise exposure that was insufficient to cause permanent threshold shifts in age/strain-matched wild-type mice [8]. These data strongly suggest that the stria vascularis, which shows a loss of metabolic homeostasis, is the source of pathologic hearing loss in the Alport mouse model.
In order to further our understanding of the nature of the changes that occur in the Alport stria vascularis compared to the healthy wild-type stria, we verified that the thickening of the SCBMs negatively influences vascular permeability, which may account for the metabolic stress. We then performed RNA-seq analysis of strial RNA from wild-type and Alport mice at an age when SCBM thickening is apparent. The data confirm that the stria vascularis in the 7-week-old 129 Sv Alport mouse model is injured and dysfunctional. Moreover, the data provide new information relevant to whether current or planned pre-clinical trials of novel therapeutics can restore normal strial homeostasis.
Mice
129 Sv autosomal Alport mice were developed in the Cosgrove lab [9]. All mice were on a pure 129 Sv genetic background and maintained in house on Teklad Envigo diet #7912. All procedures involving animals were conducted in accordance with approved IACUC protocols at both sites (Boys Town National Research Hospital and Saint Louis University) and were consistent with the NIH guide for the care and use of laboratory animals. Every effort was made to minimize animal usage as well as any pain or distress. Both males and females were utilized. Animals were housed in groups in rooms with a 14/10 hour light/dark cycle.
Strial microdissection
A detailed procedure for strial microdissection was described previously [7]. The temporal bones were harvested from non-transcardially perfused mice following cervical dislocation (within two minutes) and transferred to ice-cold HBSS buffer in a specialized petri dish. Both striae were microdissected within 10 minutes and transferred to TriZol for later RNA isolation. The striae from eight wild-type and eight Alport mice were combined for a single analysis, and the experiment was performed two independent times.
RNA-seq analysis
Microdissected striae were lysed in TriZol® (Ambion®, Carlsbad, CA) and RNA was isolated from the aqueous phase using the PureLink® RNA Mini Kit (Ambion®). An RNA Quality Number (RQN) was determined for each sample using a Fragment Analyzer™ Automated CE System (Advanced Analytical Technologies, Inc., Ames, IA). Samples with RQNs of ≥ 8 were used for SMART-Seq® v4 Ultra® (Takara Bio USA, Inc.) cDNA synthesis, and libraries were generated utilizing the Nextera™ XT DNA Library Preparation Kit (Illumina®, San Diego, CA). RNA-seq analysis was performed using the Illumina® NextSeq™ 500 system (San Diego, CA). The data were analyzed using Ingenuity Pathway Analysis software (QIAGEN Bioinformatics) by the University of Nebraska Medical Center bioinformatics core facility. This analysis classifies the data into categories and then sub-categories. For example, major categories include canonical pathways, transcriptional regulators, disease functions, and toxicity functions. These are further broken down into sub-categories; based on disease functions, these include immunological disease, metabolic disease, endocrine disorders, and organismal injury. For each sub-category, the molecules that are significantly modulated and the degree of that modulation are listed by the software based on a two-fold cut-off for up- or down-regulation. Because this categorization has considerable redundancy, given that the program was written for broad-spectrum analysis, we present general categories that the literature suggests reflect the underlying causes of strial pathology, which is not a category probed by the software because it is too specific [1,10,11]. These are the data presented in Tables 1-4. The unbiased nature of this approach is reflected in the fact that >80% of these genes have never been ascribed as functionally important in the stria vascularis. Standard deviations are provided for two independent RNA-seq experiments. The complete set of raw data can be accessed at https://www.ncbi.nlm.nih.gov/sra/?term=PRJNA602720.

Vascular permeability assay

8.5-week-old Alport and WT mice (n = 16/genotype) were anesthetized with Avertin (0.4 mg/g BW, IP). Each mouse received an intra-cardiac injection (200 μl) of unconjugated Rhodamine fluorescent dye (0.04% in sterile normal saline (NS)) followed by a 3-minute incubation. At 30 and 60 sec post-injection, the lips, nose, and front paws were examined under UV light. If no fluorescence was noted, the experiment was aborted. If at 3 min post-injection the lips, nose, and front paws glowed, the mouse was transcardially perfused with 5 ml 0.1 M cacodylate to wash the Rhodamine dye out of the vasculature. The cochleae were isolated, perilymphatically perfused with, and then immersed in, cold (4˚C) 4% paraformaldehyde in 0.1 M cacodylate buffer. Within 20 minutes, the lateral wall (LW, Fig 1) of both cochleae, representing the stria vascularis and spiral ligament, was microdissected, transferred into a 0.5 ml tube, and frozen in liquid nitrogen. Since the stria vascularis tended to fragment if not attached to the lateral wall, in a subset of dissections only the spiral ligament (SL, Fig 1) was collected to determine its contribution to lateral wall diffusion. The vascular permeability of the stria vascularis was taken as the difference between that of the whole LW and the SL, as shown in Fig 1. A negative control group (n = 7) was created using LW from mice that underwent intra-cardiac injection with 0.9% sterile NS in place of Rhodamine dye, followed by isolation of the LW.
Each sample of two LWs or two SLs was iced and micro-homogenized (100 strokes) in 30 μl of sterile phosphate-buffered saline (PBS). Fluorescence (550/580) was quantified in black 384-well low-volume plates. A standard curve was constructed for the unconjugated Rhodamine fluorescent dye (y = relative fluorescence units (RFU) vs. x = concentration (%w/v)). Since the standard curve was nonlinear at low levels of fluorescence, the fluorescent dye concentration in each sample was determined by solving the quadratic equation for its positive root (x1 or x2), expressed as %w/v. The permeability of the StV was determined by subtracting the mean fluorescence of the SL samples from that of the LW samples for each genotype.
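A minimal sketch of this quantification step, assuming quadratic standard-curve coefficients obtained elsewhere (e.g., with np.polyfit) and invented RFU readings, is given below; the positive root of the inverted curve yields the dye concentration, and the StV permeability follows as the LW minus SL difference.

```python
import numpy as np

def concentration_from_rfu(rfu, a, b, c):
    """Invert the quadratic standard curve y = a*x**2 + b*x + c, returning
    the positive root as dye concentration (%w/v)."""
    disc = b**2 - 4.0 * a * (c - rfu)
    roots = ((-b + np.sqrt(disc)) / (2.0 * a), (-b - np.sqrt(disc)) / (2.0 * a))
    return max(roots)  # the physically meaningful, positive root

# Illustrative coefficients and sample readings (not the study's values):
a, b, c = 1.2e5, 3.4e3, 15.0
lw = [concentration_from_rfu(v, a, b, c) for v in (480.0, 515.0, 450.0)]
sl = [concentration_from_rfu(v, a, b, c) for v in (120.0, 135.0, 110.0)]
stv_permeability = np.mean(lw) - np.mean(sl)   # StV = LW - SL
print(round(stv_permeability, 5))
```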
Immunofluorescence analysis
Mice were deeply anesthetized with ketamine and xylazine (300 mg/kg and 30 mg/kg, respectively, IP). The appropriate level of anesthesia was evaluated by loss of the hindquarter reflex after a pinch with thumb and index finger; if the indicated dose did not inhibit the reflex, an additional 20% of the anesthetic was administered. Animals were euthanized by transcardial perfusion with PBS while under anesthesia. Cochleae were perfused with 4% paraformaldehyde and then decalcified overnight in 150 mM EDTA on a rotator at 4˚C. The samples were
Confocal microscopy
Confocal images were captured using a Leica TCS SP8 MP confocal imaging system with a 63x (NA 1.4, oil) or 10x (NA 0.3) objective. Final figures were assembled using Adobe Photoshop and Illustrator software (Adobe Systems, CA).
Statistical analysis
For vascular permeability studies, a Kruskal-Wallis H-test followed by Dunn's multiple comparison test was used to determine whether any tissue autofluorescence contributed to the measured rhodamine fluorescence. The presence of significant differences between the rhodamine fluorescence in the wild-type and Alport spiral ligament and lateral wall samples was determined using a 2-way ANOVA with factors of genotype and tissue, followed by a Holm-Sidak all-pairwise post hoc analysis. The negative control group was removed from the ANOVA analysis because it contained only lateral wall tissue samples and thus violated a 2-way ANOVA assumption. Finally, a t-test was conducted to determine whether the WT StV fluorescence differed significantly from the KO StV fluorescence. Significance was set at p < 0.05. The statistics and resultant data graphs were generated using Sigma Plot 13 (SYSTAT Software, San Jose, CA). Strial macrophages were counted and the data analyzed using a two-tailed Student's t-test. Real-time RT-PCR data were analyzed using a two-tailed Student's t-test.
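For readers who wish to reproduce this pipeline in open-source software rather than Sigma Plot, the following sketch runs the genotype × tissue two-way ANOVA and the StV t-test on invented fluorescence values; the numbers and group means are placeholders, and a Holm-Sidak post hoc correction could be added via statsmodels' multipletests (method='holm-sidak').

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical fluorescence readings (%w/v), not the study's raw data:
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "fluor": np.concatenate([rng.normal(m, 0.002, 8)
                             for m in (0.020, 0.012, 0.004, 0.004)]),
    "genotype": ["WT"] * 8 + ["Alport"] * 8 + ["WT"] * 8 + ["Alport"] * 8,
    "tissue":   ["LW"] * 16 + ["SL"] * 16,
})

# 2-way ANOVA with a genotype x tissue interaction term:
model = ols("fluor ~ C(genotype) * C(tissue)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# t-test on the derived StV permeability (LW minus the mean SL per genotype):
wt = df.query("genotype=='WT' and tissue=='LW'").fluor \
     - df.query("genotype=='WT' and tissue=='SL'").fluor.mean()
ko = df.query("genotype=='Alport' and tissue=='LW'").fluor \
     - df.query("genotype=='Alport' and tissue=='SL'").fluor.mean()
print(stats.ttest_ind(wt, ko))
```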
Results
The distribution of basement membranes (black solid lines) in a mid-modiolar cross-section of the mouse cochlea is shown in Fig 1. As a reference for Fig 2, we outlined the lateral wall in yellow and the spiral ligament in blue. Previous work showed significant thickening confined to the SCBMs in Alport mice at 7 weeks of age [6]. This same study showed that basement membranes in all other cochlear regions (Fig 1) did not vary significantly in the Alport mice compared to age-matched wild-type mice. Morphometric measures of SCBMs from 9 different wild-type mice and 9 different Alport mice between 8 and 9 weeks of age showed that the SCBMs were significantly thickened (59.7 ± 19.1 nm for wild-type versus 98.7 ± 38 nm for Alport SCBMs) [8]. We surmised that the thickening of the SCBMs might compromise the permeability of the strial capillaries. To test this, Rhodamine dye was injected intracardially into 8.5-week-old wild-type and Alport mice. The concentration of Rhodamine dye in cochlear lateral wall tissue after transcardial flushing of the vasculature was quantified via fluorimetry against a standard curve. The results in Fig 2A show that the SCBMs in the stria of Alport mice are indeed less permeable than those in wild-type mice, presumably owing to the thickened SCBMs. Results of the Kruskal-Wallis (H = 14, p = 0.007, df = 4) and multiple comparison tests (Dunn's, p < 0.05) showed that the level of fluorescence of the negative control lateral wall (LW; Fig 1), as well as that of the spiral ligament (SL; Fig 1) tissue from both wild-type and Alport cochleae, was significantly lower than that present in the lateral wall tissue of both the wild-type and Alport mice. Further analysis revealed that a significant interaction existed between genotype and tissue (ANOVA, F(1,28) = 8.977, p = 0.006). Post hoc analysis revealed that the significance (Holm-Sidak, p < 0.05) is due to the lower permeability of Rhodamine from capillaries to the surrounding lateral wall tissue in Alport versus wild-type mice.
No difference (Holm-Sidak, p > 0.05) was noted in the fluorescence of spiral ligament tissue in the two types of mice. Together, the lateral wall and spiral ligament data indicate that the lower vascular permeability in the Alport lateral wall is due to the accumulation of basement membrane proteins, resulting in thickened strial capillaries with dysfunctional endothelial and pericyte cells. To test this conclusion, StV permeability was determined from the difference in the median level of fluorescence between LW and SL tissues for each genotype. Fig 2B shows that the permeability of the StV in the Alport mouse is significantly less (t-test, p < 0.001) than that of the wild-type mice. We presumed that reduced permeability might compromise strial function. To derive a clearer understanding of the health of the Alport stria vascularis, we performed RNA-seq analysis using microdissected stria from 7-week-old wild-type and Alport mice. The stria vascularis was microdissected from eight each of 7-week-old wild-type and Alport mice. The strial cDNA was sequenced and the data analyzed using Ingenuity Pathway Analysis software (QIAGEN Bioinformatics). The experiment was performed twice. Only genes that were consistently up- or down-regulated in both independent experiments by at least two-fold are presented. We broke our analysis into four categories that the literature indicates likely reflect the underlying mechanisms of strial pathology (as described in the methods): cell morphology ([10]; Table 1), injury ([1]; Table 2), hearing/hearing loss (Table 3), and upstream regulators ([11]; Table 4). It is notable that while the transcripts shown represent the significant differences identified from the roughly 23,000 mouse genes analyzed and capture most of the modulated genes, they are not completely exhaustive. The raw data are provided in the NCBI repository to allow independent analysis by other investigators (see methods for link). For cell morphology (Table 1), most of the modulated genes regulate cell survival, cell signaling, transcriptional regulation, cell adhesion, and ion channels. For injury (Table 2), most of the modulated genes regulate pro-inflammatory cytokines, inflammation, potassium channels, cell signaling, and hypoxia/ischemia. For hearing/hearing loss (Table 3), modulated genes include transporters, intermediary metabolism, and growth factors. For upstream regulators (Table 4), which are all transcription factors, the modulated genes regulate cell signaling, adhesion/migration/differentiation, inflammation/injury response, and, of course, other transcription factors. There is some overlap within the categories presented in the tables, as some of the genes function in multiple pathways, as analyzed using the pathway-finder function of the Ingenuity software. What is notable, based on the function of the genes listed in Table 2, is that the Alport stria is in an inflammatory state with considerable evidence of injury. A large percentage (approximately 80%) of these genes have never been characterized in the stria vascularis, and thus represent a novel 'footprint' for strial pathology. We validated several of these genes using standard qRT-PCR. As shown in Fig 3, qRT-PCR confirmed the findings observed by RNA-seq for most of these genes. One notable exception, TNF-α, was significantly up-regulated by RNA-seq but did not validate by qRT-PCR. This may be due to the fact that TaqMan probes span only a single exon, which might miss alternatively spliced transcripts.
RNA-seq has greater specificity than PCR-based methods, suggesting that the RNA-seq data are the more reliable data set [12].
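The two-fold consistency filter used to build Tables 1-4 can be expressed in a few lines of pandas; the gene symbols below are taken from the text, but the fold-change values are invented for illustration.

```python
import pandas as pd

# Hypothetical per-experiment fold changes (Alport/WT); not the study's data.
fc = pd.DataFrame(
    {"fc_1": [3.2, 0.4, 1.5, 2.6], "fc_2": [2.8, 0.3, 2.4, 2.1]},
    index=["Ccr5", "Cntf", "Kcnc2", "Icam1"],
)

# Keep only genes changed at least two-fold, in the same direction, in BOTH runs.
induced = fc[(fc.fc_1 >= 2.0) & (fc.fc_2 >= 2.0)]
suppressed = fc[(fc.fc_1 <= 0.5) & (fc.fc_2 <= 0.5)]
print(list(induced.index), list(suppressed.index))  # ['Ccr5', 'Icam1'] ['Cntf']
```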
It has been previously shown that resident macrophages are activated, and non-resident macrophages recruited, to the lateral wall and the stria vascularis in response to noise damage, ischemia, or mitochondrial damage [13,14]. To determine whether the inflammatory cytokines are due to activation and/or increased numbers of macrophages in the stria vascularis of Alport mice, we performed dual immunofluorescence labeling with anti-desmin antibodies (a marker for pericytes) and anti-F4/80 antibodies (a marker for macrophages [15]). The results in Fig 4A show that the numbers of macrophages in the Alport and wild-type mice appear similar. To validate this observation, macrophages were quantified in mid-modiolar cross sections of the stria from eight wild-type and eight Alport mice (Fig 4B). While the numbers trended higher in the Alport stria, the difference did not achieve significance.
To determine whether the RNA-seq data corroborated with protein expression in the stria, we performed immunohistochemical analysis of four proteins encoded by the genes marked in bold in Tables 2 and 3. Several genes were chosen on the basis that they have never been shown to be expressed in the stria vascularis and thus may reflect novel pathogenic mechanisms. The results in Fig 5 show that a good correlation exists between protein expression and mRNA expression for the four genes/proteins. Glial fibrillary acidic protein [concentrated at the luminal surface of the marginal cells, involved in cell-cell communication; [16]], neuronal pentraxin 1 [localizing to intermediate cells, involved in acute immune response; [17]], and reelin [partially encircling strial vessels, pericyte-like localization, involved in response to tissue injury; [18]] have never been shown to be expressed in the stria, and thus represent novel genes associated with strial pathology. ICAM1, a cell adhesion molecule, has been previously shown to be expressed in strial and spiral ligament vessels [19].
Discussion
Prior work showed that the SCBMs in Alport mice are thickened relative to age/strain-matched wild-type mice, and that the thickening is associated with an accumulation of extracellular matrix (ECM) [6][7][8]. As in the renal glomerulus, the mechanism underlying the SCBM thickening is mediated through activation of endothelin A receptors [8,21]. Blocking these receptors with small molecules prevents accumulation of ECM in the SCBMs and normalizes SCBM thickness ultrastructurally [8]. A cursory look at the resting stria from Alport mice demonstrated that the tissue was in a state of oxidative/metabolic stress, much like the stria from age/strain-matched wild-type mice following noise exposure [8,22]. Here we extend these findings to demonstrate the full spectrum of changes in gene expression in the Alport stria compared to age/strain-matched wild-type stria. To the best of our knowledge, this is the first application of RNA-seq comparing profiles in normal and diseased stria vascularis.
The results suggest that the stria vascularis is in an inflammatory state, with a large number of proinflammatory cytokines and chemokines up-regulated (for example Ccr5, Ccl9, Ccl7, Il17f, and TNF-α) and a number of molecules meant to protect from inflammatory damage down-regulated (including Ctnf, Ccl3, Il12a, and Smtnl1). A large number of genes are involved in the regulation of the pro-inflammatory NF-kappaB response (including Bach2 (induced), Card11 (induced), Tnf (induced), Tnfrsf4 (suppressed), and Tmem178 (suppressed)), clearly identifying activation of inflammatory responses in the strial compartment. The number of interstitial macrophages in Alport stria versus wild-type stria (7 weeks of age) did not vary significantly; however, the macrophages in Alport stria appear activated, with slightly higher numbers, larger cell bodies and numerous cell processes. Mid-modiolar cryosections of cochleae from wild-type (A) and Alport (B) mice were immunostained using anti-desmin (pericyte marker, red) and anti-F4/80 (macrophage marker, green) antibodies. Eight wild-type and 8 Alport mice were analyzed. The data were analyzed by a two-tailed Student's t-test but did not achieve significance (C). https://doi.org/10.1371/journal.pone.0237907.g004
That strial function is impacted in Alport mice is evidenced by modulation of a number of transporters and channels that show significant changes in gene expression compared to wild-type littermates. These include Slc2a5, Slc4a4, Slc6a19, Slc8a1, Slc17a8, Slc25a21, Slc26a4, and Spns3. Potassium channels and transport mediators were also affected. For the most part these were significantly down-regulated, including Kcna1, Kcnc1, Kcnip2, Mlc1, and Kcna5. There were exceptions, however, where up-regulation was observed, including Kcnc2 and Kcnip1. To corroborate the RNA-seq data, we analyzed protein expression using mid-modiolar cross sections of 7-week-old wild-type and Alport mice. Results show down-regulation in Alport compared to wild-type mice for glial fibrillary acidic protein, neuronal pentraxin 1, and reelin, and up-regulation in Alport compared to wild-type for Icam1. All are consistent with the alterations in mRNA expression. We include Kir4.1 as a control for intermediate cell staining [20]. The experiment shown is reflective of four independent experiments using independent groups of animals. https://doi.org/10.1371/journal.pone.0237907.g005
These channels have not been previously characterized in the stria vascularis, so the consequence of their up-regulation is not clear.
Notably, a number of transcription factors are modulated that regulate genes associated with injury and inflammation (Table 4), among them STAT1, NFkappaBIA, FOXa2 and JUNB. These four transcription factors are associated with inflammatory responses and likely contribute to the inflammatory state of the Alport stria vascularis. GATA2 is a transcription factor that regulates transcriptional modulators GATA1 and Klf1, both of which are highly upregulated in the Alport stria vascularis relative to wild-type, amplifying the transcriptional dysregulation in Alport stria.
As shown in Table 3, several genes are modulated that have been previously shown to be related to hearing loss. Brain-derived neurotrophic factor is markedly down-regulated in the Alport stria relative to wild-type stria. This growth factor has been shown to inhibit spiral ganglion degeneration and thus reduced secretion might compromise cochlear health. Slc26a4 encodes Pendrin, a well-characterized transporter required for regulation of fluid volume in the Scala media [23]. The absence of Pendrin results in deafness. Up-regulation of Pendrin in Alport stria might reflect a compensatory mechanism due to down-regulation of other transporters. FGF10 is required for expansion of the non-sensory regions of the cochlear duct during cochlear development [24]. Whether there is a functional consequence for FGF10 upregulation (5-fold) in the Alport stria is unclear.
In a recent publication, we showed that the Alport stria was under metabolic stress resulting in elevated expression of hypoxia-related factors [8]. In the current study we provide a more comprehensive profile of strial injury and demonstrate unequivocally that the stria vascularis in the Alport mouse model is in an inflammatory state. As noted above, many of the inflammatory pathways induced in the Alport stria converge at NF-kappaB activation. NF-kappaB has long been known to play a primary role in inflammatory diseases [25]. Therefore, it is of interest to point out that NF-kappaB is induced in the lateral wall of mice subjected to acoustic overstimulation [26]. Exposure of mice to loud noise produces oxidative stress and up-regulation of genes associated with inflammation [1,22,27]. The Senescence Accelerated Mouse-Prone 8 (SAMP8 mouse), which shows accelerated aging, shows signs of both inflammation and oxidative stress in the stria vascularis [28]. This likely precedes degenerative changes documented for the strial capillaries in the aging mouse [29]. Collectively, these studies suggest that inflammation associated with strial pathology may be quite common and thus may reflect a more general target to protect against major causes of hearing loss such as presbycusis and noise-induced damage.
Studies of human temporal bones documented splitting of the basilar membrane in the region of the pars pectinata and cellular infilling in the tunnel of Nuel. These investigators concluded that the SNHL associated with Alport syndrome might be associated with abnormal cochlear micromechanics [30,31]. The Merchant paper further concluded that the SCBMs were not thickened. Careful examination of the data in Merchant et al., however, makes it clear that the SCBMs on the outside of the pericytes are indeed thickened relative to TEM images of normal SCBMs, which is what is observed in the mouse. SCBMs are bilayered, with an internal basement membrane between the endothelial cell and the pericyte and a second basement membrane lining the outer layer of the pericyte [graphically shown in [8]]. The Merchant paper was only considering the endothelial basement membranes, which were indeed of normal thickness. Early studies of human Alport organ of Corti isolated and fixed immediately following death also noted thickening of the SCBMs [32]. Importantly, Moon et al [33] noted significant hearing loss in Alport patients with normal otoacoustic emissions, an observation that is wholly incompatible with, and essentially rules out, the theory of abnormal cochlear micromechanics. It is quite possible that the splitting of the basement membrane in the pars pectinata is either an artifact of tissue preparation, which can take up to a year for human temporal bones, or occurs long after hearing loss is established.
In summary, the RNA-seq studies presented here show that progressive thickening of the SCBMs in the Alport mouse model is associated with strial inflammation, oxidative stress, and dysregulation of ion channels and transporters. These changes likely account for the sensitivity of Alport mice to noise-induced hearing loss documented earlier [8]. It is important to remember that the Alport SCBMs have a change in the type IV collagen composition [6], which precipitates the progressive changes that culminate in reduced vascular permeability and strial inflammation. It will be of interest to determine the similarities/differences in the inflammatory response for models of presbycusis and noise-induced strial damage. If similar, they may be responsive to more generalized anti-inflammatory therapeutics, as has previously been proposed [13]. Since >80% of these genes have never been described in the stria vascularis, this work provides an important framework for validating therapies aiming to prevent Alport strial dysfunction as well as to define novel molecular pathways associated with strial dysfunction not only in Alport syndrome, but likely other disorders where SCBM thickening has been a noted feature. | 2020-08-23T13:05:59.535Z | 2020-08-21T00:00:00.000 | {
"year": 2020,
"sha1": "cfdb81b3fd8a6c3df59f59bdb5eada165c49c0d8",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0237907&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bc389dfbe9ee62213a62f596ef50d14b64c0f8e6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
258684889 | pes2o/s2orc | v3-fos-license | Blocking muscle wasting via deletion of the muscle-specific E3 ligase MuRF1 impedes pancreatic tumor growth
Cancer-induced muscle wasting reduces quality of life, complicates or precludes cancer treatments, and predicts early mortality. Herein, we investigate the requirement of the muscle-specific E3 ubiquitin ligase, MuRF1, for muscle wasting induced by pancreatic cancer. Murine pancreatic cancer (KPC) cells, or saline, were injected into the pancreas of WT and MuRF1-/- mice, and tissues analyzed throughout tumor progression. KPC tumors induce progressive wasting of skeletal muscle and systemic metabolic reprogramming in WT mice, but not MuRF1-/- mice. KPC tumors from MuRF1-/- mice also grow slower, and show an accumulation of metabolites normally depleted by rapidly growing tumors. Mechanistically, MuRF1 is necessary for the KPC-induced increases in cytoskeletal and muscle contractile protein ubiquitination, and for the depression of proteins that support protein synthesis. Together, these data demonstrate that MuRF1 is required for KPC-induced skeletal muscle wasting and that its deletion reprograms the systemic and tumor metabolome and delays tumor growth.
Cachexia is a debilitating disease that results in decreased quality of life in PDAC patients and in many cases is the cause of mortality rather than the cancer itself. Research in this field has not led to any definite markers of cachexia; however, the increased muscle loss has been attributed to increased expression of the muscle-specific E3 ligases MuRF1 and Atrogin1. In this study, "Blocking muscle wasting via deletion of the muscle specific E3 ubiquitin ligase MuRF1 impedes pancreatic tumor growth", the authors have solidified the role of MuRF1 in tumor-induced cachexia and have shown how MuRF1 knockout mice could circumvent muscle and fat loss otherwise induced by tumor burden. Slow tumor growth and increased overall survival in knockout mice and the proteomic and metabolomic studies confirm the role of MuRF1 in cancer-induced cachexia. Proper controls have been used and different pathways studied to complement results from each pathway. This study will add to the field of cachexia research and pave the way to considering MuRF1 as a drug target for the treatment of cachexia.
Reviewer #2 (Remarks to the Author): The manuscript by Neyroud et al investigated the requirement of the muscle-specific E3 ubiquitin ligase, MuRF1, for muscle wasting induced by pancreatic cancer. They found that KPC tumors induced progressive wasting of skeletal muscle and systemic metabolic reprogramming in WT mice, but not MuRF1-/-mice. KPC tumors from MuRF1-/-mice also grew slower, and showed an accumulation of metabolites normally depleted by rapidly growing tumors. They validated in female mice that MuRF1 deletion also protects against KPC T42D-induced wasting, slows tumor growth and extends survival. Overall, this study is interesting and novel. It provides comprehensive analysis of the metabolome and ubiquitination-proteasome alteration in WT and MuRF1-/-mice bearing tumors. I have the following comments.
1. Besides MuRF1, several other proteins are also critical for muscle wasting, such as Atrogin-1 and UBR2. Why would they focus on MuRF1? Please clarify. 2. For the results of metabolome analysis, as shown in Table 1-2, they compared WT and MuRF1-/- mice bearing tumors. Did they have any chance to examine the metabolites of normal pancreas, muscle and serum in Sham mice? 3. They validated their findings in female mice in Fig. 7. Are female mice showing similar patterns of metabolome and ubiquitination-proteasome alteration?

Reviewer #3 (Remarks to the Author): In this manuscript, Neyroud et al investigated the role of MuRF1, a muscle-specific E3 ubiquitin ligase, impact of muscle wasting induce by pancreatic cancer. Using an orthotopic model of pancreatic cancer (either WT or KO MuRF1 transgenic mouse models), the authors demonstrated that MuRF1 is necessary for the increases in cytoskeletal and muscle contractile protein ubiquitination as well as for the inhibition of pathways that support protein synthesis. In addition, deletion of Murf1 can reprogram the systemic and tumor metabolism, delaying tumor growth. This is an interesting and well-organized study that unveils the function of MuRf1 in controlling skeletal muscle wasting in pancreatic cancer, establishing that directly interfering with muscle wasting can impact the tumor metabolism. The only major weakness stands in the fact that having in hands such a remarkable and robust effect on systemic and tumor metabolism, one would have expected to see a further validation of the effects induced by the metabolites that showed altered abundance across tissues, as reported in Figure 6. The authors may include additional experiments validating how MuRF1 loss deprives tumor cells from key energy metabolites and reduces tumor growth and expanding the discussion session on this topic, in order to implement the potential clinical impact of their findings
Reviewer #1
Comment: "Cachexia is a debilitating disease that results in decreased quality of life in PDAC patients and in many case is the cause of mortality than the cancer itself. research in this field has not lead to any definite markers of cachexia, however the increased muscle loss has been attributed to increased expression of muscle specific E3 ligases MURF1 and Atrogin1. in this study "Blocking muscle wasting via deletion of the muscle specific E3 ubiquitin ligase MuRF1 impedes pancreatic tumor growth" authors have solidified the role of MURF 1 in tumor induced cachexia and have showed how murf1 knockout mice could circumvent muscle and fat loss otherwise induced by tumor burden. slow tumor growth and increased overall survival in knockout mice and the proteomic and metabolomic studies confirm the role of MURF 1 in cancer induced cachexia. proper controls have been used and different pathways studied to compliment results from each pathway. this study will add to the field of cachexia research and pave way to considering MURF1 as a drug target for treatment or cachexia."
Response:
We would like to thank this reviewer for her/his review of our manuscript.
Reviewer #2
Comment 1: "The manuscript by Neyroud et al investigated the requirement of the musclespecific E3 ubiquitin ligase, MuRF1, for muscle wasting induced by pancreatic cancer. They found that KPC tumors induced progressive wasting of skeletal muscle and systemic metabolic reprogramming in WT mice, but not MuRF1-/-mice. KPC tumors from MuRF1-/mice also grew slower, and showed an accumulation of metabolites normally depleted by rapidly growing tumors. They validated in female mice that MuRF1 deletion also protects against KPC T42D-induced wasting, slows tumor growth and extends survival. Overall, this study is interesting and novel. It provides comprehensive analysis of the metabolome and ubiquitination-proteasome alteration in WT and MuRF1-/-mice bearing tumors. I have the following comments." Response 1: We would like to thank this reviewer for her/his review of our manuscript.
Comment 2: "Besides MuRF1, several other proteins are also critical for muscle wasting, such as Atrogin-1 and UBR2. Why would they focus on MuRF1? Please clarify." Response 2: We fully acknowledge that there are additional ubiquitin ligases in skeletal muscle that play important roles in muscle wasting, including Ubr2, which has also been identified as a key player in cancer-induced muscle loss (Gao et al., PNAS 2022). Nonetheless, MuRF1 is consistently elevated in skeletal muscles of mice and people with cancer who exhibit cachexia (ref), providing strong rationale to investigate the role of MuRF1 in cancer-induced muscle loss. Conducting cancer cachexia studies utilizing mice lacking other ubiquitin ligases in skeletal muscle, such as atrogin-1 and Ubr2, and conducting similar omics analyses (proteomics, ubiquitinomics, metabolomics) as performed in the current study, was simply beyond the scope of the current study.
To acknowledge the important role of the E3 ligase, Ubr2, in muscle wasting induced by cancer, we have now added the following sentences within the introduction (l. 63-67), and within the results/discussion (l. 225-227) to acknowledge the important findings from this study which revealed Ubr2 as a key E3 ligase that ubiquitylates MYH4 and MYH1 proteins in the context of cancer, leading to loss of muscle mass and force production.
-L. 63-67: "In the context of muscle wasting, various E3 ligases have been shown to be involved in contractile protein degradation (see [17][18][19] for review), including the muscle-specific E3 ligases -F-Box Protein 32 (Fbxo32/atrogin-1) 17,20-22 and muscle RING finger protein 1 (MuRF1/Trim63) 17,20,23-28 -as well as the more ubiquitously expressed E3 ligase, ubiquitin protein ligase E3 component N-Recognin 2 (Ubr2) 21,29 ." -L. 225-227: "Notably, a recent study demonstrated that MYH1 and MYH4 proteins are also ubiquitinated by another E3 ligase, UBR2, whose muscle-specific deletion prevented fast-twitch muscle wasting in response to tumor burden 29 ." Comment 3: "For the results of metabolome analysis, as shown in Table 1-2, they compared WT and MuRF1-/-mice bearing tumors. Did they have any chance to examine the metabolites of normal pancreas, muscle and serum in Sham mice?" Response 3: We have not examined normal pancreas metabolome in this study as our main goal was to determine if the slowed tumor growth observed in the absence of MuRF1 (i.e. a skeletal muscle E3-ubiquitin ligase) was mediated by an alteration in tumor metabolome. With respect to muscle and serum, we examined their metabolome in Sham mice. The data presented in Table 2 and 3 are indeed fold changes vs. Sham. The absolute data for the Sham mice (as well as tumor-bearing mice) can be found in Supplementary file 5, 7 and 8).
Comment 4: "They validated their findings in female mice in Fig. 7. Are female mice showing similar patterns of metabolome and ubiquitination-proteasome alteration?" Response 4: While performing metabolomic or ubiquitinome-proteomic analyses in tissues from both male and female mice could be insightful, both sexes showed comparable effects of MuRF1 deletion on the major outcomes, including protection against tumor-induced muscle loss and slowed tumor growth. and. Because sex-differences were not apparent in the major cancer and cachexia outcomes, there was not strong justification for duplicating the omics studies in both sexes, which would double the cost. We therefore limited the omics studies to a single sex. "Although omics analyses were not conducted on tissues from these mice, these findings in female mice, using a different KPC cell line and conducted in a different lab, lend strong support for the role of MuRF1 in mediating cancer-associated muscle wasting and tumor growth." (l. 348-350).
Comment 5: "In the figure legend of Fig. 1, the authors stated that "MuRF1 (Trim63) is increased in skeletal muscle of tumor-bearing hosts.". Although previous studies have demonstrated it, in Fig. 1, they did not examine MuRF1 expression in skeletal muscle of tumorbearing hosts. Please rephrase it." Response 5: We thank the reviewer for catching this oversight. We have modified Fig. 1 title as follow: "MuRF1 (Trim63) deletion protects against muscle wasting, skeletal muscle fiber atrophy and slows tumor growth." Comment 7: "Line 160-168, font types are inconsistent." Response 7: We thank the reviewer for his/her thorough review of the manuscript and catching this inconsistency. This font inconsistency has been resolved.
Reviewer #3
Comment: "In this manuscript, Neyroud et al investigated the role of MuRF1, a muscle-specific E3 ubiquitin ligase, impact of muscle wasting induce by pancreatic cancer. Using an orthotopic model of pancreatic cancer (either WT or KO MuRF1 transgenic mouse models), the authors demonstrated that MuRF1 is necessary for the increases in cytoskeletal and muscle contractile protein ubiquitination as well as for the inhibition of pathways that support protein synthesis. In addition, deletion of Murf1 can reprogram the systemic and tumor metabolism, delaying tumor growth. This is an interesting and well-organized study that unveils the function of MuRf1 in controlling skeletal muscle wasting in pancreatic cancer, establishing that directly interfering with muscle wasting can impact the tumor metabolism. The only major weakness stands in the fact that having in hands such a remarkable and robust effect on systemic and tumor metabolism, one would have expected to see a further validation of the effects induced by the metabolites that showed altered abundance across tissues, as reported in Figure 6. The authors may include additional experiments validating how MuRF1 loss deprives tumor cells from key energy metabolites and reduces tumor growth and expanding the discussion session on this topic, in order to implement the potential clinical impact of their findings." Response: We would like to thank this reviewer for her/his review of our manuscript. We agree with the reviewer's assessment that there is a remarkable and robust impact of MuRF1 KO on systemic and tumor metabolism. However, studies investigating the direct effects of specific metabolites regulated by MuRF1 on in vivo tumor metabolism, tumor growth and cachexia are not trivial, and would require extensive studies beyond the scope of the current study.
However, we have now incorporated a sentence to acknowledge that additional validation studies are needed to investigate the key metabolites regulated by MuRF1 on in vivo metabolism, tumor growth and cachexia: "These results pave the way for future validation studies investigating the direct effects of specific MuRF1-regulated metabolites on in vivo tumor growth." (l. 349-351). | 2023-05-15T06:16:17.919Z | 2023-05-13T00:00:00.000 | {
"year": 2023,
"sha1": "eee02f980bbee852de3797a3fa8c22875bdbc162",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d5499b3f73e0cba43e48989938b200eb27608dbc",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55971651 | pes2o/s2orc | v3-fos-license | LSQ14efd: observations of the cooling of a shock break-out event in a type Ic Supernova
We present the photometric and spectroscopic evolution of the type Ic supernova LSQ14efd, discovered by the La Silla QUEST survey and followed by PESSTO. LSQ14efd was discovered a few days after explosion and the observations cover up to ~100 days. The early photometric points show the signature of the cooling of the shock break-out event experienced by the progenitor at the time of the supernova explosion, one of the first such detections for a type Ic supernova. A comparison with type Ic supernova spectra shows that LSQ14efd is quite similar to the type Ic SN 2004aw. These two supernovae have kinetic energies that are intermediate between standard Ic explosions and the most energetic explosions known (e.g. SN 1998bw). We computed an analytical model for the light-curve peak and estimated an ejecta mass of 6.3 +/- 0.5 Msun, a synthesized nickel mass of 0.25 Msun and a kinetic energy of Ekin = 5.6 +/- 0.5 x 10^51 erg. No connection between LSQ14efd and a GRB event could be established. However, we point out that the supernova shows some spectroscopic similarities with the peculiar SN-Ia 1999ac and the SN-Iax SN 2008A. A core-collapse origin is most probable considering the spectroscopic and photometric evolution and the detection of the cooling of the shock break-out.
INTRODUCTION
Supernovae (SNe) without hydrogen lines in their spectra are classified into two main types: SNe-Ia and SNe-Ib/c (see Filippenko 1997 for a review). SNe-Ia originate from thermonuclear explosions of a carbon-oxygen white dwarf (CO-WD) reaching the Chandrasekhar mass. Two main channels for the origin of type Ia SNe have been proposed: a single degenerate scenario where the CO-WD in a binary system accretes matter from a companion star (Wheeler & Hansen 1971; Whelan & Iben 1973), and a double degenerate scenario for which the SN is the result of a merging of two close WDs after orbital shrinking (Tutukov & Yungelson 1979; Iben & Tutukov 1984; Webbink 1984). A third channel has been recently proposed by Katz & Dong (2012) for which, in a WD triple system, the WDs approach each other and the collision is likely to detonate the WDs, leading to a type Ia SN. However, the detailed physics of the explosion is poorly understood and several models have been presented, from the supersonic detonation to the subsonic deflagration (Hillebrandt & Niemeyer 2000). In recent years, several peculiar SNe-Ia were discovered (e.g. Li et al. 2001; Valenti et al. 2014), suggesting the existence of a variety of explosion mechanisms and/or progenitor systems (e.g. Mannucci et al. 2006). Very bright objects, with a luminosity ∼ 40% brighter than normal SNe-Ia, have been observed and are considered to be super-Chandrasekhar explosions (Howell et al. 2006; Scalzo et al. 2010; Silverman et al. 2011; Taubenberger et al. 2011). At the other extreme, very faint events show unusual observational signatures (e.g., Turatto et al. 1996; Foley et al. 2009; Perets et al. 2010; Kasliwal et al. 2010; Sullivan et al. 2011). A new explosion model was proposed for a particular subclass of these sub-luminous events that exhibit similarities to SN 2002cx (Li et al. 2003), called type Iax SNe (e.g. 2005hk, Phillips et al. 2007; 2008A, Foley et al. 2013), which originate from the deflagration of a CO-WD that accretes matter from a companion He star. Type Ib/c SNe originate from the gravitational collapse of a massive star for which the iron core cannot be supported by any further nuclear fusion reaction, or by electron degenerate pressure, hence collapsing into a neutron star or a black hole. They can be divided into two classes: SNe-Ib, which show He lines in their spectra, and SNe-Ic, which do not. Two main scenarios are considered for the progenitors of type Ib/c SNe (see reviews by Woosley & Bloom 2006 and Smartt 2009): a single massive Wolf-Rayet (WR) star which has lost its hydrogen envelope, before the collapse of the core, through stellar winds, or a binary system (see Panagia & Laidler 1991 for an early suggestion) where the progenitor star loses its H (and He, in the case of SNe-Ic) envelope through tidal stripping from the companion star. The measured masses of the ejecta seem to favour the majority being relatively lower mass binary stars, rather than very massive single WR stars (Eldridge et al. 2013; Lyman et al. 2016; Cano et al. 2014), and the data for iPTF13bvn seem to be more consistent with a binary system (Bersten et al. 2014; Fremling et al. 2014; Eldridge et al. 2015).
Unlike SNe-IIP and IIb, the progenitors have not been commonly identified in pre-explosion images (see Eldridge et al. 2013). One probable detection exists for the progenitor of a type Ib SN, namely iPTF13bvn by Cao et al. (2013), which has been studied further by Groh et al. (2013), Fremling et al. (2014), Bersten et al. (2014) and Eldridge et al. (2015).
Almost two decades of observations have allowed us to divide the SN-Ic population into two sub-classes: standard SNe-Ic, characterized by kinetic energies of E_k ∼ 10^51 erg, and broad-line (BL) SNe-Ic, with ejecta velocities of order ∼ 0.1c, which therefore imply significantly higher kinetic energies (E_k ∼ 10^52 erg; e.g. Nakamura et al. 2001). Some high-energy SNe-Ic-BL have been convincingly linked to gamma-ray bursts (GRB; see Kovacevic et al. 2014 for an updated census of GRB-SNe), while the majority of SNe-Ic are not associated with GRBs (e.g. the ratio GRB/SN-Ibc is < 3%, Guetta & Della Valle 2007). Some SNe-Ic, such as SN 2004aw (Taubenberger et al. 2006) and SN 2003jd (Valenti et al. 2008a), show physical properties in between those of standard Ic events and SNe-Ic-BL, therefore suggesting the existence of a wide diversity in SNe-Ic in terms of expansion velocity of the ejecta, peak luminosity, and kinetic energy (Elmhamdi et al. 2006; Modjaz et al. 2015). In this scenario, it is not clear if the broad variety of observed SNe-Ic and SNe-Ic-BL is due to different sub-classes of SNe-Ic originating from different classes of progenitors, or if it is representative of an existing continuum of properties among the different SN-Ic types (Della Valle 2011; Modjaz et al. 2014; Prentice et al. 2016). The large fraction of peculiar objects, for both SN-Ia and Ic classes, which are now being found has led to cases of ambiguity in determining the physical origin of these hydrogen- and helium-poor objects. In many cases the physical origin in a thermonuclear or core-collapse explosion is debated, e.g. SN 2002bj (Poznanski 2010), SN 2004cs (Rajala et al. 2004, 2005; Leaman et al. 2011) and SN 2006P (Serduke et al. 2006; Leaman et al. 2011; Li et al. 2011). Usually the photometric analysis does not solve the ambiguity that can arise from the early spectroscopy; for example, Cappellaro et al. (2015) found a ∼ 40% difference in the classifications of SNe-Ib/c using PSNID within the SUDARE survey.
Nebular spectra may help in revealing nucleosynthetic products in the interior part of the star, which are very different in the two mechanisms, but it is possible to observe them only for the brightest and nearest sources.
In some cases, early photometric observations may be able to detect the cooling of the shock break-out, which can constrain the progenitor system. One of the main signatures of core-collapse SNe (CC-SNe) is represented by the early emission of X-ray and/or ultraviolet radiation which traces the break-out of the SN shock-wave through the stellar photosphere. After the envelope of the star has been shock heated it starts cooling and it creates an early peak in the optical passband. This event represents the initial stages of a SN event and it typically shows a very short duration, from minutes to hours, which makes the detection difficult and has resulted in only a few observed cases of the cooling of the shock break-out (e.g. SN 1987A, Arnett et al. 1989; SN 1993J, Lewis et al. 1994; SN 2006aj, Campana et al. 2006; SN 2008D, Mazzali et al. 2008, Soderberg et al. 2008; and SN 2011dh, Arcavi et al. 2011). This early emission can also be interpreted as due to an extended envelope or to outwardly mixed 56Ni, as investigated for the peculiar type Ib/c SN 2013ge (Drout et al. 2016).
The cooling of the shock break-out in a type Ia SN explosion in a WD is too dim and fast to be detectable for extragalactic events (Nakar & Sari 2012;Rabinak et al. 2012) but an early UV excess in type Ia SNe is predicted for certain binary progenitor systems. Kasen (2010) shows that the collision between the SN ejecta and its companion star should produce detectable radiative diffusion from deeper layers of shock-heated ejecta causing a longer lasting optical/UV emission, which exceeds the radioactively powered luminosity of the SN for the first few days after the explosion.
Here we present an extensive data set for LSQ14efd, which was discovered by the La Silla QUEST survey (LSQ; Baltay et al. 2013) and monitored by the Public ESO Spectroscopic Survey of Transient Objects (PESSTO; Smartt et al. 2015; www.pessto.org). Our analysis suggests that LSQ14efd is most similar to SN 2004aw, which was originally classified as a type Ia SN and re-classified as a type Ic after a long-term follow-up. We show a detection of the cooling envelope emission after the shock break-out. The photometric and spectroscopic analysis of LSQ14efd shows this is a SN-Ic. However, LSQ14efd shows some ambiguities in its spectroscopic classification, with similarities to SNe Iax such as SN 2008A. LSQ14efd is also an interesting object because it shows intermediate properties between "standard" and very energetic SNe-Ic events.
This paper is organized as follows: in Section 2 we present the discovery and the classification of LSQ14efd and we discuss the properties of the host galaxy, the distance and the extinction; in Section 3 we present the optical photometric evolution of LSQ14efd and compare its colour evolution and bolometric light curve with those of other Type Ic and Ia SNe. In Section 4 we present the optical spectroscopic observations and comparison with other SNe. In Section 5 we model the light curve peak to estimate the main physical parameters of the explosion such as ejected mass, kinetic energy and nickel mass. In Section 6 we summarize our discussion and present our conclusions.
DISCOVERY AND HOST GALAXY
LSQ14efd was discovered in an anonymous galaxy on 2014 August 17 UT (MJD = 56886.81) at the coordinates R.A. = 03h 35m 38.74s and Dec. = −58° 52′ 38.3″. After the discovery, all images from the LSQ archive were checked and we found the SN was already visible on the image of August 13, but not in images from August 09, with both images having the same depth. We thus consider the explosion date as August 11, with an uncertainty of 2 days. On 2014 August 18, Tartaglia et al. (2014a) classified LSQ14efd as a Type II SN with an unknown phase, as part of PESSTO. Due to the poor signal to noise (S/N) of the first spectrum, new observations were performed the day after and LSQ14efd was reclassified as a Type I SN around maximum (Tartaglia et al. 2014b). They also reported that the best match of the spectrum of this transient was obtained with the peculiar Type Ic SN 2003jd. LSQ14efd is located in an outer region of the host galaxy (see Fig 1). The distance to the galaxy is not available in the literature. The 2D spectroscopic frames show the presence of the host galaxy spectrum, in which it was possible to identify the Hα emission at a redshift of z = 0.0672 ± 0.0001. Assuming H0 = 70 km s^-1 Mpc^-1, we then calculate a distance modulus of µ = 37.35 ± 0.03 mag. The Galactic reddening towards LSQ14efd is estimated from Schlegel et al. (1998) to be E(B−V) = 0.0376 ± 0.0015 mag. We considered the internal reddening of the host galaxy as negligible since there is no clear evidence of Na ID absorption in the spectra nor significant reddening of the spectrum continuum. Assuming a Cardelli et al. (1989) reddening law (R_V = 3.1), we estimate the total V-band extinction towards LSQ14efd to be A_V = 0.12 mag.
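As a cross-check of the quoted numbers, the arithmetic can be sketched in a few lines. The snippet below assumes a pure Hubble-flow distance with H0 = 70 km s^-1 Mpc^-1 and the Cardelli law with R_V = 3.1; the slightly smaller distance modulus it returns compared with the value adopted in the text is expected, since the simple d = cz/H0 approximation ignores cosmological corrections.

```python
# Illustrative sketch of the distance and extinction arithmetic quoted above.
import numpy as np

c_km_s = 299792.458   # speed of light, km/s
H0 = 70.0             # km/s/Mpc
z = 0.0672
E_BV = 0.0376
R_V = 3.1

d_Mpc = c_km_s * z / H0                 # low-z approximation d ~ cz/H0
mu = 5.0 * np.log10(d_Mpc * 1e6) - 5.0  # distance modulus
A_V = R_V * E_BV                        # total V-band extinction

print(f"d ~ {d_Mpc:.0f} Mpc, mu ~ {mu:.2f} mag, A_V ~ {A_V:.2f} mag")
# -> d ~ 288 Mpc, mu ~ 37.30 mag, A_V ~ 0.12 mag
```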
Search for an associated GRB
We have investigated the possibility that LSQ14efd could be related to a GRB event. We have examined the Fermi Gamma-ray Burst Monitor (GBM) and the Swift catalogs in a period of time < 20 days from the occurrence of the SN event. The time interval was chosen since it corresponds to the threshold of the 95% confidence level for the association between GRBs and type Ib/c SNe (Kovacevic et al. 2014). In this time range, we found 3 detections, but none of them was spatially coincident with the SN position within the error box of the GRB detection. Hence, no association can be found between LSQ14efd and a GRB event.
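The coincidence test described above reduces to a simple temporal and spatial cut. The sketch below shows one way to express it with astropy; the GRB entries are placeholders for illustration, not rows from the actual Fermi/Swift catalogs, and the adopted explosion epoch is approximate.

```python
# Hedged sketch of a GRB/SN temporal-spatial coincidence check (placeholder GRBs).
from astropy.coordinates import SkyCoord
import astropy.units as u

sn = SkyCoord("03h35m38.74s", "-58d52m38.3s", frame="icrs")
sn_mjd = 56880.0  # approximate explosion epoch (2014 August 11)

candidate_grbs = [  # (name, ra_deg, dec_deg, mjd, error_radius_deg) -- placeholders
    ("GRB-A", 52.0, -30.0, 56875.0, 3.0),
    ("GRB-B", 55.0, -59.5, 56899.0, 1.0),
]

for name, ra, dec, mjd, err in candidate_grbs:
    grb = SkyCoord(ra * u.deg, dec * u.deg, frame="icrs")
    dt = abs(mjd - sn_mjd)                  # days between GRB and SN explosion
    sep = sn.separation(grb).deg            # angular separation in degrees
    coincident = (dt < 20.0) and (sep < err)
    print(f"{name}: dt = {dt:.1f} d, sep = {sep:.1f} deg -> coincident = {coincident}")
```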
Data sample and reduction
A photometric monitoring campaign for LSQ14efd, at optical wavelengths, was conducted over a period of 100 days post-discovery, covering 40 epochs, using multiple observing facilities.
g'r'i' images were collected with: the 1m from LCOGT (Siding Spring, Australia) equipped with the SBIG Camera (BV, 10 epochs); the 1m from LCOGT (South African Astronomical Observatory, South Africa) equipped with the SBIG Camera (BV, 7 epochs); and the 1m from LCOGT (Cerro Tololo, Chile) equipped with the Sinistro Camera (BV, 3 epochs). A summary of the telescopes and instrument characteristics is presented in Table 1.
Data pre-reduction followed the standard procedures of bias, overscan, flat-field corrections and trimming in the IRAF environment. Johnson B, V and g'r'i' calibrated magnitudes of 45 reference stars were obtained through the AAVSO Photometric All-Sky Survey (APASS) (Munari et al. 2014). The internal accuracy of the APASS photometry, expressed as the error of the mean of data obtained and separately calibrated over a median of four distinct observing epochs and distributed between 2009 and 2013, is 0.013, 0.012, 0.012, 0.014, and 0.021 mag for the B, V, g', r', i' bands, respectively. To our knowledge, no other star catalogues were available for the field of the SN. Johnson-Cousins R, I photometry was estimated by transforming the g'r'i' magnitudes through the Lupton et al. (2005) transformation equations.
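For reference, the Lupton (2005) r, i to R, I conversion can be written as a two-line function. The coefficients below are the values commonly tabulated for those relations; they should be treated as indicative and verified against the original reference before being applied to real photometry.

```python
# Sketch of a Sloan r,i -> Johnson-Cousins R,I conversion (Lupton 2005 style).
# Coefficients are the commonly quoted values and are indicative only.
def sloan_ri_to_RI(r: float, i: float) -> tuple[float, float]:
    R = r - 0.2936 * (r - i) - 0.1439
    I = r - 1.2444 * (r - i) - 0.3820
    return R, I

R, I = sloan_ri_to_RI(r=18.87, i=18.96)  # illustrative input values
print(f"R ~ {R:.2f} mag, I ~ {I:.2f} mag")
```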
The QUBA pipeline (Valenti et al. 2011) was used for most of the photometric measurements. This pipeline performs a point-spread-function (PSF) fitting photometry, based on DAOPHOT (Stetson 1987), on both the SN and the sorted reference stars. QUBA allows one to model the background with a polynomial surface, to treat the cases in which the SN is embedded in a spatially varying background. After some empirical tests, we found that a 4th-order polynomial model gives good results for the background subtraction, considering the high S/N of the SN in the images.
Photometry on LSQ data, for the pre-discovery epochs, was performed with the stand-alone version of the DAOPHOTIV/ALLSTAR software, which allows cross-correlation of the measurements on the individual images, and produces a light curve with respect to a reference epoch. This approach is particularly useful when dealing with point sources with a low S/N ratio, as in the case of the first and last epochs of the LSQ data of LSQ14efd. The LSQ filter is a custom broadband filter centered on 5534Å, ranging from 4000 to 7000Å (Baltay et al. 2012). It is customary to transform LSQ instrumental magnitudes to Johnson V magnitudes, by computing a (V, B − V ) colour equation, where the coefficients are estimated on selected reference stars in the field, for which standard B, V magnitudes are available. In our case, instrumental LSQ magnitudes were transformed to standard Johnson V magnitudes taking advantage of the B − V colour curve, estimated with the other telescopes.
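A minimal sketch of this colour-term calibration is given below: a zero point and a colour coefficient are fitted on reference stars and then applied to the SN using its B − V colour. The reference-star numbers are hypothetical and only illustrate the procedure.

```python
# Sketch of a (V, B-V) colour-equation calibration: V_std - m_instr = zp + k*(B-V).
import numpy as np

def fit_colour_equation(m_instr, V_std, BV_std):
    """Least-squares fit of V_std - m_instr = zp + k*(B-V) on reference stars."""
    A = np.vstack([np.ones_like(BV_std), BV_std]).T
    zp, k = np.linalg.lstsq(A, V_std - m_instr, rcond=None)[0]
    return zp, k

# Hypothetical reference-star values, for illustration only:
zp, k = fit_colour_equation(np.array([15.10, 16.30, 17.00, 15.80]),
                            np.array([15.62, 16.92, 17.41, 16.28]),
                            np.array([0.50, 0.90, 0.40, 0.70]))

def lsq_to_V(m_instr_sn, BV_sn):
    """Apply the fitted colour equation to the SN, given its B-V colour."""
    return m_instr_sn + zp + k * BV_sn

print(f"zp = {zp:.2f}, colour term k = {k:.2f}, "
      f"V(SN) = {lsq_to_V(18.40, 0.60):.2f} mag")
```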
At all epochs, we estimated K−correction factors from our spectroscopy and we found out that they are negligible, compared with the photometric uncertainty, therefore none were applied.
The full photometric measurements of the SN in the BV RI and g ′ r ′ i ′ are listed in Table 2. The Johnson-Cousins BV RI photometry is reported in Vega magnitudes, while the g ′ r ′ i ′ photometry is reported in the AB magnitude system.
Early evolution
The early points in the V band show an initial decline before the rise to the peak. Fig. 2 shows a 2nd-order polynomial fit of the early stage of the V-band light curve, without taking into account the first point. We note that the first point deviates from the expected rise at the 1σ level. A pre-explosion limit has been calculated from the image of August 09. The estimated limit is 20.6 ± 0.2 mag and it is represented by the arrow in Fig. 2.
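As an illustration of the test shown in Fig. 2, the snippet below fits a 2nd-order polynomial to the rise excluding the first point and measures how far that first point sits from the fit. The epochs, magnitudes and errors are placeholders, not the actual LSQ14efd photometry.

```python
# Sketch of the 'first point below the rise' check (placeholder data).
import numpy as np

t = np.array([-12.0, -10.5, -9.0, -7.0, -5.0])     # days from V-maximum
mV = np.array([20.30, 20.05, 19.70, 19.40, 19.18])  # apparent V magnitudes
mV_err = np.full_like(mV, 0.10)

coeff = np.polyfit(t[1:], mV[1:], deg=2)            # fit the rise without the first point
residual = mV[0] - np.polyval(coeff, t[0])          # positive -> fainter than the fit
print(f"first-point residual = {residual:+.2f} mag "
      f"({abs(residual) / mV_err[0]:.1f} sigma)")
```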
We interpret this decline as the cooling that occurs soon after the shock break-out event, similar to that observed in other CC-SNe (e.g. SN 1993J, Lewis et al. 1994; SN 2006aj, Campana et al. 2006; SN 2008D, Soderberg et al. 2008; SN 2011dh, Arcavi et al. 2011; iPTF15dtg, Taddia et al. 2016), as shown in Fig. 3. A direct comparison of the decline observed in LSQ14efd with CC SNe shows good agreement with the behaviour observed in previous cases where very early photometry exists; in particular there is good relative agreement with SN 2008D, and similarity also to SNe 1993J and 2006aj (see left panel of Fig. 3).
An early UV emission pulse has also been predicted for type Ia SNe which originate from an interacting binary system (Pakmor et al. 2008; Kasen 2010), and it has been observed recently (e.g. SN 2012cg, Silverman et al. 2012, Marion et al. 2015; and iPTF14atg, Cao et al. 2015).
We do not have UV data to directly compare the UV excess, so we investigated the B − V colour evolution, which is compared in Fig. 3 to further investigate the early-phase UV excess. We notice that the B − V colour of iPTF14atg shows a pre-maximum value between −1.5 and −1 at around 10 days before maximum, and it then sharply increases above 0 around maximum. SN 2012cg shows a different B − V colour evolution, being constant around 0 until maximum and smoothly reaching a value of ∼ 1 at 20 days. The LSQ14efd B − V evolution shows a different behaviour with respect to iPTF14atg, while it shows a qualitatively similar trend with respect to SN 2012cg but redder, since it differs by ∼ 0.5 mag at all epochs. Instead, the B − V colour evolution of SN 2008D in pre-maximum phases is similar to that of LSQ14efd.

Figure 2. Fit of the light curve in the apparent V-band magnitude of LSQ14efd. Epochs refer to the V-maximum. The pre-explosion limit is represented by the arrow.
Late evolution
The photometric evolution of LSQ14efd in the BVRI and in the g'r'i' filter systems is shown in Fig. 4. The SN was discovered 14 days before maximum in the B band. The epoch of the B-maximum (MJD = 56895.72) was obtained with a polynomial fit performed using the first 20 days of data. All subsequent epochs referred to in this work, unless specified, will refer to this date as the epoch zero.
The B-band light curve reaches a peak magnitude of mB = 19.47 mag and has a decline rate of 5.61 ± 0.78 mag per 100 days in the interval 5-30 days past B-maximum. This interval was also used to measure the decline rate in all other bands. The V-band light curve reaches its peak ∼ 5 days later, with a magnitude of mV = 18.97 mag and a decline rate of 4.48 ± 0.45 mag per 100 days. In the R band, the light curve peaks around 7 days with a value of mR = 18.84 mag and a decline rate of 3.28 ± 0.33 mag per 100 days. In the I band, the light curve peaks around 8 days with a magnitude of mI = 18.66 mag and a decline rate of 2.28 ± 0.23 mag per 100 days. In g' we see the peak after around 5 days with a magnitude of m_g' = 19.11 mag and a decline rate of 5.96 ± 0.72 mag per 100 days. The r' band shows a peak after ∼ 5 days with a magnitude of m_r' = 18.87 mag and a decline rate of 2.78 ± 0.28 mag per 100 days. The peak in the i' band appears after around 5 days with a magnitude of m_i' = 18.96 mag; the decline rate is 1.64 ± 0.16 mag per 100 days. We also note that the blue bands show narrower light curves compared to the red bands.
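These decline rates amount to a linear fit over the 5-30 day window, expressed in mag per 100 days. The sketch below shows the calculation with placeholder photometry, not the actual measurements quoted above.

```python
# Sketch of a decline-rate measurement (mag per 100 days) over the 5-30 d window.
import numpy as np

phase = np.array([6.0, 10.0, 15.0, 21.0, 27.0])       # days after B-maximum (placeholders)
mag = np.array([19.60, 19.82, 20.10, 20.43, 20.78])   # apparent magnitudes (placeholders)
err = np.array([0.05, 0.05, 0.06, 0.07, 0.08])

mask = (phase >= 5.0) & (phase <= 30.0)
slope, intercept = np.polyfit(phase[mask], mag[mask], 1, w=1.0 / err[mask])
print(f"decline rate = {100.0 * slope:.2f} mag per 100 days")
```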
A comparison of the light curves of LSQ14efd with SN 2004aw (Taubenberger et al. 2006) shows that the shift of the maximum in the different bands is compatible, within the errors, with the ones estimated for SN 2004aw, except for the maximum in the V filter, which is reached after ∼ 3 days for SN 2004aw and after 5 days for LSQ14efd (see Table 3). A comparison of the decline rates, in the range 5-30 days, shows good agreement between the two SNe; however, in the B band the decline rate measured for SN 2004aw is 6.96 ± 0.16 mag per 100 days, against 5.61 ± 0.78 mag per 100 days for LSQ14efd (see Fig. 2 in Taubenberger et al. 2006). This decline rate is slower than that of SN 1994I, which shows a faster evolution with a decline rate of ∼ 9 mag per 100 days in the B band (Elmhamdi et al. 2006).
As will be shown later, the spectroscopic evolution of LSQ14efd shows some similarities with type Iax SNe; therefore, it is useful to compare its photometric evolution with Iax and peculiar type Ia SNe to determine if there are any similarities. In Fig. 5, we present the absolute I-magnitude light curve of LSQ14efd compared with SN 1999ac (normal Ia), SN 2008A (SN-Iax) and SN 2004aw (SN-Ic, as discussed above). LSQ14efd does not show a secondary I-band peak as seen in normal SNe-Ia, and it has a wider peak and a slower post-maximum decline when compared with the type Iax SN 2008A. It bears most similarities with SN 2004aw, although the comparison is qualitative. The measured ∆m15 for LSQ14efd is reported in Table 3. We have applied the Phillips relation (Phillips 1993) for type Ia SNe to the ∆m15 of LSQ14efd and found an implied M_B = −19.3 ± 0.3 mag. This differs by ∼ 1.26 mag from the measured B-band peak of the SN (see Tab. 3), which implies that LSQ14efd does not satisfy the SN-Ia relation and does not comfortably fit with this physical explanation. We stress that type Iax SNe also do not satisfy the Phillips relation.
We then compared the light curves of LSQ14efd with those of a sample of SNe-Ib/c. In particular, the evolution of the light curve in the R and V bands of LSQ14efd was compared with the templates by Drout et al. (2011) (Fig. 6). Those templates are the result of the interpolation over the normalized V and R light curves of 10 well-sampled literature SNe-Ib/c. The weighted mean flux density was then extracted over the time interval -20 to 40 days. We notice that the evolution of the V band of LSQ14efd follows the post-maximum decline of the template within 1σ, while the pre-maximum evolution differs significantly from the template, showing that the light curve of LSQ14efd is broader than the ones in the sample. Instead, the R-band evolution of LSQ14efd follows the template nicely, but we note that in the R band we are missing data at early phases, when the most significant deviation is observed in the V band.
The dereddened B − V, V − R, B − R and B − I colour evolutions are shown in Fig 7. We measure a B − V colour of 0.4 mag at ∼ 10 days before B-maximum. It then increases to about 1 mag at ∼ 5 days and stays more or less constant around this value. The V − R colour increases from ∼ 0 mag to around 0.5 mag within 15 days after B-maximum and settles at around this value for the subsequent days. The B − R colour steadily increases from ∼ 0.3 mag to about 1.4 mag within ∼ 20 days after B-maximum. The B − I colour increases from ∼ 0.5 mag to around 1.7 mag at ∼ 20 days after B-maximum.

Figure 3. Left panel: comparison of the early decline of LSQ14efd with other CC SNe (SN 1993J, Lewis et al. 1994; SN 2008D, Soderberg et al. 2008; SN 2011dh, Arcavi et al. 2011; SN 2006aj, Campana et al. 2006; iPTF15dtg, Taddia et al. 2016; and SN 1987A, Shelton 1993). The compared bands are given in the label. Phases refer to the days since explosion, in logarithmic scale. Right panel: comparison of the B − V colour evolution of LSQ14efd with the type Ia SNe iPTF14atg (Cao et al. 2015) and 2012cg (Marion et al. 2015) and the type Ib SN 2008D (Soderberg et al. 2008). Phases refer to the days from the B-maximum.
The dereddened colour evolutions have been compared with those of some type Ic SNe (2004aw, Taubenberger et al. 2006; 2003jd, Valenti et al. 2008a; 2007gr, Valenti et al. 2008b, Hunter et al. 2009; and 1998bw, Patat et al. 2001), with those of some type Ia SNe (1999ac, Phillips et al. 2006; 1997br, Li et al. 1999), and with those of some type Iax SNe (2008A, Foley et al. 2013; and 2005hk, Phillips et al. 2007).
The colour evolutions of type Ic SNe are well defined in the B − V , V − R and B − I colours (see Fig. 7, upper panel). In the V − R and B − I evolution we can see that SN 1998bw shows a slightly different trend after ∼ 15 days, with a flattening in the evolution that is shown only ∼ 10 days after in the other SNe of this sample. The trend of the B − R evolution is also quite similar for all the SNe of this sample. We notice that the colour evolution of LSQ14efd is very similar to those of type Ic SNe. They show fairly similar trends in the rising part of curves and they all subsequently flatten to comparable values. The similarity between dereddened colours of LSQ14efd and those of the type Ic comparison sample supports the hypothesis of no extinction within LSQ14efd host galaxy (Sect. 2).
The B − V, B − R and B − I colour evolutions of type Ia and Iax SNe also show a broadly similar trend. The V − R colour evolution is the most diverse among the objects of the sample (see Fig. 7, lower panel). In V − R the colour evolution of LSQ14efd is similar to that of SN 2005hk, while SN 2008A appears to diverge from the broad trends of the set. The B − V, B − R and B − I colour evolutions of LSQ14efd seem to be bluer at early phases and around maximum with respect to those of SNe 2005hk and 1999ac. In the B − R and B − I colour evolutions we point out that the curves are steeper for Ia and Iax SNe with respect to those of LSQ14efd.

Table 3 notes (row for LSQ14efd: 0.9 ± 0.3, 0.3 ± 0.1, 0.1 ± 0.1, 0.02 ± 0.02): (a) epoch relative to the B-maximum; (b) a distance modulus of µ = 37.35 ± 0.15 mag and an extinction E(B−V) = 0.0376 ± 0.0015 mag were adopted for LSQ14efd; (c) a distance modulus of µ = 34.17 ± 0.23 mag and an extinction E(B−V) = 0.37 ± 0.10 mag were adopted for SN 2004aw (Taubenberger et al. 2006); (d) decline rate in the time range 5−30 days after B-maximum, in mag per 100 days.
Quasi-bolometric light curve
A quasi-bolometric light curve has been calculated by integrating the observed optical flux over wavelength. The estimated Bg'Vr'Ri'I apparent magnitudes were converted into monochromatic fluxes at the effective wavelength of each filter. After correcting for Galactic extinction (Sect. 2), the resulting Spectral Energy Distribution (SED) has been integrated over the full observed wavelength range, assuming zero flux at the limits. The flux was estimated at the phases in which V-band observations were available. For the other bands for which photometry was not available, the magnitudes were calculated by interpolation of the values from the photometry on nearby nights. Finally, using the redshift-based distance of the galaxy (Sect. 2), the integrated fluxes were converted into luminosity. We note that the first point was excluded when building the quasi-bolometric light curve. The quasi-bolometric light curve of LSQ14efd is shown in Fig. 8 together with some other type Ia and Ic SNe. The luminosity at the peak is L = 3.9 × 10^42 erg s^-1. This has to be considered as a lower limit, since we constructed the quasi-bolometric light curve only across the wavelength limits sampled by the observed filter set, namely the Bg'Vr'Ri'I bands. Lyman et al. (2014) developed a method to estimate the bolometric correction from data which are limited in wavelength. The value of the peak bolometric luminosity estimated with this method is L ∼ 5 × 10^42 erg s^-1. In Fig. 8, we can see that the post-maximum slope of the light curve of LSQ14efd compares well with those of the type Ic SNe 1998bw, 2003jd, iPTF15dtg and 2004aw, while its peak luminosity is intermediate between 2003jd and 2004aw. It is significantly less luminous than the SN-Ic-BL 1998bw but with a similar width. LSQ14efd shows a different behaviour from the type Ic SNe 1994I and 2007gr, being more luminous and showing a broader bolometric curve. Instead, iPTF15dtg has a comparable luminosity with respect to LSQ14efd, but it is much wider and shows a slower decay. We note that the type Ia SN 1999ac shows an evident double peak in the curve, which is not present in the light curves of the other SNe plotted. Also, the peak luminosity of the type Ia SN 1999ac is much brighter than that of LSQ14efd. SN-Ia 1997br has a comparable light-curve shape with respect to LSQ14efd but it is more luminous at every epoch. The peak luminosity of LSQ14efd is not far from that of the type Iax SN 2008A, but it shows a wider peak and a slower decay. We can conclude from this comparison that the photometric evolution of LSQ14efd is consistent with those of the known population of type Ic SNe but also similar to peculiar type Iax SNe.
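The mag-to-flux conversion and wavelength integration described above can be summarised in a few lines. In the sketch below the effective wavelengths and flux zero points (Vega for BVRI, AB for g'r'i') are indicative values only, and the magnitudes are the peak values quoted in Sect. 3 rather than a single-epoch SED; the result is therefore only meant to show that the arithmetic lands near the quoted peak luminosity.

```python
# Sketch of a quasi-bolometric point: magnitudes -> fluxes -> integral -> luminosity.
import numpy as np

# B, g', V, r', R, i', I effective wavelengths in Angstrom (approximate).
lam_eff = np.array([4380., 4770., 5450., 6215., 6470., 7545., 7980.])
# Approximate flux zero points in erg/s/cm^2/A (Vega for BVRI, AB for g'r'i').
f0 = np.array([6.32e-9, 4.79e-9, 3.63e-9, 2.82e-9, 2.18e-9, 1.91e-9, 1.13e-9])
mags = np.array([19.47, 19.11, 18.97, 18.87, 18.84, 18.96, 18.66])  # peak values, Sect. 3

flux = f0 * 10.0 ** (-0.4 * mags)                  # monochromatic fluxes
F = np.trapz(flux, lam_eff)                        # integrate over wavelength
d_cm = 10.0 ** ((37.35 + 5.0) / 5.0) * 3.086e18    # distance from the adopted modulus
L = 4.0 * np.pi * d_cm ** 2 * F
print(f"quasi-bolometric L ~ {L:.1e} erg/s")       # ~ a few x 10^42 erg/s
```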
Data sample and reduction
We performed a spectroscopic monitoring campaign of LSQ14efd at the ESO NTT at La Silla, Chile. Eight epochs of optical spectra were acquired with EFOSC2, from 8 days before B-maximum until 37 days after B-maximum. Details of the spectroscopic observations and the characteristics of the employed instrumentation are listed in Table 4. The pre-reduction of the spectra (trimming, overscan, bias and flat-field correction) was performed using the PESSTO pipeline, which is based on the standard IRAF tasks. Comparison spectra of arc lamps, obtained in the same instrumental configuration as the SN observations, were used for the wavelength calibration. Observations of spectrophotometric standard stars were used for the flux calibration. Atmospheric extinction corrections were applied using tabulated extinction coefficients of each telescope site. We note that the data presented and analysed in this paper were custom re-reduced, and differ somewhat from those in the formal public release of the PESSTO spectral data products. We obtained some better quality results with more tailored and manual reductions, particularly with manual fringing corrections for the fainter spectra. In this case, the reduction of the spectra followed the standard procedure, with particular attention to the flat-field, considering just a few columns (100) next to the target, so as to improve the removal of the fringing in the red bands. The background has also been optimized, considering an adjacent area, to minimize the contamination from the host galaxy. An example of the difference in the data quality between the manual reduction and the pipeline is shown in Fig. 9.
A comparison of synthetic B, V and r photometry obtained from the spectra, through the IRAF task CALCPHOT, with the observed photometry at similar epochs was performed to check the quality of the flux calibration. When required, a scaling factor was applied to the spectra. Finally, calibrated spectra were dereddened for the total extinction and corrected for the estimated redshift.
Data analysis
The time evolution of the optical spectra of LSQ14efd, obtained from −8 to 37 days with respect to B-maximum and covering the whole photospheric phase, is shown in Fig. 10. The corresponding line identifications are presented in Fig. 11. Spectra obtained one week before maximum show a blue continuum with some features due to Fe II lines (λλλ4440, 4555, 5169). A feature very likely due to Si II (λ6355) appears around phase −5. Branch et al. (2002) pointed out that the feature identified as Si II in type Ib and Ic SNe could instead be a misleading identification of the Hα line at high velocities (or "detached hydrogen"; see also Parrent et al. 2015). As shown below in this section, the estimated velocities of Si II and Fe II are in agreement, within the errors, which suggests to us that the identification as Si II is the more plausible explanation.
There is no evidence of narrow interstellar medium Na ID absorption in the spectra, leading us to conclude that the absorption due to the host galaxy is negligible, or at least not measurable with the data available. During the evolution of the photospheric phase, the spectra obtained close to maximum light show that the intensity of the blue continuum decreases significantly. It is also possible to see prominent features due to Fe II (λλλ4555, 5169, 5535) and the Si II line (λ6355). A possible feature due to Ca II (λ8542) starts to appear. Late spectra show a continuum dominated by the iron-group elements; we find Fe II (λλ4555, 4924) and Fe II (λλ5169, 5535), while the Si II line (λ6355) is still barely visible. There is no evidence of forbidden lines arising in the last spectrum, leading us to conclude that LSQ14efd is still in the photospheric phase. A comparison with other type Ic SNe is shown in Fig. 12. The earliest LSQ14efd spectrum resembles the featureless continuum of SNe 2007gr, iPTF15dtg and 2004aw (admittedly having low S/N). The spectrum around B-maximum does not show any major differences with those of SNe 2004aw and iPTF15dtg. SN 2007gr shows a generally good agreement, but with the presence of extra features around 4000Å and 5000Å. The feature at around 8200Å is possibly due to Ca II, as the comparison with SN 2004aw seems to suggest. At the later stages, about 1 month past maximum, the spectrum of LSQ14efd exhibits similarities with iPTF15dtg, while there is a good agreement with SNe 2007gr and 2004aw, though they exhibit stronger Ca II and O I lines.
Fig. 13 shows the early spectrum of LSQ14efd (a few days before B-maximum) compared with pre-maximum spectra of type Ia SN 1999ac and type Iax SN 2008A. In the blue part of the spectrum, LSQ14efd has a lower signal to noise than SNe 1999ac and 2008A, while the rest resembles an almost featureless continuum. The LSQ14efd spectrum around maximum differs from those of SNe 1999ac and 2008A mainly in the blue part. The late spectrum of LSQ14efd is again similar to those of SNe 1999ac and 2008A, though the intensity of the Si II feature is weaker for LSQ14efd. A comparison with the SNe-Ic and SNe-Ic-BL templates (Liu et al. 2016; Modjaz et al. 2014) shows a fairly good similarity with SNe-Ic for the early spectra, considering the width of the lines present in both LSQ14efd and the template. We then note a significant deviation for the late ones, since the strength and width of the emission lines of LSQ14efd differ from those of the template. A similar comparison of the LSQ14efd spectra with the template for SNe-Ic-BL also seems to show good agreement at all epochs. This strengthens the peculiarity of LSQ14efd.
The ejecta velocities produced by the two different physical explosion mechanisms can be different. For comparison, ejecta velocities can be estimated from a Gaussian fit of the absorption profile of the P-Cygni features (after correction for the redshift of the host galaxy). The uncertainties on the estimated velocities result from propagating the uncertainties obtained from the measurement, and were confirmed by several repeated tests. For LSQ14efd, the ∼ −7 d spectrum indicates a Si II velocity of vej ∼ 12300 km s −1. At maximum, the fit to the same line gives vej ∼ 10000 km s −1, which decreases to ∼ 8000 km s −1 after 10 days. At around 20 days, we measure the Si II velocity to drop to ∼ 5000 km s −1 and also estimate the Fe II lines to show an outflow of ∼ 3000 km s −1. A comparison with SN 2004aw shows that its Si II velocity goes from ∼ 12700 km s −1 around maximum to ∼ 9300 km s −1 after 10 days, values which are comparable with those estimated for LSQ14efd, within the errors. In the same temporal range, the Fe II velocity decreases from ∼ 12000 km s −1 to ∼ 8100 km s −1. At around 20 days, SN 2004aw shows a Si II velocity of ∼ 6000 km s −1, in agreement with the Si II value for LSQ14efd, within the errors.
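As an illustration of this procedure (a schematic sketch only, not the exact fitting code used for the published values), a Gaussian can be fitted to the absorption trough of a P-Cygni profile and the blueshift of its minimum converted to a velocity. The spectrum is assumed to be already corrected to the host rest frame, and the function names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5  # speed of light in km/s

def absorption(wave, depth, center, sigma, continuum):
    # inverted Gaussian on top of a flat pseudo-continuum
    return continuum - depth * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

def pcygni_velocity(wave_rest, flux, line_rest, window=300.0):
    """Velocity (km/s) of the blueshifted absorption minimum of a
    P-Cygni profile, from a Gaussian fit around line_rest (Angstrom)."""
    sel = np.abs(wave_rest - line_rest) < window
    w, f = wave_rest[sel], flux[sel]
    guess = [f.max() - f.min(), w[np.argmin(f)], 30.0, np.median(f)]
    popt, pcov = curve_fit(absorption, w, f, p0=guess)
    velocity = C_KMS * (line_rest - popt[1]) / line_rest
    velocity_err = C_KMS * np.sqrt(pcov[1, 1]) / line_rest
    return velocity, velocity_err

# e.g. for Si II: pcygni_velocity(wave_rest, flux, 6355.0)
```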
We then compared the spectroscopic characteristics of LSQ14efd with those observed for a sample of SNe-Ic (Modjaz et al. 2015; Liu et al. 2016). In particular, the evolution of the Fe II (5169Å) line velocity of LSQ14efd has been compared with the trends found by Modjaz et al. (2015) for SNe-Ic and SNe-Ic-BL, see Fig. 16. We note that LSQ14efd shows an initial Fe II velocity slightly higher than the SNe-Ic trend, but still in agreement within the uncertainties.
Following the comparison of the light curve of LSQ14efd with those of type Iax SNe, we then proceed to study the Si II velocity evolution. Fig. 15 shows the evolution of the Si II velocities of LSQ14efd compared with other type Ic SNe (SN 2004aw, Taubenberger et al. 2006; SN 2003jd, Valenti et al. 2008a; SN 1994I, Sauer et al. 2006; SN 2007gr, Valenti et al. 2008b; and SN 1998bw, Patat et al. 2001), type Ia SNe (1999ac, Garavini et al. 2005; 1991T, Phillips et al. 1992; 1997br, Li et al. 1999; SN 2003du, Stanishev et al. 2007) and type Iax SN 2008A (Foley et al. 2013). Benetti et al. (2005) found that in type Ia SNe there is a trend for the Si II velocity to reach a value of ∼ 6000 km s −1 at ∼ 30 days. Type Iax SNe, instead, are characterized by low Si II velocities. Foley et al. (2013) show, in their Fig. 19, the typical range of velocities for type Iax SNe to be 5000 − 8000 km s −1 around maximum. We note, however, that SN 2008A displays higher velocities than almost all the objects in this class (∼ 8000 km s −1). Type Ic SNe, instead, show a wide diversity in ejecta velocities, varying from ∼ 30000 km s −1 to ∼ 10000 km s −1 around maximum. In summary, LSQ14efd shows a velocity evolution (traced by Si II and Fe II) which is quantitatively similar to that of SN 2004aw. We find that the velocities are compatible, within the errors, at almost all epochs. LSQ14efd velocities are significantly slower than those of the SN-Ic-BL 2003jd at all epochs, but are faster than those of SN 2007gr. iPTF15dtg shows a higher pre-maximum Si II velocity than LSQ14efd, but starting from B-maximum they become comparable, and the trend is also very similar to that of LSQ14efd.
The comparison with type Iax SN 2008A shows that the estimated Si II velocities of LSQ14efd have a similar slope but higher values than typical Iax SNe, while being low compared to those of type Ia SNe. We also note that there is a clear difference in the slope of the Si II evolution between type Ia SNe and type Ic; the former show a slower decrease. The slope of the velocity decrease for LSQ14efd is more similar to those of type Ic SNe.
56 Ni AND EJECTED MASS
The light curve model developed by Arnett (1982) gives an analytical description of the light curves of type I SNe. The original paper by Arnett (1982) contains a typographical error in the numerical factor of Arnett's equation 54, which is discussed in Lyman et al. (2016) and has been corrected in equation 3. The analysis performed at the peak of the light curve can lead to a rough estimate of the total mass of the SN ejecta (Mej), the nickel mass (MNi) and the kinetic energy (E kin). The main assumptions of the model are: a homologous expansion; spherical symmetry; a constant optical opacity; no mixing of 56Ni; and radiation-pressure dominance. Furthermore, it also adopts the diffusion approximation for photons, which can reasonably be applied in the early phases when the ejecta are optically thick, due to the high density. The time evolution of the SN luminosity is given by

L(t) = MNi εNi e^(−x²) ∫_0^x 2z e^(z² − 2zy) dz, (1)

where x ≡ t/τm, y ≡ τm/2τNi and εNi = QNi/(MNi τNi); QNi is the energy release of the 56Ni decay and τNi is the e-folding time of the 56Ni decay. The width of the peak of the bolometric light curve is related to the effective diffusion time and is given by

τm = [2 kopt Mej / (β c vsc)]^(1/2), (2)

where vsc is the velocity scale of the expansion, kopt is the optical opacity and β is an integration constant. Furthermore, assuming a homogeneous density of the ejecta, it is possible to relate the kinetic energy to the photospheric velocity (v ph) at maximum through the relation (Arnett 1982)

v²ph ≈ (10/3) E kin / Mej. (3)

We first estimate the width of the light curve of LSQ14efd, τ ∼ 20 days, following the prescription in Lyman et al. (2016), where the light-curve width is the number of days required to return to the same magnitude the SN had 10 days prior to maximum. Then, assuming kopt = 0.06 cm² g⁻¹ (Lyman et al. 2016) in equation 2 and considering the previously estimated Fe II velocity, we estimated Mej = 6.3 ± 0.5 M⊙.
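As a rough numerical illustration of these relations (not the authors' actual code), the sketch below inverts equation 2 for the ejecta mass, applies equation 3 for the kinetic energy, and uses the common "Arnett's rule" peak relation for the nickel mass. The constant β = 13.8 and the 56Ni/56Co deposition coefficients are standard assumed values, not numbers taken from this paper; the peak luminosity and rise time must be supplied from the bolometric light curve.

```python
import numpy as np

M_SUN = 1.989e33      # g
C_LIGHT = 2.998e10    # cm/s
BETA = 13.8           # Arnett (1982) integration constant

def ejecta_mass(tau_m_days, v_kms, kappa_opt=0.06):
    """Invert equation 2: M_ej = beta c v tau_m^2 / (2 kappa). Solar masses."""
    tau_m = tau_m_days * 86400.0
    v = v_kms * 1e5
    return BETA * C_LIGHT * v * tau_m ** 2 / (2.0 * kappa_opt) / M_SUN

def kinetic_energy(m_ej_msun, v_ph_kms):
    """Equation 3 for homogeneous ejecta: E_k ~ (3/10) M_ej v_ph^2, in erg."""
    return 0.3 * m_ej_msun * M_SUN * (v_ph_kms * 1e5) ** 2

def nickel_mass(l_peak_erg_s, t_rise_days):
    """'Arnett's rule': the peak luminosity equals the instantaneous
    56Ni + 56Co energy deposition (coefficients in erg/s per M_sun)."""
    eps = 6.45e43 * np.exp(-t_rise_days / 8.8) + 1.45e43 * np.exp(-t_rise_days / 111.3)
    return l_peak_erg_s / eps

# With kappa_opt = 0.06 cm^2/g, a width of ~20 days and a velocity of
# ~12000 km/s, these relations give M_ej and E_kin close to the values
# quoted above for LSQ14efd.
```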
We then estimate the nickel mass synthesized through equation 1, evaluated at the time of the bolometric peak, where we have a reliable measurement. We estimate MNi = 0.25 ± 0.06 M⊙. Finally, through equation 3, we obtained an estimate of the kinetic energy, E kin = 5.6 ± 0.5 × 10 51 erg. We note that since MNi depends on the peak luminosity and has been estimated from a quasi-bolometric light curve, we should consider this value as a lower limit. Considering the peak luminosity inferred from the method developed by Lyman et al. (2014), the nickel mass could increase up to ∼ 0.32 M⊙. The estimated physical parameters are generally higher than those of other SNe-Ic, except for the nickel mass, which is close to that estimated for SNe 2004aw (Taubenberger et al. 2006) and 2003jd (Valenti et al. 2008a). We note that a recent work by Mazzali et al. (2017) revisited the explosion parameters for SN 2004aw, but they are still in agreement, within the errors, with the ones from Taubenberger et al. (2006) used in this work. As expected from the empirical comparison of the light curves, the ejecta mass and 56Ni mass are larger than those found for type Ic SN 1994I (Nomoto et al. 1994), and somewhat comparable with the higher values found for broad-lined type Ic SNe such as SN 1998bw (Galama et al. 1998; Nakamura et al. 2000). iPTF15dtg is more energetic and has a more massive envelope and a larger nickel mass than LSQ14efd. Table 5 contains a quantitative comparison. A comparison of the explosion parameters with the average values found by Lyman et al. (2016) is also shown in Table 5. We note that LSQ14efd generally has higher values than the average found for normal SNe-Ic, but somewhat similar to those found for SNe-Ic-BL. We also note that the LSQ14efd velocity is higher than the average value found for SNe-Ic but still lower than those of SNe-Ic-BL. We point out that the estimated kinetic energy has to be considered as an upper limit, since spherical symmetry was adopted in the explosion model for all the SNe in Table 5. Indeed, Taubenberger et al. (2009) show that more than half of all stripped-envelope CC-SN explosions may be significantly aspherical.
As a consistency check, we have also estimated the explosion parameters assuming that LSQ14efd is a possible type Ia SN, using the same model but with different assumptions for the physical constants, more appropriate for an exploding CO white dwarf. In this case we considered kopt = 0.3 cm² g⁻¹ (Stritzinger et al. 2006) and therefore obtained a different ejecta mass of Mej = 1.0 ± 0.2 M⊙ and E kin = 0.6 ± 0.2 × 10 51 erg. The estimated nickel mass remains 0.25 ± 0.06 M⊙. As expected, the difference in the values of Mej and E kin under the two assumptions is simply due to the different values of kopt, while MNi remains the same, as it depends on the peak luminosity. We also compared these results with those of type Iax SNe as reported in Magee et al. (2017), where MNi for SNe-Iax ranges over ∼ 0.03−0.6 M⊙. The estimated MNi for LSQ14efd falls in this interval, and it is not possible to use this parameter to discriminate between the two possible scenarios. We then compared the results obtained for LSQ14efd with those of a sample of SNe-Ic (Drout et al. 2011). In particular, the Ni mass versus the absolute magnitude in the R band (MR) of LSQ14efd was compared with the trend found by Drout et al. (2011) for SNe-Ic and SNe-Ic-BL (Fig. 17). Drout et al. (2011) derived the 56Ni mass through the light curve models from Valenti et al. (2008a), which are based on the Arnett (1982) formalism. We note that LSQ14efd follows the trend and is in agreement with the best-fit evolution, within the uncertainties.
CONCLUSION
We have presented the photometric and spectroscopic follow-up of LSQ14efd within the PESSTO survey, covering a period of ∼ 100 days. LSQ14efd exploded in an anonymous galaxy at a distance modulus µ = 37.35 ± 0.15 mag and does not appear to suffer from strong reddening (E(B − V ) = 0.0376 ± 0.0015 mag). Early photometric observations show the probable detection of the cooling of the shock break-out event in the light curve of LSQ14efd. A comparison with other CC-SNe shows a similarity between the shock break-out cooling detected in LSQ14efd and that of SN 2008D (see Fig. 3). The well-sampled colour evolution was studied to investigate the possibility that LSQ14efd is a SN-Ia with an indication of interaction of its ejecta with either a companion or nearby CSM, such as SN 2012cg and iPTF14atg (see Section 3.2). We stress that this early emission can also have different interpretations for CC-SNe, such as the presence of an extended envelope previously ejected by the progenitor, or some outwardly mixed 56Ni. But with only a

Figure 14. Comparison of the spectra of LSQ14efd at different epochs with the templates (Modjaz et al. 2014; Liu et al. 2016) for SNe-Ic (dashed red line) and SNe-Ic-BL (solid red line). The epochs refer to the V maximum. Spectra have been flattened through the SNID package (Blondin & Tonry 2007) for the comparison with the templates.

We point out that a characteristic property of the photometric evolution of LSQ14efd is the shift in time between the peaks in the red and blue bands, with an offset of ∼ 8.9 days between the B and I bands, very likely due to different time scales in the cooling of the ejecta. Another characteristic is the slow decline rate of the light curves (in the interval 5-30 days post B-maximum, 5.61 ± 0.78 mag per 100 days in the B band and 3.28 ± 0.33 mag per 100 days in the R band), see Table 3. The colour evolution of LSQ14efd (Fig. 7) resembles that of type Ic SNe, but does not differ much from that of SNe Iax either. The quasi-bolometric light curve (Fig. 8) is similar to those of type Ic SNe 2004aw and 2003jd but also comparable with that of type Iax SN 2008A. We also compared some observables of LSQ14efd with the trends found for samples of SNe-Ic (Drout et al. 2011; Modjaz et al. 2015; Liu et al. 2016). In particular, the LSQ14efd R-band light curve follows the template evolution found in Drout et al. (2011), while the V band differs significantly in the pre-maximum evolution, showing that LSQ14efd has a broader light curve than the average SN-Ib/c (Fig. 6). LSQ14efd follows the 56Ni mass vs MR correlation found by Drout et al. (2011).

Figure 15. Si II velocity evolution of LSQ14efd compared with type Ic SNe 2004aw (Taubenberger et al. 2006), 2003jd (Valenti et al. 2008a), iPTF15dtg (Taddia et al. 2016), and with type Ia SNe 1999ac (Garavini et al. 2005), 1991T (Phillips et al. 1992), 1997br (Li et al. 1999), SN 2003du (Stanishev et al. 2007) and type Iax 2008A (Foley et al. 2013). Epochs refer to the B-maximum.

Figure 16. Comparison of the Fe II λ5169 velocity of LSQ14efd with the general trend for SNe-Ic and SNe-Ic-BL found in Modjaz et al. 2015. Epochs refer to the V-maximum.
The spectroscopic evolution of LSQ14efd shows Si II and Fe II lines and a likely Ca II feature, and it is similar to that of SN 2004aw at epochs consistent with the light curve evolution. The evolution of the LSQ14efd Si II velocities (see Fig. 15), from B-maximum (∼ 10000 km s −1) until 20 days after (∼ 5000 km s −1), is very similar to that of SN 2004aw, which is intermediate between "standard" SNe-Ic and the very energetic ones, like SN 1998bw. Recently, it has also been proposed that SN 2004aw is a "fast-lined" SN rather than a BL-SN (Mazzali et al. 2017), strengthening the peculiarity of this SN. However, the spectra of this object present some similarities with peculiar type Ia and Iax SNe, in particular in the late phase (see Figs. 12 and 13), and based on the spectra alone we cannot determine whether this is a peculiar SN-Ia or a SN-Ic. A comparison with the SNe-Ic and SNe-Ic-BL templates (Liu et al. 2016; Modjaz et al. 2014) shows a fairly good similarity with SNe-Ic for the early spectra but a significant deviation for the late ones, and a better agreement with the SNe-Ic-BL templates at all epochs. This strengthens the peculiarity of LSQ14efd. The Fe II evolution of LSQ14efd was compared with the average trend found by Modjaz et al. (2015) for SNe-Ic, and it shows a comparable behaviour at early epochs. In contrast, LSQ14efd shows a rather different Si II evolution with respect to SNe-Ia and Iax.
Considering the overall properties shown by LSQ14efd, we favour a core-collapse origin for LSQ14efd.
We applied a simple model for CC-SNe to the quasi-bolometric light curve to calculate the physical parameters of LSQ14efd. We obtained a synthesized MNi = 0.25 ± 0.06 M⊙, an ejected mass of 6.3 M⊙ and a kinetic energy of 5.6 × 10 51 erg. A comparison of the explosion parameters with the average values found by Lyman et al. (2016) shows that LSQ14efd has values closer to those found for SNe-Ic-BL than to those for normal SNe-Ic. No evident association with a GRB was identified.
The increasing number of discoveries of peculiar SNe-Ic, which represent a link between energetic Ic events that are not connected with GRBs (such as SN 2003jd) and those that show a clear association with GRBs (such as GRB 980425/SN 1998bw), gives support to the idea of a continuum of properties between broad-lined and "standard" SNe-Ic events, rather than suggesting the existence of clearly "separated" classes of SNe-Ic. LSQ14efd is more energetic than standard SNe-Ic, but it is not as energetic as the SNe-Ic-BL that are often associated with GRBs. This again shows the diversity of the type Ic SN class, which was already pointed out by Taubenberger et al. (2006).
LSQ14efd confirms the existence of an unresolved ambiguity in SN classification, particularly when the classification into SN types relies only on the photometric evolution and/or early-stage spectra. | 2017-07-14T21:21:04.000Z | 2017-07-14T00:00:00.000 | {
"year": 2017,
"sha1": "5d6e092a3c8a16c3826bd5c0c2395e38fa203bce",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/471/2/2463/19491243/stx1709.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "6a553fec17abf1b87f112d2f7a7f6215e3a2d6c0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
236816380 | pes2o/s2orc | v3-fos-license | Entrepreneurial Orientation and Business Performance: An Assessment of Start-up Companies
The purpose of the present research is to analyse the influence of the core constructs of entrepreneurial orientation (EO) (risk taking, innovativeness, and proactiveness) on the business performance (BP) of start-up enterprises. Related literature on embracing entrepreneurial orientation to increase business performance is discussed. Profitability index, size, age, return on sales, technological advancement, profit and sales, among others, are used to relate EO to BP. Linear regression models, descriptive statistics, Pearson's correlation and Cronbach's alpha for the reliability test, among other methods, are used to relate EO to BP statistically. A sample of 53 start-up firms from Addis Ababa, Ethiopia, was chosen for data interpretation. It is found that a positive relationship exists between entrepreneurial orientation (EO) and business performance (BP). This study aims to create awareness and knowledge among start-up entrepreneurs to adopt entrepreneurship. In a globally competitive and dynamic environment, to outperform rivals, start-up companies are expected to become entrepreneurial in order to meet long-term goals of superior performance by discovering new and better entrepreneurial opportunities. To focus more on entrepreneurial opportunity, attention should be given to knowledge creation and understanding of EO processes, practices, and decision making, which will help firms gain competitive advantage for higher performance and, as a whole, develop the country's economy.
INTRODUCTION
Entrepreneurship, considered a driver of economic development (Canina, Palacios, and Devece, 2012), has significant strategic importance for sustainable growth, competitive advantage and excellence (Zahra, 1991; Hamilton and Harper, 1994). Entrepreneurship is defined as a value-creating process that brings together unique resources to exploit and maximise opportunity (Stevenson and Jarillo-Mossi, 1986). The themes most commonly used in definitions of entrepreneurship include the creation of wealth, change, innovation, enterprise, employment, growth and value (Morris, Kuratko, and Covin, 2008). As per Williams, Round, and Rodgers (2010), there is little consensus about the definition of entrepreneurship. Specifically for Ethiopia, entrepreneurial orientation (EO) is all the more needed, as it is a major solution provider for the main developmental challenges, which, according to the World Bank in Ethiopia, include promoting rapid economic growth, accelerating poverty reduction, making significant progress in job creation, and improving living standards.
Entrepreneurial orientation (EO) is one of the most crucial factors in the success of a business (Azlin, Amran, Afiza, and Zahariah, 2014). The concept of EO is relevant worldwide, and several empirical studies have been conducted on it in national contexts in different countries (Thorsten, Tina and Sascha, 2016). Several other studies have used EO to estimate firm performance (Hoq and Chauhan, 2011; Fauzul, Takenouchi and Yukiko, 2010; Tajeddini 2010; Schindehutte, Morris and Kocak, 2008; Wang, 2008). Quite a few meta-studies have indicated a positive relationship between EO and firm performance, where EO captures the entrepreneurial practices of firms from the perspectives of risk-taking ability, proactiveness and innovativeness (Javalgi and Todd, 2011; Miller, 1983). EO is viewed as a strategy development process that offers companies a foundation for entrepreneurial decision making and related methods (Lumpkin and Dess, 1996; Wiklund and Shepherd, 2005).
These days, EO has become an important element of the global strategy-making process, relating several aspects of management, especially strategy and entrepreneurship (Miller and Friesen, 1982). Modern EO models combine three key aspects of start-ups, viz. proactiveness, innovativeness, and risk taking. EO is an advanced corporate strategy-making system with several aspects, such as the organizational framework or the behaviour of management (Begley and Boyd, 1987). EO is therefore considered a suitable and effective lens for understanding corporate entrepreneurship closely. Modern corporate culture expects an organization to be strongly entrepreneurial in order to develop and survive in the market. This is especially relevant for start-ups, which are new in business and constantly under huge market pressure. EO is therefore relevant to running a business successfully. The rationale behind the adoption of EO is that it is directly associated with the pursuit of business opportunities, which has a positive influence on the performance of the organization (Wiklund, 1999).
The purpose of the present research is to analyse the real influence of entrepreneurial orientation on organizational productivity and profitability. This research will analyse the models of EO to examine its effects on the performance of start-up firms in Addis Ababa, Ethiopia. The present research will also check whether the dimensions of entrepreneurial orientation that have an impact on business performance should be reinforced during the development phases of a start-up company.
Entrepreneurial Orientation (EO)
EO is usually understood as a top-management strategy oriented towards innovativeness, risk-taking ability, and proactiveness (Covin and Slevin, 1986; Lumpkin and Dess, 1996; Miller, 1983). Miller (1983) opined that a firm engaged in product-market innovation at first operates under considerable uncertainty in several respects and then emerges with advantages in the market, leaving its competitors behind. He further said that an organization could be entrepreneurial if it had risk-taking capability, regardless of its structural features. Some researchers regarded EO as a single-dimensional construct and claimed that its three structural aspects, viz. innovativeness, proactiveness, and risk taking, should be merged into one dimension.
First, innovativeness is the foremost priority in entrepreneurship. It is the tendency to engage with modern approaches and inventive measures that lead to new products and services (Miller, 1983). Innovation is the adoption of a new idea and/or behaviour (Zaltman, Duncan, and Holbek, 1973). The higher the innovation abilities, the more successfully a firm responds to dynamic conditions, develops new capabilities, adapts to change, and achieves better performance (Montes, Moreno, and Fernandez, 2004). Focus, cost leadership, and differentiation are Porter's strategies relating innovation to performance (Porter, 1985). Next comes proactiveness. It denotes an extension of organizational leadership closely linked with market competence (Wiklund, 1999), outperforming industry rivals (Lumpkin and Dess, 2001), continuously employing offensive tactics (Davidson, 1987), and forcibly acting to make changes rather than merely anticipating them (Bateman and Crant, 1993). Finally, risk taking is the extent to which an organization can take risks in terms of productivity and output (Covin and Slevin, 1991), failing and missing opportunities (Dickson and Giglierano, 1986), and the willingness to commit resources to projects, activities, and solutions with highly uncertain outcomes (Lumpkin and Dess, 1996). Constructive risk taking generates exploration and exploitation (Baird and Thomas, 1990) and prevents inertia and inaction (Busenitz and Barney, 1997; Miller and Friesen, 1982).
Moreover, Lumpkin and Dess (1996) argued that the constructs of EO can vary independently of one another. Furthermore, Kreiser, Marino, and Weaver (2002) argued that the choice of dimensionality depends on the research goals: whether precision is more important than simplicity, and whether a differential association is anticipated between the key variables and the sub-dimensions.
Going against this uni-dimensional concept of EO, some researchers emphasized that EO is a multidimensional construct and that the above-mentioned three constructs can be examined separately (Lumpkin and Dess, 1996; Kreiser et al., 2002). Lumpkin and Dess (1996) argue that the constructs of EO have different features and different requirements. Kreiser et al. (2002) argued that all the constructs may not be necessary at the same time.
Impacts of Entrepreneurial Performance
Theoretically, it is assumed that there is a strong association between entrepreneurial orientation and organizational performance, and the available real-world data confirm the impact of EO.
The research of Mathew and Robert (2007) showed that the effect of EO may produce varied consequences over the managerial life-cycle. So, it is necessary to recognize the impacts of entrepreneurial orientation during the developmental stage of EO. At present, no proven data are available on the correct implementation of EO, so it might be useful to show the characteristics of entrepreneurship development. This study will deconstruct the constituents of EO, investigate the performance impact of each during the developmental stage, and then assess models of EO. This will help in understanding EO from different perspectives.
Aspects of EO
The crucial facets of EO can be drawn from studies of strategy making in small businesses (Covin and Slevin, 1991; Miller, 1983; Miller and Friesen, 1978; Venkatraman, 1989). As proposed by Miller (1983), there are three dimensions of EO that have been identified and used consistently in empirical studies: innovativeness, risk taking and proactiveness. Innovativeness involves the invention of new strategies and the ability to engage in novel business operations, along with experimenting with new products and services. Additionally, the capacity to lead from the front by using updated techniques and performing R&D and technical improvement is important. Risk taking covers risky undertakings for companies and their personnel, such as entering new and untested markets, heavy borrowing, and committing substantial resources to ventures whose outcomes are uncertain. Proactiveness means seizing opportunities to demonstrate capability, and it also incorporates introducing new concepts and services by anticipating future needs. As per Lumpkin and Dess (1996), there are two further dimensions relevant to small and new business ventures: competitive aggressiveness and autonomy. To stay in the market, competing with rivals at the forefront is important; a strong competitive posture is characterized by an expansive stance and forceful actions against competitors' operations. Freedom from higher-level control is the crucial point of autonomy for business ventures with developmental targets. Several analysts have proposed that EO has no single fixed form, but that a combination of its different dimensions may be integrated to achieve similar outcomes (Knight, 1997). Recent scholars have proposed that the different dimensions can be grouped in various ways. Each dimension captures a different facet, and the independence of the EO dimensions is also reflected here. Consequently, the dimensions of EO may affect a company's output in separate ways (George, 2006). Specifically, conceptual development around entrepreneurial orientation should proceed in such a way that analysts can separate the merits and demerits of alternative models of entrepreneurial orientation, and the conditions under which alternative models are appropriate can also be judged (Covin, Green, and Slevin, 2006). Alternatively, different models could be employed, treating entrepreneurial orientation as either a unitary or a multidimensional concept. Comparative analysis can validate the research approach when the multiple dimensions of entrepreneurial orientation are associated with performance in the same and in different settings.
Performance Relationship
The theoretical framework of previous research rests on the premise that companies profit from innovation and new approaches, along with confidence in their own initiative (Lumpkin and Dess, 1996). In an era of rapid change and shortened product and business-model life cycles, it becomes essential to search for new initiatives and approaches to potential revenue streams. Hence, companies may achieve profits by adopting entrepreneurial orientation. Furthermore, such companies constantly produce new initiatives while taking risks in their product-market strategies (Miller and Friesen, 1982). In addition, efforts to understand customer needs and to position products and services accordingly result in greater output (Ireland, Hitt, and Sirmon, 2003). Theoretical arguments therefore suggest that entrepreneurial orientation leads to better performance.
Performance Assessment
Performance is a multidimensional concept, and the link between entrepreneurial orientation and performance depends on the indicators used to assess performance (Lumpkin and Dess, 1996). According to these studies, there is a wide variety of performance indicators, with a broad distinction between financial and non-financial measures. Non-financial measures cover goals such as satisfaction or fulfilment, as well as overall ratings made by management, whereas financial measures cover the estimation of indicators such as sales figures and return on investment (Venkatraman and Ramanujam, 1986).
Regarding financial outcomes, there are few differences among the various indicators (Combs, 1986). Conceptually, the distinction between growth and profitability measures, and the evidence of their respective advantages, can be analysed without much difficulty. On the other hand, most of the indicators are connected to each other theoretically and empirically (Murphy, Trailer, and Hill, 1996). For example, companies might invest heavily in long-term development, so there might be some trade-offs with short-term profitability. Most of the discussion of the EO-performance relationship has concentrated on financial aspects. Business firms with greater EO can target more favourable opportunities for achieving faster growth (Zahra and Covin, 1995). When it comes to the association between entrepreneurial orientation and non-financial goals, management satisfaction is not the only goal emphasized. Previous work on financial performance has relied largely on primary analysis or existing secondary data. Primary studies, in turn, may offer good opportunities to estimate several aspects of performance, such as comparative analysis against competing companies. These areas of research may carry some degree of bias due to social-desirability effects and certain limitations of the method.
III. RESEARCH ISSUES AND OBJECTIVES
As far as the current study is concerned, the major aims are to present analytical developments for understanding business and trade operations as well as the rise of entrepreneurial orientation (EO).
In earlier analytical studies, a solid connection between entrepreneurial orientation (EO) and business performance (BP) in medium and small businesses has been established (Wiklund and Shepherd, 2005), although several empirical works have called for more elaborate work on company size because of its impact on EO and on its relationship with performance. Reliability was assessed using Cronbach's alpha, a familiar and often-used coefficient of reliability. EO and business performance (BP) have a deep relationship, and Pearson's correlation analysis was used to assess the significance of this relationship. Entrepreneurial orientation (EO) is known to have a constructive impact on business performance. Any external or internal situations or conditions that might be significant or beneficial to the firm itself are ignored in these hypotheses. 1. H1: Innovativeness has a positive and significant impact on business performance.
2. H2: Proactiveness has a positive and significant impact on business performance.
3. H3: Risk-taking items have a positive and significant impact on business performance.

In order to examine the relationship between EO and business performance, several linear regression models were used. The models illustrated below used the increment in sales and the profitability index as dependent variables and then tested for the hypothesized relationships between the factors.
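For reference, the reliability coefficient mentioned above can be computed from the raw item scores as in the following minimal sketch; the function and variable names are ours, and this is not the exact software used in the study.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1)
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_variances.sum() / total_variance)
```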
Conceptual Frameworks
Some studies found that entrepreneurial orientation enables small firms or new ventures, defined as firms newly established or less than ten years old (Lussier, 1995), to do better than competitors and enhance firm performance (Ireland et al., 2003; Lumpkin and Dess, 2001; Wiklund and Shepherd, 2005; Zahra and Garvis, 2000). Following Lussier (1995), the current study considered start-ups to be firms newly established or less than ten years old. Now, let us look at the tables and their respective data. In Table 1, the demographics of each firm are analysed. Here, firms with sizes of 6-10 accounted for 47.2%, and the remaining ones for 26%. Most of the firms were 3-4 years old, which means they were new start-ups. However, most of these start-ups were in a technologically non-advanced category. Table 3 focuses on another important measure, the descriptive statistics of each firm. The average firm age was around 4.8 years and the average firm size was around 8. The average return on sales and profitability index were also computed: the average return on sales was 0.27 and the average profitability index was 1.48. Table 4 presents the reliability statistics. Here, Cronbach's alpha was found to be 0.875, which means that the internal reliability of the scale is very high and the EO measure is consistent. The correlation coefficients between EO and business performance are reported in Table 3. The performance-related variables, such as sales, profitability index, profit and investment, were positively and strongly correlated with the components of entrepreneurial orientation, viz. innovativeness, proactiveness and risk-taking items. Apart from estimating Cronbach's alpha coefficient of reliability, factor analysis was also performed to investigate the dimensionality of the scale. Results in Table 7 show that the Kaiser-Meyer-Olkin measure of sampling adequacy was 0.815 (greater than 0.50 is considered acceptable) and Bartlett's test of sphericity was significant (p < 0.001). To understand the relationship between entrepreneurial orientation (EO) and business performance (BP), multiple regression analysis was performed with business performance as the dependent variable and innovativeness, proactiveness and risk-taking items as the independent variables. Two models were built to analyse the relationship: in model 1 the dependent variable was the increment in sales, and in model 2 the dependent variable was the profitability index.
These two dependent variables were used because they do not depend on the size of the responding firm.
In the following table, the results of model 1 are presented. The results indicated that innovativeness had a positive and significant impact on the business performance of the firm. The overall model was significant, as indicated by a significant F value.
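A minimal sketch of how the two regression models above could be fitted follows; the data file and column names are hypothetical, and the actual software used in the study may differ.

```python
import pandas as pd
import statsmodels.api as sm

# hypothetical survey file: one row per responding start-up firm
df = pd.read_csv("startup_eo_survey.csv")

X = sm.add_constant(df[["innovativeness", "proactiveness", "risk_taking"]])

# Model 1: dependent variable is the increment in sales
model_1 = sm.OLS(df["sales_increment"], X).fit()
# Model 2: dependent variable is the profitability index
model_2 = sm.OLS(df["profitability_index"], X).fit()

print(model_1.summary())  # coefficients, t statistics and overall F value
print(model_2.summary())

# Pearson correlations between EO components and performance variables
print(df[["innovativeness", "proactiveness", "risk_taking",
          "sales_increment", "profitability_index"]].corr())
```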
VI. CONCLUSION
This study aims to create awareness and knowledge among start-up entrepreneurs to adopt entrepreneurship. It was clear from both the correlation analysis and the multiple linear regression models that EO had a positive and significant impact on business performance. In both regression models, the relationship between business performance, as measured by the increment in sales and the profitability index, and EO (innovativeness, proactiveness and risk-taking items) was positive and significant. | 2020-05-13T11:50:21.159Z | 2020-04-30T00:00:00.000 | {
"year": 2020,
"sha1": "99526bb06106d5d6acf07c12b7cf4f8742e46c8f",
"oa_license": "CCBY",
"oa_url": "https://zenodo.org/record/3840901/files/IJEMR2020100218.pdf",
"oa_status": "GREEN",
"pdf_src": "ElsevierPush",
"pdf_hash": "ad4dc08f2770485aac6a79ec4028a39812f4621d",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
228930411 | pes2o/s2orc | v3-fos-license | War, Sin and Justice in the Novel “ The Quiet American ”
This research paper focuses on one of the literary works of the 20th century, a work by one of the most famous English novelists, Graham Greene: "The Quiet American". In this novel, the writer mirrored the war in Vietnam. The key features of this novel are touching and frightening, seen only from the narrator's point of view during the Vietnam War. The major characters are tangled in a love triangle that leads to death and sorrow.
Literature Review
"Modernism is an area of literary research particularly subject to contest and revision. Most studies converge on the period between 1890 and 1940 in their attempts to date modernism, but there is wide variation, with some accounts stretching this time frame back to the early 19th century and others forward to the beginning of the 21st century" (Hobson, 2017).
Greene belongs to the period of modernity. Modernism has its own features and characteristics. The term "modernism", with reference to literature, became known in English literature immediately after the First World War, used to describe experimental literature, including important works by T.S. Eliot, James Joyce, Ezra Pound and Virginia Woolf. Other related terms are used, such as "modernity" in philosophy, "modernization" in sociology, "modern music", "modern art" and "modern history" (Lewis 2007, p. 17). "The moderns are therefore those who start off by thinking that human nature, then the relationship of the individual to the environment, forever being metamorphosized … this change, recorded by the seismographic, senses of the artist, has also to change all relations within arrangements of words or marks on canvas which make a poem or novel, or a painting" (Matz 2004, p. 8). The modern novel experiments with everything, because modernity creates opportunity. Those who created in this modern age sought to flee traditions and create something new. The modern novel attempts to restore the sense of integrity or beauty in the modern world (Matz 2004, p. 8).
Modernism shows that everything is restarted anew, seeking ways and methods to break with tradition and to start with something new, something modern, in the novel as in all other forms of creation. Experimentation and individualism left the past behind, and modernism set itself in motion. Modernism shows a strong sense of time and of the relations between gender and other things. Writers embraced this modern point of view very well. The modernist era was a reaction against Victorian culture and aesthetics, which had prevailed in the nineteenth century. This literary period abandoned traditions and continued with something new. In the world of art, modernism marked the beginning of the distinction between high art and low art. In modernist literature, poets captured the new spirit of the time and expanded their vocation. What makes Graham Greene different from other modernists is his different way of creating. Greene went through some changes, turning from artificial prose to a natural prose based on clarity and precision. Greene expresses what is most important to the writer: the prose. According to him, "The danger to the novelist is that he should write with his mind on the subjective response of his readers instead of being concerned only to express his idea with the greatest accuracy and the greatest economy" (Diemert 1996, p. 42). "The novelist must tell the truth as he sees it" (Diemert 1996, p. 42).
Methodology
This research used the historical, analytical and critical method. The paper describes the war between the French and the Vietnamese, a war with a long history. Greene wrote this novel while he was in Vietnam as a journalist, and it is based on real events of the war in Vietnam. It is a long story, which tells of many things that happened there. I have analysed the novel's central themes: war, sin and justice. I analysed the war that took place there many years ago. There were many sins in this war: innocent people were killed, and a bomb exploded in the centre of Saigon where many innocent people died. There was no justice at all in this war; it was very unjust, and people were killed without any guilt. This novel was widely criticized on the grounds that the author, Graham Greene, spoke badly of Americans. In fact, it was later observed that Greene did not speak badly of the American people but of the American government, which was fighting only for its own interests.
The Modern Novel
The modern novel begins with Modernism. The modern novel developed new ways of creating, new ways of thinking, and new ideas. It presents new forms of critical thought and a new style of creation. Like all other types, the modern novel has its own characteristics and features. It passes from traditional ideas to something new, a new way of creating; through it begins a new form of creation, different and real. Modern means something new, something more; it means everything. "The modern novel tries for something new, in the face of modernity, with a 'pattern of hope' for redemption" (Matz, p. 10).
The first modern novel appeared in 1740 and was a modern English development. "There is nothing offensive to the dignity of literary history in acknowledging that the most prominent piece of work effected by literature in England during the eighteenth century is the creation -for it can be styled nothing less -of the modern novel" (Edmund, 1740). After modern literature began, the novel became very important. The modern novel is distinctive because it tells of reality, of how life really is. It describes the positive as well as the negative sides of life, the shameless and the beautiful. The modern novel depicts both rich and poor people. As a result, we can say that the modern novel has a serious story, which means it shows only the truth. It is a psychological novel with a unique style of writing. In it everything is real, and this is very important and makes it rewarding to read. From this novel we can learn a lot, because it deals with topics interesting and important both for the time and for the people. It is a psychological novel, as claimed by novelists such as Henry James, Joseph Conrad, James Joyce and Virginia Woolf. These novelists found that human consciousness has deep layers. From all this, we can say that the modern novel marked a special and unique turn in English literature (Earl, 2016).
It is important to say that in this novel different topics were taken from reality. The life of people, their way of life, and everything that has to do with life and truth is reflected. It started from a different approach to life and free thinking, and its treatment of themes was different. Creators in the period of modernity enjoyed revolutions because they started with something very important and interesting. There was a great rise in literature with the beginning of the modern novel, a new development. Now everything was filled with modern ideas, different topics, and other approaches. In the modern novel, themes, motifs and symbols are of great importance, but more important still are the characters and the way in which those characters are depicted. Modernists try to describe characters in the most real and original way. "Modernist subverted and reworked the coherent ego of realist characters with blurring and fragmentation that modified a variety of master narratives" (Kern). Graham Greene died on April 3, 1991, and was buried in Corsier-sur-Vevey Cemetery (Kurian 2010, pp. 342-343). Greene had a dynamic life in which he never stopped creating. Wherever he went, everyone discovered his talent. He tended to create very special works, with special themes and interesting characters. He came from an educated family, and he tried to live up to them.
Works
His creative work is divided into novels, short stories, travel writings, plays, autobiographies, essays, and more. His major works are considered to be "Brighton Rock" (1938), "The Power and the Glory" (1940), "The Heart of the Matter" (1948), "The Third Man" (1949), "The End of the Affair" (1951), "The Quiet American" (1955), "A Burnt Out Case" (1961), and "Ways of Escape" (autobiography; 1980). Greene's work expressed his love-hate relationship with the Catholic faith, as in "Brighton Rock", "The Heart of the Matter", "The Power and the Glory" and "The End of the Affair". His personal battles with God can be identified in his works, including "The Heart of the Matter" and "The Quiet American" (Kurian 2010, pp. 342-343). The world of the characters in Greene's works is distinctive, and the tone of his works highlights the presence of evil as a visible force. Despite the pessimistic tone of many of his works, Greene was one of the most read British novelists of the twentieth century. Throughout his career, Greene was fascinated by film. Greene also published "Nineteen Stories" (1947), "The Living Room" (1952), and "The Potting Shed" (1957). His collected essays appeared in 1969, followed by "A Sort of Life" (1971). The unfinished manuscript of "The Empty Chair", a murder mystery that Greene began to write in 1926, was discovered in 2008. Greene's writing style is characterized as direct and clear; he had the ability to describe the internal struggles his characters face, as well as their inner conflicts (Kurian 2010, p. 343). Greene was also a literary critic of his time, who held that literature could be saved by adding religious elements to stories. Greene depicts a fierce battle between good and evil, right and wrong, reality and sin. He believed that the consequences of evil were as real as the benefits of being good. Greene was the first novelist since Henry James. Greene received praise and criticism from Catholic scholars and writers. Greene's main concern is the moral and spiritual struggle within individuals. His novels depict a Europe wracked by the Depression and heading towards war, while his later novels describe wars, revolutions and political turmoil. Greene was one of the most romantic British novelists of the twentieth century. His popular books revolve around crime and intrigue.
Information about "The Quiet American"
The novel "The Quiet American" is one of the most popular works of Graham Greene and a novel of great importance. Even today it is read by many readers and is highly rated. It was written in the mid-1950s and was translated into Albanian by Betim Muço in 2006. Events take place in Vietnam at the height of the conflict with the former French colonizers, before the US invasion. "With 'The Quiet American', Graham Greene left his explicitly religious novels behind and returned to the political novel" (Donaghy 1986, p. 67). It is a political novel, different from Greene's other works. It was first published in the UK in 1955 and then in the United States in 1956. Greene was inspired to write the novel in 1951, while a war correspondent in Vietnam, and it drew on his experiences during the years 1951-1956.
The novel has three main characters: Fowler, an English reporter, entirely cynical and insistent on remaining neutral; Pyle (the quiet American), a gentleman, but politically naive and inexperienced; and Phuong, Fowler's girlfriend. The entire novel revolves around these three main characters; the other characters serve more to complete the puzzle. Broadly, "The Quiet American" is a novel about an amorous conflict and a political conflict, a formula that is often encountered. These are so intertwined with each other that it is hard to see which dominates (Greene, 1955).
The title of the novel "The Quiet American" refers to Pyle, who is one of the main characters of the novel. Pyle is a 32-yearold American. He is a CIA agent, who works undercover. Pyle is mild and serious. He is an American with an intellectual family background: his father is a known professor. Pyle studied government and social theories; he is very inspired by York Harding's books on his vision regarding the need of a third force in Vietnam, Pyle shares York Harding's opinion on this matter. Pyle differs from other characters in the novel, because he is polite, quiet, educated and thoughtful. Pyle is a serious man and very sincere, he is enthusiastic about his ideals and beliefs. His character is patriotic, he is an idealist American, who believes he can do something positive in a country that he knows little of (Vietnam). Fowler compares Alden Pyle with other Americans who live in the Continental Hotel. "He's a good chap in his way. Serious. Not one of those noisy bastards at the Continental. A Quiet American" (Logan,p. 39). The characters are: Fowler, Pyle, Phuong, Vigot, Helen, Granger, Captain Truoin, Ms. Hei, General "The", Mr. Chou, Mr. Heng, American Economic Attache-Joe, York Harding and Domingues.
Themes, symbols, and characters make this novel as a more appealing and attractive to the reader. In this novel, there are symbols like alcohol and opium, here is talked of Fowler, drinking alcohol and opium. Graham Greene was also known for drinking alcohol and opium. Alcohol and opium help Fowler to be better off not to look at different situations, like when he sees dead people, as well as his separation with Phuong when Pyle gets her. The second symbol is the helmet. This is about the helmet that the Lieutenant offers Fowler when he goes to Phat Diem to report about the war. Although, Fowler as tired of what he sees in the war, he still tries to show what he sees. And the third symbol is The Role of the West. The Role of the West is described by York Harding, where Alden Pyle is inspired to go to Vietnam. Pyle thinks that a "Third Force" is needed but it is very difficult and he fails to do it. In the novel, there are also themes like Vietnam and the West. France fights Vietnam and does not want to withdraw. It is a war, where many innocent people were killed. Vietnam wants to lead its own country and be free. The second theme is Impartiality and Action. Fowler wants to remain neutral in the war. He does not want to be involved in the war and he calls himself a reporter and not a journalist. However, later he is involved in Pyle's death. Fowler understands that due to Pyle, many innocent people are dying because of his naivete. And the third theme is Inevitability of Death. Fowler sees dead people in the war and he is afraid of death. He fears why everything in life is not permanent. Phuong loves him very much, but he still fears that she will leave him. Whatever he does, he wants it to last forever. The three main themes of the novel are: war, sin and justice. It is about war in Vietnam, between the French and the Vietnamese. Vietnam wants to be released and govern its own country. In this novel, there are many sins like the murder of innocent people in the war, then, the murder of Pyle and corruption. There is a great war, where America is struggling to win Vietnam.
And corruption, where General "The" is a corrupt man. The third theme is justice. In this chapter, it is seen that there is no justice, dead people were killed and nobody does anything, then, they kill Pyle and no one finds out who killed him. And General "The", who just blows up and bombs explodingin Saigon, where many people were killed. The themes that are going to be discussed and analyzed are war, sin and justice. The focus will be on these three topics as very important to be addressed and analyzed. The war takes place in Vietnam, between the French and the Vietnamese. In this war, Fowler goes to report for the war that is taking place there. Pyle goes to help through "The Third Force". In this war there are many sins, the innocent people were killed, corruption and the murder of Pyle. There is no justice in this war.
War
According to Davis, the war in the novel is not described properly. Fowler presents the war as something serious, but the killing of women and children is as if the enemy did not exist, which is the communist Vietnamese who is in contact with him. Another important aspect, not only of the Vietnam War but of any war is "how the novel shows that the "truth" about what actually happens depends upon who control the interpretation" (Burns, p. 6). Between 1965 and 1975, there were various names about the Vietnam War. The War was called by different names. Americans called the war "Vietnam War", while Vietnamese called it the "American War". Some others prefer to call the "Second Indochina War". Critics have described the war in Vietnam as "No event in American history is more misunderstood than the Vietnam War" (Nixon, 1985). "The Indochina War" refers to the war of Vietnamese independence against the French colonial rule from the end of the World War II to 1954" (Kutler 1996, p. 241). Indochina's war is called in some names such as "French Indochina War", due to the rule of the French in three nations. It is also called "Franco-Vietnames War", a war between the French and the Vietnamese, and other names. On December 19, 1946, until Americans enter Vietnam with the idea that they know better than the Vietnamese, assuming they know what is best for the people of Vietnam. The novel "The Quiet American" is a political novel about war in Indochina. Here the characters are more like individuals than as representatives of the nations. It is about war between the French and Vietnamese. It is also relevant to the rivalry between the two men in love with the Vietnamese girl ). The novel "The Quiet American" is called a novel against war. Greene describes war as an absurd attempt, orchestrated by cynical politicians, fought by terrified soldiers, blamed, and reported by journalists to capture a big story. The war is associated with the involvement of the United States of America in Vietnam, spying, terrorism, and disregarding the intentions of innocent people, killed as a result of its intervention. Graham Greene was the only one who wrote about war and politics in Vietnam in the novel "The Quiet American". Greene represents the way of life in general but also of political life. The novel refers to the Indochina of the 1950s, the conflict between the French and the Vietnamese, the involvement of the United States of America and three characters (Lawrence 2008, p. 1).
How does war affect the main characters?
"His characters radiate the ethical uncertainty and confusion that comes from living a war-without-end" (Smith, 2004). Fowler from the beginning when he reports as a journalist, until the end of the war, he experiences many things. He lives with Phuong. Then he goes to report war in many places and sees everything. Pyle initially starts well but latter he gets involved into the deaths of many innocent men drowning in Saigon and finally killing them. Phuong from the beginning up to the end remains the same in terms of its decisions guided by others instead of making its own decisions. She trusts the two but finally ends up with Fowler and they live together (Smith, 2004). Fowler reports to the war, while Pyle tries to help in the war, but he is involved in the explosive bomber in Saigon. On the other hand, Phuong does not even deal with what's happening in her own country. Fowler, Pyle and Phuong are the main characters in the novel "The Quiet American". The novel shows a love triangle, Fowler, Pyle and Phuong. Phuong is Fowler's girl, but Pyle falls in love with Phuong and wants to marry her. Fowler wants to marry Phuong, but his wife is religious and does not give him divorce. Phuong abandons Fowler and goes to Pyle but finally Fowler is involved in Pyle's assassination, so Fowler and Phuong live together (Gibson, 2017). Pyle supports the "Third Forces"leader and believes in him a lot (Johnson, 2003). These have a double meaning in the novel but each of them has their end. Fowler speaks, represents, and furthers the rhetoric of the rising anti-colonialism movement of the 1950's. Pyle is prointervention. Phuong is a noticeably voiceless in the novel. Particular importance in the novel "The Quiet American" also has the relationship between the characters. The main characters in the novel are: Fowler, Pyle and Phuong. At the beginning of the novel, their relationships are good but later change, because of jealousy and point of views. Fowler meets Pyle for the first time at the Continental Hotel in Saigon. Fowler was having a drink at the Continental hotel when Pyle came. Pyle is a modest, quiet and serious man, and he does not criticize anybody. Pyle asks Fowler if he has read York Harding. Pyle says that Harding has written a book titled "The Progress of Red China" and is a very deep book (Greene 1955, p.25). Pyle meets Phuong for the first time at the Continental Hotel, just two weeks after arriving in Saigon. Phuong was there with Fowler and Pyle came to their table, inviting them to his desk. They talk about war and various things. Then there's also a Granger at their desk (Greene 1955, p.36). Pyle welcomes Phuong to dance together, she acknowledges and they begin dancing. In any case, Pyle does not realize how to dance while Phuong is the best player in the Grand World.
Fowler sits at the table and watches them dance, while Phuong's sister, Miss Hei, comes over and asks who the newcomer is. Fowler says he is called Pyle and that he is a member of the US economic mission. Phuong and Pyle return to the table, where Fowler introduces Pyle to Phuong's sister. When she leaves, only the three of them remain, Fowler, Pyle, and Phuong, and the night continues with the performers (Greene 1955, pp. 43-50). Pyle later comes to Fowler at Phat Diem to tell him that he wants to marry Phuong. Fowler asks whether it cannot wait until he returns to Saigon the following week, but Pyle replies that Fowler could be killed there, and that it would not have been fair to say nothing. Their conversation drags on until the mortars begin; Fowler says the troops are trying to stop an attack and that there will be no rest that night (Greene 1955, pp. 62-67). Fowler later arranges a meeting with Pyle at his home at six o'clock. He tells Phuong that Pyle is coming, but she wants to go to her sister's; she is not interested in Pyle and says that if he wanted to see her, he could have asked her. Phuong goes to her sister. Then Pyle arrives at Fowler's apartment together with his dog, Duke. He goes inside and sees that Phuong is not there, and Fowler explains that she has gone to her sister's. Fowler asks Pyle about the plastic explosive. Then Phuong returns to the apartment and greets Pyle and Fowler, and Fowler tells her that Pyle wants to talk to her. Pyle speaks to Phuong of the particular respect and love he has felt for her since they danced together and asks her to marry him. Phuong refuses, and Pyle leaves Fowler's apartment (Greene 1955, pp. 79-87). Later, Fowler tells Pyle that something has struck him but that it is nothing serious; in fact he is badly hurt and lies helpless in great pain from his left leg. Pyle calls out to him and asks whether he is hurt, and Fowler admits that he is. Pyle supports Fowler against his chest, though Fowler says he does not want to be moved; Pyle answers that they must either get help or both be killed. It is very cold. Pyle says he is doing this for Phuong, and Fowler retorts that Pyle helped him only because of her: "If I were you, I would have left me." But Pyle will not abandon him, insisting that he knows Fowler better than Fowler knows himself (Greene 1955, p. 124). Up to this point they have shared some good moments. Then their relationship begins to break down, because of jealousy and because of Pyle's involvement in the bombing in the center of Saigon. Out of jealousy of Pyle, Fowler begins to resent everything connected with the Americans, and the bombing in the center of Saigon, in which Pyle was involved, makes him furious. Fowler goes to meet Mr. Heng, who says that Pyle was involved in the bombing and wants the news published in the newspaper, but Fowler does not accept it. Then the two agree that Pyle must be killed, and Heng tells Fowler to invite Pyle to dinner at the "Vieux Moulin" by Dakou's bridge (Greene 1955, pp. 193-196). Pyle dies, and after his death Vigot, the French police officer, questions Fowler about it, because he suspects him. Vigot asks Fowler many questions, since Fowler knew Pyle well.
He asks about Pyle's life: who Pyle was, where he came from, what he was doing, and where he had worked. Pyle's death remains unsolved. At the end of the novel, Fowler receives a letter from his wife in which she agrees to divorce him. Phuong is very happy and goes to tell her sister the good news. Fowler, though glad as well, reflects: "Everything had gone right with me since he had died, but how I wished there existed someone to whom I could say that I was sorry" (Greene 1955, p. 212).
All the characters have their importance, but these three give the novel its overall image. Comparing their importance to "The Quiet American", Phuong is the passive one. She represents Vietnam, serving as a stand-in for a silent, passive land over which foreign powers compete. The novel captures a tense historical moment in which several foreign countries had a stake in Vietnam's future: the French wished to keep their colonies, Vietnam wanted independence, and the Americans wanted to stop the spread of communism and install democracy. Ordinary Vietnamese people had no control, and so they became passive. Phuong is a flat character, shaped by her sister Miss Hei and by the competition between Fowler and Pyle, and she plays a secondary role in the novel. Compared to Fowler and Pyle, the figure of Phuong matters to the novel chiefly as a promoter of events: she is the object of the two men's desire, and because of her Fowler and Pyle fall into conflict. Pyle decides to tell Fowler that he loves Phuong and wants to take her; he resolves to marry her, believing that this would be far better for Phuong than remaining Fowler's lover (Logan, p. 40).
Sin
Sin is another theme of "The Quiet American". "Sin is committed every day by people even without realizing it. We even commit the same sins, but the responses are always different. Literature depicts this psychological fact very well through the realistic trials and options the characters face" (Strickland, 2014). "Only those who recognize the omnipresence of sin, recognizing first of all that they themselves number among the sinful, can possibly anticipate the moral snares inherent in the exercise of power" (Bacevich, 2009). Fowler, lost in a world of darkness, loses his way to love, humanity, and religion. One of the sins in "The Quiet American" concerns religion: the narrator struggles with Catholicism because it does not allow him to divorce and remarry. Fowler falls in love with a young Vietnamese woman, Phuong, and begins a love affair with her while still married to another woman. The sin of unfaithfulness torments Fowler, yet the love he feels for young Phuong pushes him to ask for a divorce and so to sin. His hands are tied: he is married, and his wife does not believe in divorce, because she is Catholic. In a dark world, Fowler sees Phuong as a flower in the middle of a world full of pain, but that flower, with her youth and purity, is being taken from him because of the marital situation that pushed Phuong toward Pyle (Johnson, 2003). Then there is the murder of Pyle. Pyle comes to Vietnam with ideas drawn from York Harding, meaning to do something good for the country. He befriends Fowler and falls in love with Fowler's mistress, Phuong. Pyle struggles to do something for Vietnam while also trying to take Phuong as his wife, convinced that Fowler is not right for her. Pyle also cooperates with General "The", and the bombs that explode in Saigon are the fruit of their cooperation. When Fowler discovers that Pyle is in fact doing great harm, he decides, together with Heng, to have Pyle killed, so that no more innocent people will die and so that he can live quietly and happily with Phuong. Though the French policeman Vigot tries to find out who killed Pyle, he cannot; Fowler pretends to know nothing about Pyle's death. Phuong never learns who killed Pyle, though she misses him, knowing he was a good man (West, 1991).
Fowler sees people killed in the war; he sees terror. Wherever he goes, he is horrified by what he sees: women and children crying, innocent people dead. He also attends press conferences, trying to learn the reasons for the killing and who has actually won.
Each side believes it has won; no one ever admits a loss. At the press conferences in Hanoi, Fowler wants to know who has won and the exact numbers of victories and losses. In this war the enemy hardly seems to exist, for there are victims on every side, and nobody counts how many women have lost their children or how many children have lost their mothers. Fowler sees children without legs, children without mothers, bodies without heads, people weeping for their lost families. It is a merciless war in which no one knows who will win, because the losses on every side are enormous. The people's only hope is the unification of Vietnam, so that they may live freely in their own country and rule it themselves. In the end the French withdraw, Vietnam is united, and the people are free and at peace. Like any other people, they want to live freely, like everyone else, governed not by foreign leaders but by their own (Burns, p. 13).
Corruption
"Corruption is a constant in the society and occurs in all civilizations, it has many different shapes as well as many various effects, both on the economy and the society at large" (Šumah, 2018). Pyle trusts General "The" a lot. Pyle is convinced that the goals of General "The" are adapted to the purposes of America. So, General "The" organizes an attack where many innocent people are killed in the center of Saigon (Bacevich, 1955). The corrupt military army ran by general "The" who armed his army with weapons of States, which consequently results in a series of terrorist bombings in Saigon, Vietnam. In these bombings, a number of innocent people die. It's a difficult situation, a rigorous war where people are killed intentionally and unintentionally. The lack of justice is a very broad topic where justice is seen when one is killed or disappears. If you can call justice killing Pyle, where innocent people are killed, then his death is justice. Lack of justice and parody leaves a little place for him to see justice (Greene, 1955).
Fowler goes north to see how the war is unfolding there, traveling with a troop boat from Nam Dinh. Fowler knew Phat Diem well. All the bombardments and explosions had left the place devastated and desolate. There was a church in which many people who had fled their homes had taken shelter, feeling safer there.
Later, Fowler leaves the church and sets out with a patrol led by the lieutenant to observe the terrain in the other villages. They walk on as mortar shells pass over their heads and burst beyond them, until they are stopped by a canal packed full of corpses. Still they press on, trying to reach the other side. In a narrow channel they find a woman and a little boy, dead. They return to the farm, but news comes over the radio, and the lieutenant says the village will be bombed. They get up and start back, picking their way again among the corpses and filing past the church. The planes come over and the bombing begins behind them. Darkness has fallen by the time they reach the staff headquarters. It is very cold, and they take a drink to warm up. The colonel asks Fowler whether he has a gun with him; when Fowler says no, the colonel gives him one and tells him to keep it under his pillow. At about half past three, the colonel says, the mortars will begin.
Their conversation is accompanied by whisky, and the colonel remarks that this is nothing compared with what is happening a hundred kilometers away at Hoa Binh, where a great battle is under way. Afterwards they all fall asleep. Fowler dreams of Pyle dancing, holding an invisible partner; he wakes from the dream with his hand on the revolver he has been given. Pyle asks whether he may sleep there, and when Fowler asks where he got the gun, Pyle says someone gave it to him (Greene 1955, pp. 51-62). "The Quiet American" contains the sins of love and murder. Religion does not allow a man like Fowler to take Phuong as his wife: Fowler is married, and his wife does not believe in divorce, since she is Catholic. And there is the murder of Pyle, condemned unjustly in the sense that he is naive rather than guilty.
At first the relationship between Fowler and Pyle is good, but later their friendship collapses because of Phuong. Pyle is devoted to Phuong and wants to take her away, and the rivalry ends only with Pyle's death. Pyle is straightforward with Fowler, telling him how much he loves Phuong and that he will do his best to make her happy. Sin here is also corruption: a militia corrupted by General "The", who armed his forces with American weapons, bringing about a series of terrorist bombings in Saigon in which many innocent people die. It is a grim situation, a harsh war in which people are shot dead, and deliberately.
Fowler is an agnostic and does not believe in God, yet wherever he goes in Vietnam on the various feast days, he is deeply struck. He does not accept that the Catholic religion forbids divorce. Fowler and Pyle take part in the festivities of the Caodaist festival celebrated at Tanyin; every year the festival was held at Tanyin, eighteen miles to the northwest of Saigon. The Caodaist temples could attract the attention of foreigners in every town. At these celebrations the Pope welcomed members of the government, the diplomatic corps, and others. On the way, Fowler went into the cathedral and looked at everything there, a Christ, a Buddha. Fowler had never wanted to believe; for him, the reporter's job is to expose and record (Greene 1955, pp. 92-98). Fowler writes a letter to his wife, Helen, asking for a divorce.
Fowler tells Helen that he does not expect her to grant the divorce at once; she should think it over and then send her answer. He also tells her that he has found another woman in Vietnam who is waiting for him. Fowler tells Phuong that he is seeking a divorce from his wife so that he can then marry her. Phuong is delighted and says she wants to go with him to England, to London with its underground and double-decker buses (Greene 1955, pp. 89-90).
Eventually the answer comes from Fowler's wife. Fowler asks Phuong whether there is any mail, and Phuong says that yes, a letter has come. She brings it to him, and he sees the opening "Dear Thomas" and the closing "With love, Helen". Phuong asks whether the letter is from his wife, and Fowler says yes. He begins to read; Helen has written about their past and about the affairs Fowler has had with other women. Finally she says, "I don't believe in divorce; my religion forbids it, and so the answer, Thomas, is no, no" (Greene 1955, p. 133).
At last Helen, Fowler's wife, consents to give him a divorce. At the end of the novel, Fowler and Phuong receive the good news: Helen has sent Fowler a telegram agreeing to the divorce. In it she writes: "Have thought over your letter. Stop. Am acting irrationally as you hoped. Stop. Have told my lawyer to start divorce proceedings. Stop. God bless you. Affectionately, Helen" (Greene 1955, p. 211).
Religion
"Religion doesn't play a significant role in "The Quiet American" (Miller, 2004). "Religion has always been an integral part of the literary tradition: many canonical and non-canonical texts engage extensively with religious ideas, and the development of English Literature as a professional discipline began with an explicit consideration of the relationship between religion and literature" (Knight, 2009). Catholic religion has its very own standards. Catholics trust marriage comes as a blessing from God's hand. Catholic religion has these key components: Marriage joins a couple in loyal and shared loves Marriage opens a couple to giving life Marriage is an approach to react to God's call to blessedness Marriage calls the couple to be an indication of Christ's adoration on the planet.
In marriage, the two become one flesh in a union joined by God (Mark 10:8). Jesus speaks of divorce: "Therefore what God has joined together, let no one separate" (Mark 10:9). The Catholic Church does not allow divorce for valid sacramental marriages; according to the Church, marriage goes hand in hand with the relationship with God.
God is fully faithful to the relationship of those who choose to marry, and they are called to the same faithfulness. God does not sanction the breaking of that bond: He hates divorce (Mal 2:16) and allows it only in the case of unfaithfulness by a spouse (Matthew 19:9). When disloyalty or other sins have disrupted a marriage, it is God's desire that both partners repent and be reconciled (Luke 17:3-4; 1 Corinthians 7:10-11), and refusal to forgive is itself sin (Matt 6:15). Whoever divorces his wife and marries another commits adultery, and he who marries a woman divorced from her husband commits adultery (Luke 16:18). God does not permit a divorced person to remarry: the Bible says that the wife should not separate from her husband and the husband should not divorce his wife (1 Corinthians 7:10-11 ESV).
No one reveals who killed Pyle. That night, Fowler waits for him at the "Vieux Moulin" restaurant near Dakou's bridge; it is twelve o'clock and Pyle has not come. Fowler returns home to find Phuong there, waiting for Pyle to return. Vigot, the French policeman, asks Fowler some questions about Pyle. Fowler says that all he knows is that Pyle was thirty-two years old, employed in the economic aid mission, of American nationality. "You look like his friend," Vigot remarks. Fowler describes his evening: he had been at the Continental hotel at six o'clock, then at half past eight went on to the "Vieux Moulin" and waited.
Pyle did not come. Fowler then asks where they found him, and is told that Pyle was found in the water at Dakou's bridge. Vigot asks Fowler whether he can identify the body, and they go down the stairs to the basement where Pyle's corpse lies. Pyle has wounds in the chest, but Vigot says they were not what killed him: he was pushed into the mud and drowned, for mud was found in his lungs. Fowler then leaves with Phuong for the apartment (Greene 1955, pp. 11-24). General "The" is a Vietnamese militant leader. Pyle sees him as the leader of the "Third Force", the head of a nationalist coalition that he believes will bring democracy to Vietnam and govern the country, neither the French nor the communists. Pyle thinks General "The"'s intentions fit America's goals, but this turns out to be a misjudgment. General "The" uses plastic explosives secretly supplied by Pyle. Pyle believes the campaign will target military facilities, but in fact General "The" organizes a savage attack on civilians in the center of Saigon. "We are the old colonial peoples, Pyle, but we've learned a bit of reality, we've learned not to play with matches. This Third Force, it comes out of a book, that's all. General 'The' is only a bandit with a few thousand men; he is not a national democracy" (Greene 1955, p. 175). Pyle protests that he thought Fowler was not on anyone's side, and Fowler replies that he is not. Fowler then turns on Pyle, saying that he put General "The" on the map and invented the "Third Force" and national democracy; he tells Pyle that the Third Force and national democracy are all over his right shoe, and that he should go home to Phuong and tell her about his heroic deed. Pyle does not accept what Fowler says: General "The" would not have done this (the Saigon bombing), someone must have deceived him; according to Pyle, the communists may have done it (Greene 1955, p. 182).
Moral dilemmas
"Moral dilemmas set a challenge for ethical theory. They are situations where agents seem to be under an obligation both to do, and to refrain from doing, a specific act" (Statman, 1995). As an important topic in the novel "The Quiet American" has moral dilemmas. They represent human values and provide an interesting source of thought. Moral dilemmas are when Fowler does not want to be involved anywhere. At first it is seen that he only reports in the war, reports of murdered people and all what is happening in the war. He tries to look at his job and not to be involved it anywhere. Later, we see his change. He sees that Pyle with his naivete tries to do good but on the other he does badly, because he is jealous of Pyle that he is getting his wife. Also, Pyle is a bomber in the center of Saigon. Fowler is terrified of what he sees in the center of Saigon, where many people remain dead, with limbs torn, many people crying for their dead children. He began to discover more about Pyle, about what he is does and how he collaborates with. All these Fowler reveals and puts together Mr. Heng to kill Pyle. According to them, Pyle is considered dangerous in the sense of naivety because he did not know what he was doing, good or bad. First of all, he did not know Vietnam as he thought it was not easy at all. He thinks he is doing well, but he does not know that through his actions is doing just badly.
His naivety, and his ignorance of so many things, cost him dearly; because of his naivete he comes to his death (Greene, 1955). Greene is preoccupied with dilemmas, religious, political, and personal. Fowler does not believe in religion; he is an atheist. Yet he is frightened of death and of being left alone once Phuong goes to live with Pyle. Fear makes him reflect on his life: seeing himself as an aging man, he decides it is important to have someone to live with. As for the political dilemma, Fowler deals with French and Vietnamese politics, reporting on the war between them, a bad war in which he sees only bad things, with both the French and the Vietnamese trying to show their strength. As for the personal dilemma, Fowler tries to divorce his wife and live happily with Phuong; in the last years of his life he wants to live contentedly, to have no troubles, and to put his life in order. Greene addresses each of these issues with care, and his way of handling the dilemmas is clear. All of this makes the work interesting and accessible to everyone (Stephen 1986, p. 316).
Critics see "The Quiet American" as a novel that has much to offer, even beyond the familiar features of Greene's work, such as Catholicism. It cannot be considered only a political novel; it can also be read as a novel about morals and moral issues. These moral issues need not be tied closely to Catholicism: they embody human values and provide a rich source of reflection. Morality and the Catholic religion each represent something distinct, each unique in its own way, and keeping them apart allows the novel to offer more. All of this makes the novel interesting, with compelling topics within it. In its themes it stands apart from other novels; this is what makes it the stronger (Radecki 2015, p. 15).
Literary Criticism on Graham Greene's "The Quiet American"
Graham Greene's books are of extraordinary significance and address diverse topics. "The Quiet American is of a mixed genre because Greene blends the techniques of his earlier novels together to produce a work of political narrative, full of action, with psychological introspection, that has an historical background and Catholic sensibility in its confessional narration" (Reshetova, p. 10).
The epic "The Quiet American" is an essential novel. The epic "The Quiet American" was evaluated as an enemy of American tale. After the novel "The Quiet American" distributed, it energized and debate in the Americans since Greene was charging the Americans and the US legislature of underwriting fear-based oppression. Per users in the United States and numerous per users by and large discovered Greene utilizing Pyle's character to reprimand Americans. As indicated by them, Greene through Pyle presented Americans as youthful, insensible and unsafe individuals. Greene was additionally condemned for America's way to deal with Vietnam. America does not encourage any nation, nobody, but rather it intercedes for its very own advantages.
Where it has no interest, it does not interfere. True to the novel's title, America is the principal concern, since "America is a vulgarly materialistic and 'innocent' country with no comprehension of other peoples". After all these reactions, Greene answered that "he was not against the American people but against the dangerous policies of the American government" (Logan, p. 37). The US government was not helping but only causing more harm; in the mid-50s Greene stated, "I don't consider myself anti-American any more than I consider myself anti-Romanian or anti-Italian" (Roiphe, 2003). Those who disliked the novel called it various names; A.J. Liebling, for one, called it "a nasty little plastic bomb" (Roiphe, 2003). "Although Greene claims in a prefatory note, 'this is a story and not a piece of history,' he wanted it both ways" (Ripatrazone, 2018). Greene's anti-Americanism is more subtle than is often thought. This is also evident in the film adaptation, and European anti-Americanism has changed over the last 50 years, which means that "The Quiet American" no longer reads as harshly as it once did. Today Greene's view of Americans seems dated, and readers no longer receive it with the conviction they once did.
All these clarifications by Greene show that "The Quiet American" is distinctive in every respect: in its treatment of the war, its characters, its messages, and much else. The novel stands apart from Greene's other books, and Greene himself stood apart from other writers as the first to write about the war in Vietnam. He wrote the novel masterfully, presenting real events, and the book is also distinguished by the way it is written.
Results and Conclusions
The novel "The Quiet American" written by Graham Greene, known for his great contribution in English literature, has great importance and value in English literature. It is evident that this work is a masterpiece of its kind. The novel reflects on the pain that hunts the human being during life. Moreover, it mirrors the situation happening during the Vietnam War. It is a sensational statement reminding people that humanity will overcome all hardship or make a man commit sins such as murder for what he believes or feels. "The Quiet American" is a novel of a special importance. In addition to others, this novel was criticized but also praised. According to artistic aspect, this novel has multidimensional artistic values.
From the analysis it can be concluded that "The Quiet American" is a unique book of very high value in English literature. This research paper is divided into seven chapters. The first chapter discussed modernism as a great period of literature. The second treated Graham Greene, his life and work. The third discussed the novel "The Quiet American" itself, whose central conflict concerns love and war. The fourth chapter treated the war, one of the main topics, the war between the French and the Vietnamese, with America intervening in the hope of calming the country; after much killing, an agreement is finally reached, the war stops, and the country regains the peace it had before, the fruit of great efforts toward a united and peaceful Vietnam. The fifth chapter discussed sin: as in any war, the sins here come from people who act both intentionally and inadvertently.
The sixth chapter discussed justice, a rather delicate subject: given the circumstances and events, there is no justice, because innocent people were killed. The seventh and final chapter discussed literary criticism of the novel. These themes, war, sin, and justice, make the novel more attractive and appreciated; each of them is unique in the novel. They matter greatly: the war takes place between the French and the Vietnamese, sins are committed in that war, and as for justice, the novel shows there is none for its people. These themes make the novel more special and more important to the reader, and they are of great importance because Greene was among the first to address them. From the analysis made here, it was found that the war took place between the French and the Vietnamese, and that the involvement of America did not make it any easier.
The novel does not show how the war ended. Pyle tries to help but unfortunately only causes damage, while Phuong, even though the war takes place in her own country, is unmoved by it; she is guided only by others, never making decisions for herself. There are many sins in this war: the betrayal of a wife, corruption, the murder of innocent people, and Pyle's murder. As the novel shows, Fowler is married yet keeps a lover in Vietnam; he wants to divorce his wife, but she refuses because the Catholic religion does not allow separation. He asks several times, and at the end of the novel his wife agrees, leaving him free to marry Phuong and live with her. Another sin is the killing of innocent people in the war: Fowler goes to report on it and is horrified by the dead he sees, and there is also the bomb explosion in the center of Saigon, where innocent people are killed. As for corruption, General "The" is corrupt, while Pyle believes he is doing good for the country. The final sin is Pyle's assassination: seeing that many people are dying because of Pyle's naivety, Fowler decides, together with Mr. Heng, to have him killed. He should not have had Pyle killed; he should have informed the authorities and left the case to them. | 2020-12-14T21:03:29.459Z | 2020-10-30T00:00:00.000 | {
"year": 2020,
"sha1": "bf448d66e7e2a979dd3e66ffc72c231cfd136a86",
"oa_license": null,
"oa_url": "https://al-kindipublisher.com/index.php/jeltal/article/download/671/566",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "e3da9e8c7cdafaeb271673e362fed996ff72503d",
"s2fieldsofstudy": [
"Art",
"Education"
],
"extfieldsofstudy": [
"History"
]
} |
199641643 | pes2o/s2orc | v3-fos-license | Integrated Pest Management of Longan (Sapindales: Sapindaceae) in Vietnam
This paper describes the current state of pests and diseases of longan (Dimocarpus longan Lour.) and their management in Vietnam. Longan is the third most cultivated fruit crop and the second major fruit crop exported from Vietnam. Brief descriptions are given of the arthropod pests Eriophyes dimocarpi Kuang (Acari: Eriophyidae), Conogethes punctiferalis Guenée (Lepidoptera: Crambidae), Conopomorpha sinensis Bradley (Lepidoptera: Gracillariidae), Conopomorpha litchiella Bradley (Lepidoptera: Gracillariidae), Tessaratoma papillosa Drury (Hemiptera: Tessaratomidae), Eudocima phalonia L. comb. (Lepidoptera: Erebidae), the oriental fruit fly Bactrocera dorsalis Hendel (Diptera: Tephritidae), Planococcus lilacinus Cockerell (Hemiptera: Pseudococcidae), Drepanococcus chiton Green (Hemiptera: Coccidae), and Cornegenapsylla sinica Yang & Li (Hemiptera: Psyllidae), and of the fungal diseases Phytophthora palmivora Butler (Peronosporales: Peronosporaceae), Colletotrichum gloeosporioides (Penz.) Penz. & Sacc. (Incertae sedis: Glomerellaceae), and Ceratocystis fimbriata Ellis & Halsted (Microascales: Ceratocystidaceae) affecting longan. The longan witches' broom syndrome, a major factor causing 50–86% annual crop loss in Vietnam, has been considered the primary constraint on production. The causative agent of this syndrome has been identified as the eriophyid mite E. dimocarpi. The deployment of Integrated Pest Management strategies for longan production in Vietnam is outlined.
Eriophyes dimocarpi (Kuang) (Acari: Eriophyidae)
It occurs in China, Hong Kong, Taiwan, Thailand, and Vietnam (So and Zee 1972, Menzel et al. 1989, Tri 2004). Females lay white, spherical eggs on developing buds, which hatch in about 5.10 ± 1.37 d. The mite has two nymphal instars lasting a combined 6.4 ± 0.79 d. First-instar nymphs are white, 0.06 ± 0.006 mm long, while second-instar nymphs are white, 0.09 ± 0.01 mm long; both have two pairs of legs. Adults are white, 0.12 ± 0.008 mm long, and live 2.2 ± 0.52 d (Fig. 3). The average life cycle is completed in 13.70 ± 2.16 d. The mites are abundant from November to May, coinciding with the dry season. The species has been reported to be associated with longan witches' broom syndrome (LgWB) (He et al. 2001, Hanh et al. 2012a, Hoat et al. 2017).
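As a quick consistency check of the figures above, the reported mean life-cycle length should equal the sum of the mean stage durations. A minimal sketch (ours, for illustration only; the variable names are not from the source, and the source's uncertainty terms are not propagated here):

```python
# Sum the mean developmental stage durations of E. dimocarpi (days).
# Values are the means quoted above; the +/- terms are omitted.
stages_d = {
    "egg": 5.10,     # incubation
    "nymphs": 6.40,  # two nymphal instars combined
    "adult": 2.20,   # adult longevity
}

total_d = sum(stages_d.values())
print(f"Summed stage durations: {total_d:.2f} d (reported mean: 13.70 d)")
```

The stage means indeed sum to 13.70 d, matching the reported mean life cycle.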
Witches' broom on longan was first reported in China in 1941 and later in Thailand, Hong Kong, and Taiwan (So and Zee 1972, Menzel et al. 1989) and Cambodia (R. Muniappan, personal observation, 2018). In Vietnam, it appeared in the north in 1999, with an apparent introduction from China, and in the south in 2001 (Tri 2004). It is considered one of the important constraints on longan and rambutan production in Vietnam. Trees affected by LgWB have short vegetative shoots with small leaves showing curling of the leaf margins, shortened inflorescences with malformed flowers, and panicles poorly filled with small fruits (So and Zee 1972, Kuang 1997, Zhang and Zhang 1999). The crop losses caused by LgWB vary from 50 to 86% in the fields (Zhang and Zhang 1999, Chen and Xu 2001, Hoat et al. 2017). Studies conducted to identify the causative organism of LgWB in China, Thailand, and Vietnam have attributed it to a virus (So and Zee 1972, Ye et al. 1990, Chen et al. 1996, Chen and Xu 2001), a phytoplasma (Visitpanich et al. 1996, Hoa et al. 2012), and the association of E. dimocarpi (He et al. 2001, Hoat et al. 2017). One of the objectives of the USAID-funded IPM Innovation Lab project instituted at the Southern Horticultural Research Institute (SOFRI) in Vietnam in 2015 was to identify the causative organism of LgWB of longan. To verify the presence of phytoplasma or virus, samples of affected longan shoots from Southern Vietnam were sent to the laboratory of Dr. Robert Gilbertson, University of California-Davis, and to Dr. Rayapati Naidu, Washington State University, Prosser. The diagnostic efforts at these two institutions to find phytoplasma or virus in the samples proved negative. Studies carried out by SOFRI in collaboration with the Molecular Biology Divisions of Can Tho University and Nong Lam University, the Electron Microscope Department of the Institute of Hygiene and Epidemiology in Hanoi, and the Central Analysis Laboratory of the National Science University at Ho Chi Minh City also failed to identify phytoplasma, bacteria, fungi, or virus in LgWB samples from Southern Vietnam (Hanh et al. 2012a,b).
It is known that eriophyid mites cause necrosis, enations, fasciation, various galls, and witches' broom on several plants (Jeppson et al. 1975, Westphal and Manson 1996). Recent findings from Dr. Hanh Tran's work at SOFRI have confirmed that E. dimocarpi is the causative agent of LgWB and that it is not a vector of the viruses or phytoplasma that were speculated to be causative agents of the syndrome (H.T., unpublished data).
For management of longan witches' broom, infected shoots and inflorescences on longan trees should be removed and destroyed.
Rambutan, the alternate host of the longan gall mite, should not be grown in longan orchards (Hanh et al. 2014). Flower inducement from April to June should be avoided, as this coincides with the peak period for E. dimocarpi (Hanh et al. 2012b). Prophylactic application of sulfur compounds, neem oil, or petroleum oil spray reduces the incidence of LgWB. The predatory mite Amblyseius sp. (Acari: Phytoseiidae) and Arthrocnodax sp. (Diptera: Cecidomyiidae) were found feeding on E. dimocarpi in Mekong Delta fields (Hanh et al. 2014). In addition, the entomopathogenic fungus Paecilomyces sp. was found infecting E. dimocarpi in the field. Reducing the use of toxic pesticides will lead to the adoption of conservation biological control, which will enhance the populations of local natural enemies.
Conogethes punctiferalis Guenée (Lepidoptera: Crambidae)
It is widely distributed in South and East Asia, Australia, and Papua New Guinea (CABI 2011). It is a polyphagous pest with a broad host range (Sekiguchi 1974, Waterhouse 1993, Li et al. 2015). The adults are medium-sized moths with wingspans of 20-23 mm; the forewings are peach-yellow with scattered black spots, and the adult lives 8-11 d (Fig. 7) (Ganesha et al. 2013). The moths lay yellowish-white eggs that hatch in 2-4 d (Ganesha et al. 2013). First-instar larvae are light pinkish brown with pale black spots; older larvae are light brown with dark brown heads and dark spots on the body. Larvae bore into the fruits, and the presence of frass on the fruit surface is one of the characteristic signs of infestation by this insect (Fig. 8). It pupates in soil, and sometimes on fallen leaves, with a pupal duration of 7-9 d (Ganesha et al. 2013).
Pheromone and/or light traps can be set up to monitor the adult moth population in the field. Bagging fruit clusters 15 d after fruit set reduces damage by this pest (Fig. 9). Collecting and destroying infested fruits is recommended in small orchards. Application of neem formulations deters moths from laying eggs on the fruits. In India, augmentative releases of the egg parasitoid Trichogramma sp. have been used against this pest.
Conopomorpha sinensis Bradley (Lepidoptera: Gracillariidae)
It is a major pest of litchi and longan in China, Taiwan, Thailand (Waite and Hwang 2002), and Vietnam (APHIS/USDA 2011). The moths lay cream-colored, scale-like eggs on the shoots or fruits (Waite 2005). The eggs hatch in 3-5 d, and the neonate larvae immediately bore into the shoots or fruits (Schulte et al. 2007). One or more eggs may be laid on a shoot or fruit, but generally only one larva survives on each. Mature larvae are brownish or green and 6-10 mm in length, with a larval duration of 10-12 d. Pupation takes place within a cream-colored, oval cocoon under mature leaves, and the adults emerge after 5-7 d (Waite and Hwang 2002). The adults are straw-colored moths with long filiform antennae and fringed forewings, and they live 5-8 d. In the absence of fruits, the larvae survive by feeding on young leaves or shoots (Waite and Hwang 2002).
When an egg laid on the fruit surface hatches, the larva bores into the fruit and feeds on the seed (Fig. 10a and b), leaving the fruit prone to infection by various microorganisms and to fruit drop (Huang et al. 1994, Wang et al. 2008). Huang et al. (1994) found that 96.1-100% of fallen fruits and 41.5-96.7% of fruits remaining on the trees were damaged by this pest in unsprayed orchards.
Bagging of fruits and application of neem formulations, as recommended for C. punctiferalis, are effective in the management of this pest (Fig. 9). Pheromone and/or light traps can be used for monitoring the population. In Taiwan, the larval parasitoids Tetrastichus sp. and Elasmus sp. (Hymenoptera: Eulophidae) and the pupal parasitoids Phanerotoma sp. and Apanteles sp. (Hymenoptera: Braconidae) have been reported. In Thailand, Apanteles briaeus Nixon, Chelonus chailini Walker and Huddleston, Colastes sp., Phanerotoma sp., Pholestesor sp. (Hymenoptera: Braconidae), and Goryphus sp. (Hymenoptera: Ichneumonidae) were found parasitizing larvae (Waite and Hwang 2002, Schulte et al. 2007).

Conopomorpha litchiella Bradley (Lepidoptera: Gracillariidae)
The adult females lay small, light-yellow eggs on new shoots, and they hatch in 3-5 d. The newly hatched larvae are pale green and mine the leaf blades. The mature larvae prefer to feed on the mid-rib and veins of young leaves (Fig. 11). There are five larval instars, and the larval period is about 10-14 d (Waite and Hwang 2002). Pupae are light green when formed and later change to golden brown. Pupation takes place on mature leaves under a thin silken web, and the pupal stage lasts 7-10 d. The life cycle is completed in 25-30 d.
All developmental stages of this leafminer are similar to those of the litchi fruit borer. Larvae bore into the midribs, causing distortion and twisting of young leaves. The density of C. litchiella infestation is high during the rainy season, from June to September, in Vietnam. In severe infestations, affected shoots should be pruned and disposed of. The same species of parasitoids attack both C. sinensis and C. litchiella.
Tessaratoma papillosa Drury (Hemiptera: Tessaratomidae)
It occurs in China, India, Indonesia, Malaysia, Pakistan, the Philippines, Sri Lanka, Taiwan, Thailand, and Vietnam (CABI 2002). It is known to feed on 21 species of plants, but the favored hosts are lychee and longan. Adults (Fig. 12) are golden brown and measure 25-30 mm long and 15-17 mm wide (Quynh 2016). Eggs are round and light green when laid, gradually becoming yellowish brown (Commonwealth of Australia 2004). Newly hatched nymphs are elliptical, at first reddish, later turning dark blue. Second-instar nymphs are rectangular and orange-red with a dark-gray margin. There are five nymphal instars, and the total life cycle lasts about 60-80 d.
Tessaratoma papillosa has one generation per year and overwinters as adults. In spring, the overwintering adults do not mate immediately, as their reproductive organs are not yet mature. Females mate multiple times and lay up to 14 egg masses, each containing about 14 eggs, on the lower surface of leaves (Waite and Hwang 2002). Both nymphs and adults feed on tender plant parts such as shoots, inflorescences, and fruits (Boopathi et al. 2015). The feeding causes necrosis of young twigs, withering of flowers, fruit rot, and eventually fruit drop (Quynh 2016). Although infestation can be seen year-round, damage is more prevalent in summer and low in the rainy season (Boopathi et al. 2011). The pest typically causes 20-30% yield loss, and under heavy infestation losses may reach 80-90% (CABI 2002).
Eudocima phalonia (L.) comb. (Lepidoptera: Erebidae)
Fruit-piercing moths occur throughout the tropical belt, except in the Americas (Waterhouse and Norris 1987). Eudocima phalonia (Fig. 13) is one of the common species; its larvae (Fig. 14a and b) feed on vines of the family Menispermaceae in most parts of the world (Cochereau 1977), and in addition on Erythrina spp. (Fabaceae) in the Pacific Islands and Leea indica (Leeaceae) in Thailand and Malaysia (Muniappan et al. 1994/1995, Reddy et al. 2005, Leong and Kueh 2011). The moths drill holes in the fruits and suck the juice at night (Fig. 15). Microbial contamination from the probosces of these moths results in rotting of the pierced fruits. Bagging of fruits effectively prevents damage by this moth. Waterhouse (1993) reported it to be a pest of longan in Vietnam, but additional information is lacking from this country. The egg parasitoids Trichogramma sp., Telenomus sp. (Hymenoptera: Platygastridae), and Ooencyrtus sp. (Hymenoptera: Encyrtidae), and the larval parasitoids Euplectrus spp. (Hymenoptera: Eulophidae) and Winthemia sp. (Diptera: Tachinidae), have been reported from Asia and the Pacific islands (Waterhouse and Norris 1987).
Oriental Fruit Fly Bactrocera dorsalis Hendel (Diptera: Tephritidae)
The oriental fruit fly is a polyphagous pest with a wide host range of over 200 plant species in 40 families (Prokopy et al. 1990), and it has been reported from Vietnam (Drew and Hancock 1994, Vargas et al. 2015). The fly (Fig. 16) lays pale yellow eggs under the skin of ripened or ripening fruits. The physical damage caused by ovipositional punctures, together with the feeding damage by maggots, leads to rotting of fruits (De Villiers 1992). Bagging fruits effectively prevents damage by this pest. A locally developed protein bait called SOFRI-PROTEIN, made from beer waste and an insecticide, attracts and kills both male and female flies. Additionally, methyl eugenol traps attract and kill male flies. These techniques, in combination with orchard sanitation, are effective in managing the fruit fly. For export of fruits, however, either hot-vapor or irradiation treatment is required.
Drepanococcus chiton (Green) (Hemiptera: Coccidae).
It occurs in South and Southeast Asia. Ibrahim (1994) reported that it completes its life cycle in 50 d at 29°C and that each female produces about 1,200 eggs (Fig. 18).
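These two figures permit a rough, illustrative estimate of the scale insect's growth potential via the standard approximation r ≈ ln(R0)/T for the intrinsic rate of increase. The sex ratio and full-survival assumptions below are ours, not Ibrahim's, so the result is only an upper bound:

```python
import math

# Upper-bound growth estimate for D. chiton (assumptions are ours):
# R0 approximated as eggs per female x an assumed 0.5 female sex ratio,
# with full survival; T is the 50-d life cycle at 29 deg C (Ibrahim 1994).
eggs_per_female = 1200
female_fraction = 0.5   # assumed
T = 50.0                # generation time, days

R0 = eggs_per_female * female_fraction
r = math.log(R0) / T    # intrinsic rate of increase, per day
print(f"r ~ {r:.3f}/d, doubling time ~ {math.log(2) / r:.1f} d")
```

Even allowing for heavy natural mortality, fecundity of this order explains why populations of soft scales can build up quickly when natural enemies are suppressed.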
Cornegenapsylla sinica Yang & Li (Hemiptera: Psyllidae).
The female adults are small, with an average size of 1.7 × 0.36 mm (Fig. 19), while male adults are 1.4 × 0.33 mm. Eggs are pale yellow and are laid singly into the veins on the under surface of the leaves (Fig. 20). There are four nymphal instars, and the nymphs remain inside the galls. The psyllid completes its life cycle in about 53 d, with 3-5 generations per year, and is most abundant from April to June (H.T., unpublished data). Pruning and disposal of severely affected shoots is recommended.
Fruit Rot Phytophthora palmivora Butler (Peronosporales: Peronosporaceae)
The disease attacks longan trees from the flowering stage to fruit harvest. Phytophthora palmivora can survive in the soil and spreads through irrigation water; humans and ants also contribute to its spread. It affects young shoots, panicles, and fruits. Symptoms are necrosis of young shoots, flower drop, irregular lesions on fruits, and premature fruit drop (Coates et al. 2003, 2005). The disease causes severe damage during the rainy season, and all longan varieties in Vietnam are susceptible to fruit rot. Proper aeration, reducing humidity by pruning, and removing and destroying infected fruits decrease disease incidence. Fungicides are effective in controlling fruit rot.

Anthracnose Colletotrichum gloeosporioides (Penz.) Penz. & Sacc. (Incertae sedis: Glomerellaceae)
It is an important disease of litchi but is of minor importance on longan. It can attack both leaves and fruits. On older leaves, symptoms appear as small spots at the margins that coalesce to form large patches with brown borders (Fig. 21). On young leaves, water-soaked lesions appear first and later turn dark brown and dry up. On fruits, dark brown lesions appear on the surface. Under wet conditions, white mycelial growth and fungal fruiting bodies may also cover the lesions (McMillan 1994). The control methods recommended for fruit rot also apply to anthracnose.
Ceratocystis Blight C. fimbriata Ellis & Halsted (Microascales: Ceratocystidaceae)
Initially, infected branches start wilting, and eventually the whole tree succumbs (Fig. 22). The same fungus, C. fimbriata, causes 'seca', 'murcha', or 'mango blight' on mango. Abiotic factors such as water stress, extreme high or low temperatures, and micronutrient deficiency enhance the damage caused by this fungus (Ploetz 2003). Scolytid beetles, wounds from contaminated pruning tools, and ringing transmit the disease to healthy plants. The variety Tieu Da Bo is more susceptible to this disease in Vietnam. Pruning and disposal of the affected branches is recommended.
IPM Program for Longan
Components
1. Fertilize the trees with compost inoculated with the antagonistic fungus, Trichoderma sp.
2. Do not induce flowering during November to May.
3. Immediately after the final harvest, prune the trees and safely dispose of the material either by burying or burning.
4. Prune and destroy shoots infected by LgWB syndrome.
5. Set up light and/or pheromone traps to monitor fruit borer, litchi shoot borer, leafminer, and other pests.
6. Set up methyl eugenol traps and protein bait for controlling fruit flies.
7. Set up SOFRI-ant baits for controlling ants, mealybugs, and soft scale.
8. Apply Beauveria bassiana, Paecilomyces sp., or Metarhizium sp. for controlling stink bug.
9. Spray sulfur, neem oil, petroleum oil, or Paecilomyces sp. to control longan eriophyid mite.
10. Bag the fruit cluster 15 d after fruit set. | 2019-08-16T06:18:23.693Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "09e197be612e8cb754966c3a30307c8a98411bc0",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1093/jipm/pmz016",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "09e197be612e8cb754966c3a30307c8a98411bc0",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
92607483 | pes2o/s2orc | v3-fos-license | Application of Electrospray Droplet Impact (EDI) to Surface Science
In Electrospray Droplet Impact (EDI), charged liquid droplets formed by ambient electrospray are introduced through an orifice into vacuum, accelerated by 10 kV, and impact samples prepared on a metal substrate. The secondary ions are detected by an orthogonal-type time-of-flight mass spectrometer. EDI affords soft ionization/desorption for various kinds of real-world samples such as biological tissues, dyes, pigments, synthetic polymers, and inorganic materials with no special sample preparation. EDI is capable of atomic- and molecular-level etching while leaving little damage on the etched surface. Moreover, the useful yield (total ions generated divided by the total atoms or molecules desorbed) was found to be larger than 10^-2. Owing to these unique features, EDI may be promising for next-generation nano-scale 3D imaging.
1. Introduction
Secondary ion mass spectrometry (SIMS) is one of the most powerful techniques for material analysis. A number of studies have shown that the sputtering efficiency of secondary ions is improved by increasing the mass of the primary particles. 1∼11) For example, the C60+ ion enhances high-mass ion yields by factors of 10^3 or more compared with Ga+, and the damage accumulation rate is lower with C60+ than with the Au3+ ion. Mahoney et al. did pioneering work in generating multiply charged massive glycerol clusters with masses of 10^6-10^7 u and excess charges of ∼200 via electrohydrodynamic emission in vacuum, using a 1.5 M solution of ammonium acetate in glycerol. 12∼15) Although this method was found to afford extremely soft ionization/desorption conditions for peptides and proteins, 12∼15) stable ion currents can only be obtained for several hours, until the lens electrodes become contaminated by accumulated glycerol, at which time source cleaning is necessary. 16) A gas cluster ion beam (GCIB) technology has been proposed by Yamada et al. as a surface-smoothing technique. 17) Gas clusters were generated by supersonic gas expansion. For example, Arn clusters with n ranging from a few thousand to more than 10000 could be formed using a Laval nozzle with a diameter of 0.1 mm and an Ar stagnation pressure of ∼1×10^6 Pa. A characteristic feature of GCIB processing is its inherent surface-smoothing effect.
Intrigued by the original work of Mahoney and coworkers, 12∼15) we developed electrospray droplet impact (EDI), which uses contamination-free water droplets generated by an ambient electrospray as the cluster source. 18∼20) In this article, the fundamental aspects of EDI and its application to surface analysis will be described.
2. Methodology
The conceptual experimental setup of the EDI ion source coupled with an orthogonal time-of-flight mass spectrometer is displayed in Fig. 1. 18) In brief, the charged liquid droplets generated by electrospraying a 1 M acetic acid aqueous solution at atmospheric pressure are introduced into the first vacuum chamber through an orifice with a diameter of 400 µm. The voltages applied to the stainless steel capillary (inner diameter: 0.1 mm, outer diameter: 0.2 mm) are +2.2 kV and −1.5 kV in the positive- and negative-mode electrospray, respectively. The charged droplets sampled through the orifice are transported into a first quadrupole ion guide for collimation and accelerated by 10 kV after exiting the ion guide. The electrospray droplets (i.e., the multiply charged massive clusters) are allowed to impact the solid sample prepared on a stainless steel substrate. The secondary ions formed by the droplet impact are transported into a second quadrupole ion guide for collisional cooling and mass-analyzed by an orthogonal time-of-flight mass spectrometer (AccuTOF, JEOL, Akishima, Tokyo). The typical droplets are roughly represented as [(H2O)100000 + 100H]100+. The kinetic energy of such a droplet is about 10^6 eV, with a velocity of ∼12 km/s. The total current of the electrospray charged droplets irradiated on the target was measured to be ∼1 nA using a Faraday cup installed at the target position.
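The quoted droplet parameters can be cross-checked with elementary kinematics: a droplet carrying z elementary charges accelerated through a potential V acquires E = zeV, and its velocity follows from v = sqrt(2E/m). A minimal sketch (ours, using the paper's rough droplet composition) reproduces the order of magnitude of the quoted ∼12 km/s:

```python
import math

# Cross-check of the quoted EDI droplet energetics (illustrative only).
e = 1.602e-19                 # elementary charge, C
u = 1.661e-27                 # atomic mass unit, kg

z = 100                       # charge states on the droplet
V = 10e3                      # acceleration voltage, V
m = 100_000 * 18.0 * u        # mass of (H2O)100000, ~3.0e-21 kg

E_eV = z * V                  # = 1e6 eV, as quoted in the text
v = math.sqrt(2 * E_eV * e / m)
print(f"E = {E_eV:.1e} eV, v = {v/1e3:.0f} km/s")   # ~10 km/s
```

The sketch gives ∼10 km/s, consistent in order of magnitude with the ∼12 km/s quoted; the exact value depends on the actual size and charge distribution of the droplets.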
3. Comparison of EDI with other electrospray-based methods
Aksyonov and Williams developed the IDEM (impact desolvation of electrosprayed microdroplets) method. 16) Analytes are dissolved in an electrolyte solution and then electrosprayed in vacuum, producing highly charged micron- and submicron-sized droplets. These microdroplets are accelerated through potential differences of ∼5-10 kV to velocities of several km/s and allowed to impact a target surface. The energetic impacts vaporize some of the analyte molecules dissolved in the electrosprayed liquid. Oligonucleotides and peptides yield singly and doubly charged molecular ions with no detectable fragmentation. The basic methodology of the present EDI is similar to that of IDEM, except that the charged droplets are formed by ambient electrospray, and the samples are not dissolved in the electrospray solution but deposited on the metal target and ionized/desorbed by the droplet impact. Aksyonov and Williams found that the polarity of the microdroplets generated by positive- or negative-mode electrospray is unrelated to ion formation in IDEM. This is also the case in EDI: similar positive and negative EDI mass spectra were obtained regardless of the polarity of the charged droplets. This indicates that the mechanism of secondary ion formation by EDI is unrelated to the excess charges contained in the water droplet projectile.
In 2004, Cooks et al. developed a new desorption/ionization method called DESI (desorption electrospray ionization).21) The difference between DESI and EDI may be worth noting. As shown in Fig. 2, DESI is one of the ambient (atmospheric-pressure) ionization methods. Solid samples are partly dissolved by electrosprayed solvent droplets (mainly methanol), picked up by the charged electrospray droplets, ionized by the excess charges of the droplets, and gas-phase ions are generated from the charged droplets. The speed of the nebulized droplets is about 120 m/s. In EDI, the electrospray droplets are accelerated in vacuum by 10 kV and impact the sample placed on the metal substrate. The speed of the droplets is about 12 km/s. This value is about one order of magnitude larger than the sonic velocities of solids, and thus a supersonic collision takes place upon impact with the surface, resulting in a unique ionization mechanism as described below. As such, the mechanisms for desorption/ionization of DESI and EDI are totally different.

Conventional SIMS using atomic ion bombardment (e.g., Ar+, Ga+) is typically characterized by low secondary ion yields, an intense chemical background, and extensive degradation of sample molecules. Thus, projectiles that make it possible to perform soft and highly sensitive ionization are in strong demand. We examined whether these demands are satisfied by EDI.18∼20, 22∼36) Figure 3 displays the EDI mass spectrum for 10 fmol of gramicidin S deposited on the stainless steel substrate.
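The speed comparisons above can be made concrete with a few lines of arithmetic. The sketch below uses the ~12 km/s EDI and ~120 m/s DESI droplet velocities from the text; the sound velocities are only representative textbook values (not from the paper) used to illustrate why the EDI impact is supersonic.

```python
# Illustrative comparison of projectile speeds with representative sound velocities.
# The sound velocities below are approximate literature values, given for orientation only.

V_EDI  = 12_000.0   # m/s, EDI droplet impact velocity (from the text)
V_DESI = 120.0      # m/s, DESI nebulized droplet velocity (from the text)

sound_velocity = {   # approximate longitudinal sound velocities, m/s
    "PMMA (typical organic solid)": 2_700.0,
    "stainless steel": 5_900.0,
}

print(f"EDI / DESI speed ratio: {V_EDI / V_DESI:.0f}x")          # ~100x
for material, v in sound_velocity.items():
    print(f"EDI droplet vs sound in {material}: Mach ~{V_EDI / v:.1f}")
```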
5. Atomic- and molecular-level etching
EDI is a surface-sensitive method, and only a very thin top layer of the sample is ionized/desorbed.18,22,34) If the kinetic energy of the projectile were dissipated mainly as kinetic energy of the target atoms at the colliding interface, considerable sputtering and fragmentation of the sample materials would take place. However, EDI leaves the sample materials almost free of serious damage after irradiation, since molecular ions are observed as base peaks with far fewer fragment ions (see Fig. 3). Figure 4 shows the EDI mass spectra for the self-assembled monolayer Au-(CH2)6-NH2 as a function of irradiation time. At the beginning of the measurement (0.5 min), [M+H]+, [2M+H]+, and [2M+Au]+ (M denotes HS-(CH2)6-NH2) are observed as the major ions, while Au+ is at the noise level. This clearly indicates that only the monolayer organic molecules are selectively ionized/desorbed. In Fig. 4, a gradual growth of Au+ is observed at the expense of the [M+H]+, [2M+H]+, and [2M+Au]+ ions. Gold cluster ions Aun+ were not detected even for prolonged EDI irradiation times. The appearance of only Au+ (and no cluster ions) clearly indicates that no ablation of the gold substrate takes place in EDI. Namely, molecular-level desorption of the SAM and atomic-level etching of the Au substrate are realized by EDI.
As mentioned above, EDI is capable of soft ionization with high ionization efficiency and of molecular-level surface etching. These characteristic features suggest that a more or less adiabatic collision takes place between the water droplets and the solid surface, i.e., momentum transfer from the water droplets to the sample surface is suppressed to a minimum. The H atoms in the water droplet projectiles have the lightest mass of all the elements. Because of this, the hydrogen atoms at the droplet surface are likely to backscatter in the supersonic collision with surface atoms that have higher masses than hydrogen (forward scattering would result in surface damage). This may be envisaged as the backscattering of a ping-pong ball colliding with a billiard ball. Right after the instantaneous coherent collision (i.e., all the water molecules in the droplet having the same velocity and direction toward the surface), enormous pressure would be exerted at the colliding interface as a collective force due to the high-momentum water droplet impact. This induces collective in-phase motion of atoms at the interface, both in the collided water droplet and in the sample. The in-phase motion results in shock wave propagation in the colliding systems. Since water droplets are composed of strong hydrogen-bond networks, efficient propagation of the shock wave is anticipated in the water droplet, i.e., the kinetic energy is converted into a dispersive shock wave in the colliding system. In a sense, the kinetic energy of the water droplets is efficiently converted into internal energy in the supersonic collision in EDI.
6. Application of EDI to inorganic, organic and biological samples
Some comparative studies of surface characterization by XPS were made for inorganic and organic samples etched by Ar+ and by EDI. Marked surface modification (e.g., selective etching) took place upon Ar+ etching for all the samples examined. In contrast, no recognizable chemical modification was observed with EDI for any of the inorganic and organic materials investigated. As an example, experimental results obtained for indium phosphide (InP) will be shown.34) For the surface analysis of InP by SIMS, it is of paramount importance to avoid the selective etching of P with respect to In and also to suppress the growth of etching cones on the surface. The surface topography of InP(111) etched by 3 keV Ar+ and by EDI was examined by AFM. Figure 5(a) shows the surface of InP(111) as received. After etching by 3 keV Ar+ (Fig. 5(b)), growth of etching cones is observed. Etching cones are known to be formed on InP when analyzed by conventional SIMS. The component at the top of the etching cone is mainly composed of In, i.e., phosphorus is preferentially sputtered by Ar+.34) Compared with Fig. 5(b), the surface of InP(111) etched by EDI is much smoother, as shown in Fig. 5(c). The average surface roughness (1.2 nm) after EDI etching (~48 nm etching depth as calibrated for SiO2) is only slightly greater than that of the sample as received (0.8 nm). In contrast, more than one order of magnitude greater roughness (16.7 nm) was observed for InP etched by 3 keV Ar+ (~60 nm etching depth as calibrated for SiO2).
The relative concentrations of In and P for InP etched by 3 keV Ar+ and by EDI were measured by XPS. The ratio of In to P (i.e., In/P) increased from 1 to 1.7 for 3 keV Ar+ etching. Apparently, P was etched preferentially over In by 3 keV Ar+. In contrast, the In/P ratio was found to be 1.0 after EDI etching. Namely, preferential etching of InP does not occur in EDI. It was also found that the FWHM of the In 3d5/2 and P 2p peaks in the XPS spectra did not change before and after EDI etching. That is, the crystalline structure of InP(111) is preserved and amorphization does not take place in EDI. Similar results were obtained for SiO2/Si and HF-treated Si.34) EDI was applied to various synthetic polymers such as polyethylene terephthalate (PET), polyvinyl chloride (PVC), polyimide (PI), polystyrene (PS), and polymethyl methacrylate (PMMA).29,32,33,35) The EDI mass spectra are found to be composed of fragment ions that reflect the molecular units of the polymer backbones. That is, random dissociation of the polymer structure seems to be minor. In order to obtain more detailed information on the etching process by EDI, XPS was applied to the surface analysis of all the polymers investigated. As an example, the O 1s, N 1s, and C 1s XPS spectra of PI as a function of EDI irradiation time are shown in Fig. 6.35) All the peaks for O, N, and C show little change upon EDI etching up to 120 min. That is, neither selective etching of oxygen and nitrogen nor accumulation of radiation products on the sample surface takes place as far as the XPS measurements are concerned. In other words, surface damage, if any is formed on the surface, is not detectable by conventional XPS analysis. Similar results were obtained for all the polymers investigated (PET, PVC, PS, and PMMA).
EDI/SIMS was applied to mouse brain as a biological tissue sample. In the positive mode of operation, various kinds of PC (phosphatidylcholine) and GalCer (galactosylceramide) could be detected. In separate experiments, MALDI measurements using DHB as a matrix were also made for comparison with EDI. The S/N ratios of the positive-mode mass spectra obtained by EDI and MALDI were similar for PC and GalCer. That is, the sensitivities of EDI and MALDI are about the same. In contrast, the negative-mode mass spectra obtained by EDI and MALDI show some distinct differences, as shown in Fig. 7. While PE, PS, PI and ST are detected in the EDI mass spectrum, PE and PS are absent in the MALDI mass spectrum. That is, EDI is capable of less selective (more comprehensive) ionization of the biological molecules. The high efficiency of the formation of both positive and negative ions by EDI may be interpreted as follows.
7. Mechanism for the secondary ion formation
One of the characteristic features of EDI is its capability of forming strong negative-ion as well as positive-ion signals for many organic compounds, e.g., amino acids,22) peptides,19) and pigments.23) This may be ascribed to the occurrence of the electrolytic dissociation reaction of water molecules at the colliding interface:22∼24)

H2O + H2O → H3O+ + OH−  (1)

Endoergic reaction (1) may be one of the primary routes for the energy dissipation of the colliding system. The hydronium ion H3O+ is known to be a very strong acid and is likely to transfer its proton to the analyte to form the protonated molecule [M+H]+ at the colliding interface.
H3O+ + M → [M+H]+ + H2O  (2)

The occurrence of reaction (2) explains the strong appearance of [M+H]+ for molecules that have proton affinities larger than that of H2O (691 kJ/mol).
The OH− in reaction (1) is also known to be a very strong base, and it deprotonates M to form [M−H]−:

M + OH− → [M−H]− + H2O  (3)

In a sense, the very strong acid H3O+ and the very strong base OH− are inherently incorporated in the supersonic collisional events in EDI.
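A simple division, not given explicitly in the text, helps make the energy budget plausible: spreading the ~10^6 eV of droplet kinetic energy over the ~10^5 water molecules of the nominal projectile leaves on the order of 10 eV per molecule, which is ample compared with typical bond-dissociation and ionization energies of small molecules, so endoergic chemistry such as reaction (1) is energetically accessible at the impact interface. The snippet below is only this illustrative estimate, under the stated assumption of equal energy sharing.

```python
# Average kinetic energy available per water molecule in the nominal droplet.
# Illustrative only; assumes the ~1e6 eV droplet energy is shared among ~1e5 molecules.

total_energy_eV = 1e6     # droplet kinetic energy after 10 kV acceleration (from the text)
n_molecules     = 1e5     # nominal number of water molecules per droplet

energy_per_molecule_eV = total_energy_eV / n_molecules
print(energy_per_molecule_eV)   # 10.0 eV per molecule on average
```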
In EDI mass spectra, some compounds give strong radical cations and anions (e.g., C60,22) coronene, and organic pigments23)) in addition to [M+H]+ and [M−H]− ions. For example, in the EDI mass spectra of C60, the C60+ and C60− ions are observed with about equal abundances.22) We conjecture that these ions may be formed by the disproportionation electron-transfer reaction (4) caused by the supersonic collision in EDI:23)

M + M → M+• + M−•  (4)

In the MALDI mass spectrum in Fig. 7, PE and PS are not detected when DHB (a typical matrix for MALDI) is used. The ion formation mechanism of MALDI using DHB as a matrix may be ascribed to reaction (5). Here it should be recognized that DHB is an organic acid and acts as a protonating reagent.
8. High useful yields of EDI
In general, dyes and surface-active agents gave lower limits of detection (i.e., they are detectable with higher sensitivity) than neutral compounds, because they exist in ionic forms and desorption alone can give gaseous ion signals. As shown in Fig. 8, the limits of detection for rhodamine B and aerosol OT are 1 fmol and 100 amol, respectively. On the other hand, the limits of detection for neutral compounds are found to be about 10 fmol or less, as shown in Fig. 3. The detection limit of C60 was also found to be about 10 fmol. The higher limits of detection for neutral compounds are reasonable because ionization processes are necessary for the detection of neutral molecules as ion signals. Here, if one assumes that the desorption efficiencies in EDI are roughly the same for neutral compounds (e.g., gramicidin S and C60) and ionic compounds (e.g., rhodamine B and aerosol OT), the useful yields (i.e., total ions generated divided by the total atoms or molecules desorbed) may be crudely estimated to be 0.01 (100 amol/10 fmol) or 0.1 (1 fmol/10 fmol). These values are about 10^4 times greater than the values estimated for other techniques (e.g., SIMS, cluster SIMS and MALDI), 10^−5-10^−6. That is, EDI improves the useful yield by a factor of about 10^4 compared with SIMS and MALDI. This unprecedentedly high useful yield is mainly due to two factors: the very high ionization efficiency for neutral compounds and the molecular-level desorption of the sample molecules.
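The useful-yield estimate above is a simple ratio of detection limits. The snippet below just reproduces that arithmetic under the stated assumption that neutral and ionic compounds desorb with similar efficiency; the detection limits are the values quoted in the text.

```python
# Crude useful-yield estimate for EDI from the quoted detection limits,
# assuming similar desorption efficiencies for ionic and neutral compounds.

lod_neutral   = 10e-15    # mol, neutral compounds such as gramicidin S or C60
lod_rhodamine = 1e-15     # mol, rhodamine B (ionic dye)
lod_aot       = 100e-18   # mol, aerosol OT (ionic surfactant)

yield_low  = lod_aot / lod_neutral        # ~0.01
yield_high = lod_rhodamine / lod_neutral  # ~0.1

conventional = 1e-5                       # upper end of the 1e-6 to 1e-5 range quoted for SIMS/MALDI
print(yield_low, yield_high)              # 0.01 0.1
print(f"Improvement over ~{conventional:g}: {yield_low / conventional:.0e}x to {yield_high / conventional:.0e}x")
# prints roughly 1e+03x to 1e+04x, consistent with the ~10^4 improvement quoted
```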
9. Summary
The characteristic features for EDI may be summarized as follows.
1. EDI utilizes water droplet projectiles, typically represented as [(H2O)100000+100H]100+.
2. The kinetic energy of the droplet is about 10^6 eV, with a velocity of ~12 km/s. Since this velocity is about one order of magnitude higher than the sound velocities of solids, a supersonic collision takes place in EDI, resulting in the ionization/desorption of the sample atoms and molecules.
3. In general, the molecular ions are observed as base peaks with weaker fragment ions.
4. In EDI, atomic- and molecular-level etching take place, with little damage left on the surface after irradiation.
5. From the limits of detection for neutral and ionic compounds, the useful yields of EDI were estimated to be 10^−1-10^−2. These values are about 10^4 times higher than those for MALDI and conventional SIMS.
These unique characteristics of EDI are mainly ascribed to the efficient dissipation of the projectile's kinetic energy as internal energy of the molecules at the colliding interface and as the shock wave propagating through the hydrogen-bonded water projectile.
If the beam of water droplet projectiles could be focused to a submicron diameter, EDI may become a useful method for 3D nano-imaging of organic and biological materials. Further investigation in this respect is in progress in our laboratory. | 2019-04-03T13:16:11.565Z | 2010-11-10T00:00:00.000 | {
"year": 2010,
"sha1": "3f17cf3a0eed4f431e9e33171c74b40c965d7f42",
"oa_license": "CCBYNC",
"oa_url": "https://www.jstage.jst.go.jp/article/jsssj/31/11/31_11_572/_pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "29a13df11298b27b4a3f1b4ed16db62e30b76c2f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
225525672 | pes2o/s2orc | v3-fos-license | Dream the impossible dream? An ecosocialist account of the human right to a healthy environment
Abstract:
The construction of a human right to a healthy environment (both in academia and in practice) has so far tended to neglect some important structural problems posed by the current stage of global capitalism. Some approaches to that right seem to arise from a merely legalist point of view, failing to raise some important questions in the debate surrounding human rights and the environment. This article explores Ecosocialist readings in order to question whether or not a human right to a healthy environment can be realized, focusing on the role of global capitalist relations and trying to spot where Third World countries (especially Latin American ones) stand in the middle of these challenges and discussions. Our main conclusions were that: i) the ecological catastrophe sponsored by the capitalist system poses a serious threat to the possibility of a human right to a healthy and clean environment; ii) Ecosocialist claims are not inconsistent with human rights demands; on the contrary, they should reinforce each other's agendas in order to achieve a rights-based international Ecosocialist movement; iii) in Latin America, especially in Brazil, contemporary neoliberal policies have seriously undermined environmental protection efforts.
Introduction
Mostly viewed as a "rebel" doctrine aspiring to a worldwide vendetta against global capitalism (and capitalists), ecosocialism has often, sadly, been left out of research programs and projects. However, as well put by Kovel (2008, p. 4), "the path to ecosocialism has to be made by those who will travel upon it". Thus, even though we do not deny the need for structural reforms of global capitalism in order to achieve the "re-jointing" of nature, we ought to disregard unprincipled radicalisms in our journey to explore the future relations between human rights and the environment.
We note, however, that it is not the aim of this article to dive into the philosophical or metatheoretical discussions regarding the human right to a healthy environment; our goal is a more practical one: to explore how the rights-discourse may be undermined when linked to environmental protection in the context of ever-growing global capitalist relations.
It is hard to deny -and even those who are the most sceptical about it might agree -that human rights have achieved great prominence in the contemporary world, both in academia and practice. As Professor Raz well stated, "this is a good time for human rights" (RAZ, 2010, p. 321), not because they have been observed and respected more than ever before, "it is a good time for human rights in that claims about such rights are used more widely in the conduct of world affairs than before" (RAZ, 2010, p. 321).
The language of rights has therefore become entrenched in world politics, whether or not it has translated into actual practice. Since the globalization of the so-called "talk of rights" in the mid-20th century (Mazower 2004), human rights treaties and bodies have spread all around the world alongside transnational activism for such rights (SHELTON, 2015; KHAGRAM; RIKER; SIKKINK, 2002). However, divergent opinions have come to emerge regarding the foreseeable futures for human rights. Some scholars believe that these rights have proven to be ineffective (POSNER, 2014; HOPGOOD, 2013), whilst others have shown the power that the rights-talk has in delivering actual social change (RISSE; ROPP; SIKKINK, 2013).
With that said, our efforts shall be directed not at investigating the more general state of affairs regarding the possible futures of human rights, but rather at how these rights, more specifically the right to a healthy environment, may be undermined in the context of - as we intend to argue later on - an unequal and unregulated global economy. A lot has already been written about the moral underpinnings of global capitalist expansion, as scholars and practitioners have gathered to advocate the idea that global capitalism "[...] must be not only entrepreneurial and technically competent, but buttressed and challenged by a strong and appropriate moral ecology" (DUNNING, 2003, p. 1), where the focus should be on both economic viability and social justice/acceptance. When talking about the political economy of human rights, authors such as Professor Samuel Moyn have provided a more intimate analysis of the nexus between human rights and inequality (MOYN, 2018). In his latest book, "Not Enough: Human Rights in an Unequal World", Moyn is very critical of the capacity of human rights, in the context of neoliberalism in its "most unfettered form" (MOYN, 2018), to provide distributive justice and to overturn a global system of economic and social inequality. In his view, the task is "to argue and make room for two different imperatives of distribution - sufficiency and equality", as "the ideal of material equality has lost out in our time" (MOYN, 2018, p. 3).
However, we fear that not enough attention has been given to the challenges that the human right to a healthy environment might face in this context of growing global economic and social inequalities. Therefore, our main goal is to explore how the nexus between human rights and environmental protection may be weakened by the capitalist logic of constant economic growth and wealth accumulation (which does not translate into equal distribution) that constantly defies the demands for environmental justice.
The doctrine of ecosocialism was chosen as our main analytical tool because we believe it provides a more critical assessment of current literature on the human right to a healthy environment. When challenging the inequalities that are inherent to the capitalist mode of production, ecosocialists show how "the myth of growth has failed us [...] [and] the fragile ecological systems we depend on for survival" (WALL, 2010, p. 14), whilst pointing out alternatives to our current environmental crisis.
We intend to discuss as well how the countries in the Third World, with the focus being on Latin America, have come to grips or not with the right to a healthy environment. We shall investigate whether Latin American countries (with a special focus on the Brazilian experience) have embraced or not the discourse of a human right to a healthy environment, and if they have, what are the provisions they have developed so far in order to translate such right into practice.
As discussed in depth at the 2016 Global Forum on Environment and Economic Growth, despite the well-known link between the economy and the environment, there is still a relative lack of tools to quantify the costs and benefits that might arise from environmental protection policies (OECD, 2016). This may pose a challenge to Third World countries still struggling to juggle the asymmetries of the global economy, where environmental concerns may sometimes not rise to the occasion when 10 per cent of the world's population still lives on less than US$1.90 a day (WORLD BANK, 2019).
In this case, we are not by any means trying to dismiss the importance of the recognition of a healthy environment as a human right; our objection concerns the possibility of this right being sold with a normative cosmopolitan label (RAO, 2010, p. 35-37), which too easily takes for granted the political and economic willingness of countries outside the Global North to place environmental issues at the heart of their policy-making strategies.
Our article is organized so that we first draw upon more general questions regarding human rights and the environment, and later interpret such concerns guided by the theoretical tools provided by ecosocialism and by the category of "third worldness" (marked by "great economic and social disparities, dependent development, and marginalization from the core of international society" [RAO, 2010, p. 29]).
First, we explore the idea of a human right to a healthy environment. Secondly, we summarily review the relation between human rights and Marxist thought. Thirdly, we try to find out what an ecosocialist approach to human rights and the environment would look like. Ultimately, we go on to analyse how countries in the Third World, those in Latin America more specifically (with the focus being on Brazil), have come to accept or not the discourse of a healthy environment as a human right.
The ongoing idea of a human right to a healthy environment
Despite the fact that human rights have had a turbulent journey in the history of their affirmation - being subjected to both "uses" and "misuses" and far from having a linear trajectory (MOYN, 2014) - the categories proposed by the Italian political philosopher Norberto Bobbio to analyze the path of human rights in Western historiography are quite accurate (BOBBIO, 2000, 1991). First, these rights were translated from metaphysical imaginaries (i.e., from the notions of natural rights/law) into constitutional statuses, mainly in European countries and in the United States during the 17th and 18th centuries; they later experienced considerable expansion from the first idea of "civil liberties" to other categories such as social and community rights, until their internationalization and alleged "universalization" in the post-Second World War period, inaugurating what Bobbio called an "Era of Rights" (El Tiempo de los Derechos) (BOBBIO, 1991).
Since the "inauguration" of such an Era, the "list" of rights has grown to include different national and international demands regarding the protection of intrinsically varied human values/necessities: such as women's rights (Grimshaw, Holmes and Lake 2001), children's rights (HOLZSCHEITER, 2010), the human rights of older people (Martin, Rodríguez-Pinzón and Brown 2015), rights of migrants and refugees (CHOLEWINSKI; GUCHTENEIRE; PÉCOUD, 2009; ISLAM; BHUIYAN, 2013), LGBT+ rights (THORESON, 2014) and so forth.
One can say that the idea of humans being entitled to a clean and healthy environment can be inserted into this tendency, which Bobbio called the "specification" of rights (Bobbio 2000, p. 482-483). More than ever, the general and abstract claim that "[…] human beings are born free and equal in dignity and rights" (UN, 1948, Article 1) - as enshrined in Article 1 of the Universal Declaration of Human Rights - has proved to be insufficient, as the aspiration towards equality has shifted to the necessity of a substantive account as opposed to a merely formal interpretation of the right to equality (FREDMAN, 2016): because "even if equality before the law has been established, disadvantage persists, and this disadvantage tends to be concentrated in groups with a particular status, such as women, people with disabilities, ethnic minorities and others" (FREDMAN, 2016, p. 712-713).
This perception is particularly true when talking about the effects of environmental catastrophes on societies across the globe, as countries and peoples experience the effects of natural disasters, climate change and environmental degradation differently according to a variety of factors such as level of income, infrastructure, disaster preparedness, amongst others (AYSAN, 1993).
The correlation between human rights and the environment has been a subject of debate for decades. As pointed out by Boyd, "there has been a lively debate among scholars in the fields of human rights and environmental law about whether explicit legal recognition of the right to a healthy environment would provide tangible benefits" (BOYD, 2018, p. 17). Thus, the debate regarding the safeguarding of a "clean", "healthy" or "decent" environment has often focused on whether or not it constitutes a human right.
Some authors defend that the right to a healthy environment is a derivative of the general human rights movement, mostly because "[...] a human rights perspective directly addresses environmental impacts on the life, health, private life, and property" (BOYLE, 2012, p. 613); therefore, it would be "an entitlement derived from other recognized rights" (WESTON; BOLLIER, 2013a, p. 33) that are firmly established in international human rights law and jurisprudence, as well as in constitutional law. According to this view, environmental issues should not, therefore, be thought of separately from issues of human rights protection: It has been a well-documented fact that degradation in environmental quality (such as from ambient concentration of pollutants and other activities and processes) can cause a violation of human rights. For example, the Office of the High Commissioner for Human Rights (OHCHR) has investigated the varied effects of climate change on the enjoyment of human rights. The 2009 Report of the Human Rights Council analyses such an impact on several human rights, such as the rights to life, adequate housing, health, water and self-determination (FITZMAURICE, 2015, p. 220-221).
This idea was first defended and established in the 1972 "Stockholm Conference on Human Environment", as the Declaration on Human Environment mentioned the necessity of "[...] an environment of a quality that permits a life of dignity and well-being" (UN, 1972, Principle 1), stating that every person has the right to live in an environment that provides him or her "the freedom, equality and adequate conditions of life" (UN, 1972, Principle 1). Consequently, this view constructs a human right to a healthy environment from the catalogue of already existing human rights (FITZMAURICE; MARSHALL, 2007).
However, as stated by Fitzmaurice and Marshall, there are many authors who disagree, as they believe that the right to a clean environment would not acquire the normativity it needs by simply being drawn from already established human rights (FITZMAURICE; MARSHALL, 2007). That is the case for those who defend the necessity of acknowledging a human right to a clean/healthy environment "as an entitlement autonomous unto itself" (WESTON; BOLLIER, 2013a, p. 33). This means that the broader human rights language, in the view of some authors, would not be able, in itself, to make a sufficient claim for the protection of the environment: One of the arguments for adopting a new substantive right is that, as Michael Anderson has explained, "established human rights standards approach environmental questions obliquely and, lacking precision, provide a clumsy basis for urgent environmental tasks" (1996: 8). He has argued on that basis that a specific right would be better suited to the challenge of protecting the environment [...] A substantive right to a good environment, it is argued, is necessary to address this shortfall in protection (LEWIS, 2018, p. 63-64).
In sum, one can say that the idea of humans possessing a right to a clean, healthy, good, decent or safe environment (there is no consensus surrounding the most adequate terminology), as a subjective right, has yet to achieve a more robust philosophical and practical framework in order to unleash its full potential (LEWIS, 2018, p. 61 ff.).
Therefore, despite the right to a clean and healthy environment being recognized in international law, deriving from different sources such as soft or customary law (RODRÍGUEZ-GARAVITO, 2018), and being enshrined in the constitutional law of many countries as well (BOYD, 2018), whether as a right "unto itself" or as a derivative of other human rights, there are still relevant gaps and remaining issues awaiting discussion and resolution, taking into account that "in the last decade or so [...] we have watched nature's defilement assume systemic dimensions with almost no legal intervention whatsoever" (WESTON; BOLLIER, 2013b, p. 117) in different parts of the globe.
With that said, further on, we intend to explore how the persistence of barriers related to the fulfilment of the right to a healthy environment could be indeed a direct or indirect result of the expansion of global capitalist relations, utilizing both Marxist approaches to human rights more generally and the Ecosocialist thought as our theoretical basis.
Human rights and Marxist thought
The intent of this section is not to revisit the writings of Marx himself in order to establish the nexus between early Marxist thought and its critique of the liberal doctrine of human rights; a lot of important work and discussion has already been devoted to that topic. Our goal is to capture the spirit of what Darren J. O'Byrne (2019, p. 1) has recently pointed out as advocacy "[...] for both human rights and Marxian-informed social theory". That suggests a broader reading of the reality of human rights through the contributions made not only by Marx but by those who were inspired by Marxist sociological theory and have drawn from it their narrative on the current insufficiencies arising from the hegemonic liberal discourse on human rights. Such "Marxian-informed" views on human rights might provide us with theoretical tools that are closer to Ecosocialist claims, as we intend to argue later on.
The discourse on Ecosocialism has, in our perception, not focused quite enough on the question of rights, maybe because some authors might reject the rights-talk based on a pre-assumption that it may undermine their fight for environmental justice. By providing a "cross-fertilization" between the rights-talk and Ecosocialist claims, we might be able to reach a rights-oriented critique of both the destruction of the environment and the inequalities inherent to the capitalist mode of production.
The question of whether or not Marx was a human rights advocate, or whether at the very least he made a case for human rights to uphold some kind of validity, is not an easy one. Starting from Marx's closest writings on the subject, undoubtedly the "On the Jewish Question" essay (BAUER, 1844), authors both confirm and deny Marx's engagement in the defence of human rights. There are those who believe that Marxist thought and human rights are not only distinct but incompatible (KOLAKOWSKI, 1983), whilst there are others who make a strong case for a Marxist approach to human rights, focusing especially on the task of political emancipation (MASSINI, 1986).
We would probably find a dead end if we followed the first road; that is why our efforts should be directed towards the second path. This means we believe that a Marxist-informed human rights approach is able to denounce the inequalities that arise from a class-based society and how such inequalities undermine the task of the rights agenda (FASENFEST, 2016). Ultimately, "Marxism provides the language of and mechanisms for resistance to neoliberal agendas that strip human rights, and promotes common cause with all who struggle for human rights" (FASENFEST, 2016, p. 3). On that matter, O'Byrne (2019) has provided a robust defence of the potentialities that Marxism has when included as a contributor to the general theory of human rights: [Marxism is able to] 1 - Provide a critique of the bourgeois, individualist nature of the dominant liberal tradition in human rights theory (and thus demonstrate the reality of alternative conceptualizations of rights); 2 - As the basis of that alternative conceptualization, foreground economic and social rights that reflect basic human needs; 3 - Promote the struggle for rights as aspirational, i.e. as a counter-hegemonic strategy; 4 - Provide a theoretical framework within the sociology of human rights that understands the relationship between human rights abuses [...] and the protection of capitalist interests (O'BYRNE, 2019, p. 2).
From such assertions, we infer two basic assumptions regarding a Marxist-informed approach to the human right to a healthy environment. First, we need to recognize that even though this right has been acknowledged by international organizations and courts, and has also been enshrined in constitutional texts, this does not mean that the fight for human and environmental rights is over. Professor Herrera Flores (2008) argues that the Law will not be born and will not work on its own; that is, legal norms, and rights in general, serve as procedures, as means, and will only be able to fulfil a function more in line with social reality if we put them into operation. Flores (2008) also reminds us that rights alone cannot overcome the inequalities arising from the globalization of capitalist rationality, which demands bottom-up social action (acciones sociales "desde abajo") in order to tackle the systematic inequalities that pose a threat to the full realization of human (and environmental) rights in the contemporary world.
Especially in environmental causes, the call for collective action and struggle towards the translation of formally recognized environmental rights into social reality is one of great importance. The role of national and international NGOs, as well as unions and advocacy networks, has proven to be not only necessary but essential to the advancement of environmental justice agendas: as "there is a growing emphasis on governance as a critical aspect of environmental protection which calls for active and vibrant participation of civil society" (AHMAD, 2018, p. 16).
Secondly, environmentalists, as well as environmental and human rights lawyers and advocates, must recognize that, in some cases (if not in most of them), violations of the human right to a healthy environment take place in order to protect or advance capitalist interests. The link between capitalist growth and environmental degradation is nothing new under the sun; however, many authors believe that changes within the global capitalist system are able to solve the environmental problems humanity faces today and those we may face in the future.
That is the case of Newell and Paterson, who believe in a "Climate Capitalism", i.e., "a model which squares capitalism's need for continual economic growth with substantial shifts away from carbon-based industrial development" (NEWELL; PATERSON, 2010, p. 1). Meanwhile, authors such as Park defend that, in the context of neoliberal capitalism and cost-benefit analysis, mitigation as an investment does not always offer a precise, quantifiable return (PARK, 2015). According to him, "these two goals - economic growth and environmental sustainability - are at irreconcilable odds" (PARK, 2015, p. 195). This view is also endorsed by McDuff (2019), who warns that "policy tweaks", such as a carbon tax, will not be able, in the long term, to banish the perils of human-induced climate change and the consequences of global warming. This second group of authors is more aligned with a Marxist approach to the human right to a healthy environment. In economic theory, there is also a growing movement of "radical economists" who argue for the recognition of the intrinsically destructive nature of capitalism regarding the environment:
For radical economists, many of the root causes of social and environmental problems reside in the nature of the capitalist system itself, driven by the need to accumulate capital and the class-based nature of the capitalist system. Thus, while progressive politics can alleviate some of the ills caused by the capitalist system, they cannot abolish these injustices (FISHER, 2014, p. 7).
Ultimately, anyone involved in day-to-day environmental protection actions, as well as academics and legal scholars involved with human and environmental rights issues, has to acknowledge that the capitalist system bears a great amount of responsibility for contemporary ecological problems, as "[...] many environmentalists admit that the way capitalism is currently working is a major cause of ecological destruction" (MAGDOFF, 2011). Therefore, we believe that a more robust defence of a human right to a healthy environment should regard such a right as a "counter-hegemonic strategy" (O'BYRNE, 2019, p. 2) that is based on an essentially Marxian-informed social and rights theory. Such a view of human and environmental rights is totally compatible with ecosocialist claims and should, therefore, be taken into account by ecosocialist theorists who, from our point of view, have yet to come to grips with the general human rights-talk. Further on, we intend to provide such a theoretical crossover.
An ecosocialist approach to human rights & the environment
Ecosocialism is, in summary, an attempt to revisit the political economy of Marx, and at some level the socialist experience, in order to insert environmental concerns into the heart of Marxist political, economic and sociological theory. Michael Lowy has presented in his book "Écosocialisme: L'alternative radicale à la catastrophe écologique capitaliste" what is perhaps one of the most coherent theoretical defences of Ecosocialism, which is, in essence, "a political current founded on an essential observation: safeguarding the ecological equilibria of the planet and preserving an environment favourable to living species […] are incompatible with the expansive and destructive logic of the capitalist system" (LOWY, 2011, p. 7). And he goes on to warn that the traditional Marxist ideas of productive forces (forces productives) and the progress of history also carry a destructive logic of domination over nature, which is why he considers that such ideas need constant revision by Marxists in order to create a different concept of progress, one closer to ecological claims (LOWY, 2011).
Even though Marx himself was worried about the way modern (capitalist) agriculture was exploiting the soil, and showed these concerns in several passages of Volumes I and III of Capital (SAITO, 2017, p. 209-213), Ecosocialists claim that "nature had been broadly excluded from earlier generations of socialist thought [...]" (KOVEL, 2008). Therefore, the ultimate claim of Ecosocialists is to create a space of convergence between ecological movements and a renewed Marxism (QUERIDO, 2013). In this sense, neither traditional Marxist thought nor political ecology alone is able to find solutions for the challenges that the contemporary capitalist system poses to the environment. Political ecology, according to Lowy, focuses on the illusion of "clean" capitalism, while traditional Marxian political economy did not completely abandon the logic of productivism (QUERIDO, 2013). The only way out is then Ecosocialism - or an Ecocatastrophe, as pointed out by Schwartzman (2009). Therefore, one might deduce that, for Ecosocialists, the human right to a healthy environment - given the current ecological crisis sponsored by capitalism - is nothing but a sweet dream. And that is the case if we buy such a right with a classic, liberal cosmopolitan label, but not if we imagine the human right to a healthy environment as we proposed in the previous section of this article. Envisioning the human right to a healthy environment from a Marxian-informed point of view, with a focus on social action/struggle, can indeed help Ecosocialists in their claim for radical changes in the contemporary global economy.
But are Ecosocialist activists ready to do so? If Ecosocialism aims at "[...] a non-hierarchical society respectful of ecological systems" (JOHNS; KOVEL; LOWY, 2003, p. 128), does it mean we no longer need the rights-talk? Can it simply be left behind? Rights were conceived in the first place to limit the power of Government over its citizens and to guarantee equality before the Law; but in such a non-hierarchical society respectful of ecological systems, would we no longer need equality rights or environmental rights? Our answer is that one cannot easily find the answers to these questions in the current Ecosocialist literature. Ecosocialism still lacks the theoretical foundations to discuss issues of human rights and global justice which may persist even in an Ecosocialist future.
As argued before, human and environmental rights are not inconsistent with Ecosocialist thought. If authors have spent time and effort rethinking traditional Marxist political economy as well as political ecology, why can they not do the same regarding human rights? If a Marxist theory of rights, as put by O'Byrne (2019), rests on a social constructionist approach, with rights being not a list of prescribed entitlements but a representation of socially constructed demands for political emancipation and material conditions of existence, Ecosocialists should acknowledge that the struggle for rights does not end with the achievement of a global Ecosocialist experience. Rights therefore represent a permanent language of resistance that cannot, by any means, be abandoned or undermined at any time, at the risk of repeating the totalitarian experiences of the 20th century.
David Pepper (1993) is perhaps one of the few Ecosocialists who mount an intensive defence of the need to bridge Ecosocialism with the ideals of social justice and human rights, as he says: "I am anthropocentric enough to insist that nature's rights (biological egalitarianism) are meaningless without human rights (socialism). Eco-socialism says that we should proceed to ecology from social justice and not the other way around" (PEPPER, 1993, p. 3). From our point of view, Pepper's conceptions should be embedded in the larger Ecosocialist movement more than ever, alongside a robust theory of rights, so that Ecosocialism can indeed be built upon a humanist ethos.
Where do Third World countries stand on environmental protection? Hints from Brazil and Latin America
Acknowledging the barriers posed by the current stage of global capitalism to the realization of a human right to a healthy environment also requires acknowledging the existence of an unequal international economic order. As Niheer Dasandi (2013) has suggested, unlike the dominant development literature on poverty measurement around the world - which focuses mainly on domestic factors - a structural analysis shows that there are inequalities in the structure of the international system itself; thus, the international economic order has a great impact on how poverty is distributed across the globe. At the World Economic Forum meeting in Davos in January 2017, many international organizations, such as Oxfam, pointed out the need to discuss and find solutions for a growing scale of inequality in the global political economy (PHILLIPS, 2017).
But how is poverty related to environmental protection? Many poor, low-income countries are still either cautious or sceptical about the capability of concepts such as "green growth" or "sustainable development" to deliver poverty reduction, higher social welfare and job creation (OECD, 2012), which are often the main goals for emerging economies. According to the OECD (2012), one of the reasons for that is the way in which green growth policies are being discussed, as the focus on low-carbon and high-technology solutions does not always help address poverty and other development priorities (CORDERO; ROTH; SILVA, 2005).
In fact, in developing countries, there are still competing views of whether or not sustainable development policies can actually guarantee economic growth and wealth distribution (CORDERO; ROTH; SILVA, 2005). In Latin America, especially, where most countries depend upon agricultural and industrial production, the environment is sometimes simply viewed as a commodity, which seriously undermines environmental protection policies and puts crucial natural resources in danger.
That is why the category "Third World" is important in our analysis. The idea of a Third World - as originally derived from the tripartite division of the world during the Cold War - as a project to denounce global hierarchical structures is still in fashion, even though the term has lost most of its attractiveness (RAO, 2010, p. 1-34). By putting the term into use again, scholars and activists try to show that, despite the shift in terminology (to "developing" or "emerging" countries), the inequalities of the global capitalist system have not vanished, and Third World countries still struggle with injustices such as great economic and social disparities, dependent development, and marginalization from the core of international society (RAO, 2010). This broad view of persistent economic inequalities in the contemporary global economy led the countries of Latin America, at the 1992 Rio Conference (also known as the Earth Summit), to adopt, among the "Rio Principles", the principle of "common but differentiated responsibilities" regarding environmental policies.
Therefore, one can say that Latin American countries have acknowledged that the protection of the environment is a duty of humankind, from a holistic point of view, but have argued that such a duty translates into different levels of responsibility. However, there are authors who point out that, since the Earth Summit, Latin American countries have been too vague or not straightforward enough in defining what their level of responsibility regarding the protection of the environment is, even though many of them have adopted "Environmental Framework Laws" since then (CORDERO; ROTH; SILVA, 2005).
Despite these challenges, we have to consider that many Latin American countries have engaged considerably in the attempt to create norms and standards related to general environmental protection. Erika Castro-Buitrago and Felipe Valencia (2018) defend that, since the Earth Summit (1992), and the Rio+20 (2012), Latin American countries have sought to establish minimum standards of environmental protection.
Regarding the recognition of the environment as a human right, the Inter-American Court of Human Rights has issued a landmark opinion on the matter. The Advisory Opinion responded to a request that the Republic of Colombia submitted in 2016, in which Colombia asked the Court whether a State could be held accountable for human rights violations - under the American Convention on Human Rights (1969) - due to environmental harms emanating from that State (FERIA-TINTA; MILNES, 2016, p. 2-3).
In its landmark opinion on the matter, the Court acknowledged the existence of a human right to a healthy environment as an autonomous right, as well as one interrelated with other human rights, including those enshrined in the American Convention (FERIA-TINTA; MILNES, 2016, p. 5). Therefore, one has to recognize that at least the "talk" of human environmental rights has indeed developed in the region.
However, recent turns in the Latin American political scenario have led to a more "conservative", if not retrogressive, approach to environmental protection policies, which has the potential to seriously undermine efforts to implement the human right to a healthy environment in the region, mainly due to the growth of unrestricted neoliberal agendas in Latin America nowadays. Our main focus shall be on Brazil, due to the fact that since 2016 - after the coup d'état which deposed ex-president Dilma Rousseff from office - the country has witnessed an emerging conservative movement mixed with ultraliberal policies, which have resulted in the reduction of social rights and social services and in a never-before-seen assault on the environment (SANTANA; FERNANDEZ; FERREIRA, 2018; DAMASIO, 2019).
Since the recently elected Brazilian president Jair Bolsonaro assumed office, Amazon rainforest protections have been undermined (PHILLIPS, 2019): in May 2019 the Amazon was being deforested at a rate of 0.19 km² per hour (BORGES, 2019). The election of Bolsonaro represented the victory of ultraliberal capitalist interests over environmental protection causes, a fact which becomes clearer as the Government goes on to dismantle environmental protection policies, despite the extensive environmental protection legislation Brazil possesses (FIORILLO, 2013). Our president has even been publicly called the "Exterminator of the future", as he has worked towards the deconstruction of major Brazilian environmental policies (KAISER, 2019).
At a conference of eight former Brazilian Ministers of the Environment, held at the University of São Paulo, the former ministers warned that Bolsonaro's government was systematically aiming to destroy Brazil's environmental protection policies: "The ex-ministers highlighted the 'depletion' of the environment ministry's powers, including stripping it of jurisdiction over the country's water agency and forestry service and also eliminating three senior officials, including the secretary on climate change" (KAISER, 2019, on-line).
The president has also issued a decree which reduced the number of members of the National Council for the Environment (CONAMA) - a public body which has been crucial to the democratization of environmental policy-making in the country since its foundation back in 1981 - from 96 to 23 (GORDILHO; OLIVEIRA, 2014).
Just recently, Brazil was named and shamed by the international community for letting the Amazon rainforest burn for weeks in a row (Bramwell 2019), while the president remained in complete denial about the fires and rejected the $20 million in foreign aid offered to help fight them (KOTTASOVÁ et al., 2019); at the same time, neither the G7 nor the EU has effectively sanctioned Brazil for its actions, having only threatened to do so.
The Brazilian case reinforces the idea that "there is little evidence […] that the Latin American environment is better protected under neoliberal policies" (LIVERMAN; VILAS, 2006, p. 356, our italics). Thus, the future of environmental protection policies, and subsequently the future of the human right to a healthy environment in Latin America, could be at great risk of extinction in the face of neoliberal and populist policies in their most unfettered forms. Brazil is a telling example of how, independently of constitutional and international legal provisions, the assault on the environment can be explicitly engineered in order to advance capitalist interests as well as private sector demands, as Bolsonaro has demonstrated how inclined his government agenda is towards the agribusiness lobby.
Therefore, both legal scholars and ecosocialist activists acting in the field of human environmental rights protection should be discussing how to radically change the actual structure of the global economy, while at the same time taking into account the contemporary experiences of Third World countries, which for many reasons have chosen not to place environmental protection causes at the heart of their policy-making strategies.
How is Ecosocialism going to be appealing in such environments, dominated by ultraliberal elites who have all but taken over the power of Government and could not care less about the environment? How are ecosocialists going to make sure that their fight for environmental justice is also a fight for social justice and human rights? These are questions we leave for further research on the matter, as our analysis is intended, from our point of view, as a modest contribution aimed at provoking more critical assessments of the human right to a healthy environment among both human rights and ecosocialist scholars.
Conclusions
Our main goal with this research was to explore how the nexus between human rights and environmental protection may be weakened by the capitalist logic of constant economic growth and wealth accumulation, which constantly defies the demands for environmental justice. The doctrine of ecosocialism was chosen as our main analytical tool because we believed it could provide a more critical assessment of current literature on the human right to a healthy environment.
First, we explored the idea of a human right to a healthy environment. As argued, the human right to a healthy environment, both as an 'autonomous right unto itself' and as a derivative of previously established human rights, has been widely acknowledged in international law as well as in many constitutions across the globe. However, one can say that the idea of humans possessing a right to a clean and healthy environment, as a subjective right, has yet to achieve a more robust philosophical and practical framework in order to unleash its full potential. Therefore, despite the right being recognized in its existence and validity, there are still relevant gaps and remaining issues awaiting discussion and resolution.
Secondly, we summarily reviewed the relationship between human rights and Marxist thought. We argued that, ultimately, anyone involved in day-to-day environmental protection actions, as well as academics and legal scholars involved with human and environmental rights issues, has to acknowledge that the capitalist system bears a great amount of responsibility for contemporary ecological problems. Therefore, we believe that a more robust defence of a human right to a healthy environment should regard such a right as a counter-hegemonic strategy based on an essentially Marxian-informed social and rights theory. Such a view of human and environmental rights is totally compatible with ecosocialist claims and should, therefore, be taken into account by ecosocialist theorists who, from our point of view, have yet to come to grips with the general human rights-talk.
Ultimately, we tried to demonstrate that, in the face of the current ecological crisis sponsored by capitalism, many might think of the human right to a healthy environment as nothing but a sweet dream. And that is the case if we buy such a right with a classic, liberal cosmopolitan label, but not if we imagine the human right to a healthy environment as we proposed throughout this article.
Envisioning the human right to a healthy environment from a Marxian-informed point of view, with a focus on social action/struggle, can indeed help Ecosocialists in their claim for radical changes in the contemporary global economy. As argued before, human and environmental rights are not inconsistent with Ecosocialist thought. However, Ecosocialist activists and scholars have yet to fully engage with the rights-talk, having only partially done so thus far.
We also intended to discuss how countries in the Third World, with a focus on Latin America (mainly Brazil), have or have not come to grips with the right to a healthy environment. Our main finding was that, since the Earth Summit (1992) and Rio+20 (2012), Latin American countries have sought to establish minimum standards of environmental protection in the region.
However, recent turns in the Latin American political scenario have led to a more "conservative", if not retrogressive, approach to environmental action, which has the potential to seriously undermine efforts to implement the human right to a healthy environment in Latin America. In Brazil, since the election of Jair Bolsonaro, ultraliberal capitalist interests have taken over environmental causes, a fact which becomes clearer as the Government goes on to dismantle environmental policies in order to satisfy the interests of the agribusiness elite.
Ultimately, we argue that the future of environmental protection policies, and subsequently the future of the human right to a healthy environment in Latin America, could be at great risk of extinction in the face of neoliberal and populist agendas. | 2020-07-30T02:07:54.781Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "17fc998a6ce509f38d1ec32c5afdfc7719dabc74",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.18226/22370021.v10.n2.02",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "6702eb48eed832ac4cee56e4ebbe9318536cd095",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Philosophy"
]
} |
259513108 | pes2o/s2orc | v3-fos-license | 3D SCHOLARLY EDITIONS FOR BYZANTINE STUDIES: MULTIMEDIA VISUAL REPRESENTATIONS FOR HISTORY, ART HISTORY AND ARCHITECTURAL HISTORY
The Byzantine Empire has bequeathed to us a rich legacy of Christian churches, many of which possess historical and cultural significance. Unfortunately, the majority of these structures are currently undergoing a process of decay, having not received adequate preservation efforts. Moreover, the absence of collaboration between researchers in the various fields, each with their own focus on the study of Byzantine churches, presents a pressing need for dialogue and a collective response from these researchers. The region of Laconia, located in the south of Greece, is particularly in need of immediate attention, given the abundance of churches located therein. To address these challenges, the authors propose a novel approach involving the documentation of these churches in a digital format through the presentation of 3D models as scholarly editions that incorporate all available data sets in a multimedia format. This paper delineates several requisite specifications for 3D scholarly editions, which hold the key to solving the twin problems faced by Byzantine churches, namely, their protection and the scarcity of interdisciplinary collaboration amongst researchers. With 3D scholarly editions based on multimedia resources and adequate information management, it will be possible to facilitate collaboration and research between various fields of Byzantine studies, and beyond. Such efforts will serve to ensure that cultural heritage passed down from previous generations will be transmitted to future ones.
INTRODUCTION
Throughout the history of the Byzantine Empire (roughly the 4th to the 15th century), a significant number of Christian churches were constructed, each with their own founders, dedicators, purposes, sizes, and styles. These churches served as a vital infrastructure for social life during the period. These monuments are important not only as valuable historical heritage sites that provide insights into Byzantine society but also due to their lasting influence on the architecture of Eastern Europe and the Middle East over time (Ousterhout, 2019, pp. 679-713). Thousands of these churches remain in the former imperial territories, and some of them continue to be used in present-day communities. However, a minimal number of them have been adequately preserved, while many are undergoing a process of deterioration.
Byzantine churches have traditionally been studied in isolation by different disciplines, including history, art history, architectural history, and archaeology. Each discipline tends to focus on specific aspects of a church, such as its buildings, mural paintings, and inscriptions, leading to limited comprehensive research on the role these elements played in the overall buildings. Consequently, studies on Byzantine churches encounter two significant problems: the inadequate protection of monuments and the scarcity of interdisciplinary collaboration among researchers. Urgent discussions are necessary, with the participation of researchers from various fields, to address these challenges.
In this regard, the region of Laconia in Greece is a prime example of an area that necessitates immediate action (Figure 1). Located in the southern Peloponnese and bordered by two mountain ranges, this region has a rich and complex history, marked by numerous conflicts and changes of rulers spanning several millennia. As Christianity spread gradually, though in a complex manner, throughout the region from the 4th century onwards, hundreds of churches were constructed. According to Drandakis (1996), at least 200 churches with paintings are still in existence in Laconia, a number that rises when including those without paintings (cf. Nagatsuka, 1994; Bender, 2019; Bender, 2022). Most of the remaining churches date from the 13th century onwards, with many built after the Byzantine reconquest of Laconia from the Franks in the second half of that century. Given the challenge of implementing sufficient conservation measures for all the hundreds of churches, our approach is to document these monuments in digital formats, ranging from inscriptions to the buildings themselves. Our proposed method for heritage conservation involves creating 3D scholarly editions that integrate available data sets in a multimedia format. This paper examines the specifications necessary for producing 3D scholarly editions that can help solve the two primary challenges that Byzantine churches face. Our discussion is grounded in our fieldwork conducted in Laconia, where we conducted research in February 2023 and August 2022, following a pilot survey in March 2019, with the permission of the Ephorate of Antiquities of Laconia.
WHAT IS A 3D SCHOLARLY EDITION?
Recent advances in information technology have made 3D measurement more accessible. Light detection and ranging (LiDAR) devices and photogrammetry software have become increasingly popular for surveying archaeological sites and documenting cultural heritage, such as architecture (e.g. Vitale, 2018;Balletti et al., 2021;Guerriero et al., 2022).
As Yang et al. (2020) pointed out, such heritage documentation can be combined with 3D modelling and information management, especially since the 2010s, with the development of the building information modelling (BIM) technique in historic/heritage building information modelling (HBIM). At the same time, applying such 3D models to humanities research requires what are known as scholarly editing practices. Scholarly editing originally emerged to ensure that a text is academically credible, but in the context of recent digital scholarly editions, it is no longer limited to written texts (Sahle, 2016). For example, the integration of 3D models and scholarly-edited text was reported by Leoni et al. (2015). Schreibman and Papadopoulos (2019) explained the need for 3D models used to recreate past events to have a certain reliability, much like a digital scholarly edition of texts.
Although Schreibman and Papadopoulos (2019) focused their discussion on the (re)construction of 3D models, these concepts should be applicable to heritage documentation, especially for fields within the humanities that focus on the past, including Byzantine studies.
For old churches, it is crucial to record not only their current state but also any alterations that have been made. In Byzantine studies, the part of a church before 1453 (the year when Constantinople, the capital of the Byzantine Empire, fell to the Ottomans) is of utmost importance. However, churches often undergo changes in subsequent periods, such as repainting and renovation, making it necessary to carefully observe and identify which parts of a church originated in a particular period.
Only a small number of Byzantine churches are fortunate enough to have a precise date of foundation, and most lack specific clues, such as dedicatory inscriptions, for dating. Therefore, the foundation date of a church often has to be inferred based on the style of wall paintings and/or architectural form. Even when inscriptions have survived, they are often damaged and written in abbreviated and vernacular mediaeval Greek, so they need to be interpreted and edited by a specialist. To use the results of heritage documentation as a source for humanities research, particularly in Byzantine studies, it is necessary to unify 3D models, 2D images, and textual data of these churches, accompanied by scholarly annotations for each element. In recent times, protecting and passing on cultural heritage has become more diverse with multidisciplinary and international initiatives. One example is ARCHES (At-risk Cultural Heritage Education Series), funded by the National Endowment for the Humanities and hosted by smarthistory, a public art history centre with thousands of free videos and essays by experts willing to share their knowledge with learners worldwide. Because of the strong belief that an informed public is essential to ongoing efforts to protect cultural heritage, ARCHES offers a mini-course on endangered heritage around the world (Harris and Zucker, 2017). Open Heritage 3D (https://openheritage3d.org/about), hosted by the Cultural Heritage Engineering Initiative at the Qualcomm Institute at the University of California San Diego, is a collaborative project between CyArk, the University of California San Diego, Historic Environment Scotland, and the University of South Florida Libraries. It provides free access to high-resolution 3D datasets of cultural heritage sites around the world. In contrast to these cases, our 3D scholarly editions, which will unite 3D models, 2D images, and textual data with scholarly annotations, will form a comprehensive database of Byzantine churches in Laconia. It will be highly accessible, particularly to scholars studying the region, with multimedia resources providing visual representations.
3D SCHOLARLY EDITIONS FOR BYZANTINE CHURCH STUDIES
This section discusses the requirements for 3D editions in Byzantine church studies, using the church of Agios Iōannēs Chrysostomos (St. John Chrysostom) in Geraki, Laconia, as a case study (Figure 2). This small church, measuring 4.7 m × 11 m, is valuable because of an inscription that mentions the explicit date of 1450 and because of the good condition of the wall paintings that cover almost the entire interior. The inscription is not from the period of original construction but was installed later, and the building and murals are estimated to date from an earlier time.
According to Moutsopoulos and Dimitrokallis (1981, pp. 44-45), the foundation of the church was approximately 1300, and this assumption is widely accepted (Gerstel, 2001, p. 278;Papalexandrou, 2013, p. 43), although it has been debated (Zias, 1976-78, pp. 331-333). The church was abandoned at some point during the Ottoman occupation and was in a state of disrepair in the early 20th century (Adamantiou, 1908, p. 23). However, since the 1930s, the roof, the pavement, and most of the murals have been restored intermittently, and it is now in reasonably good condition. One of the church's architectural features is the extensive use of spolia, or reused building materials, from the ancient Greek and Roman periods. In addition to the stonework inscribed with Emperor Diocletian's price edict on the doorframe of the entrance to the south, a remarkable amount of spolia is found on the south façade (Figure 3).
During the field survey, we discussed the focus of each discipline and what kind of information should be integrated into the 3D edition of a church. For example, historians are primarily concerned with the reproducibility of written texts, while architectural historians are concerned with the documentation of metric data and the precise form of a building. Below are some of the specific areas of focus for each discipline.
In terms of textual content, this church has numerous epigrams and explanatory captions that accompany the wall paintings. For example, the dated inscription mentioned earlier is located next to a portrait (Figure 4). The text is mostly legible, written in medieval Greek, but it omits certain letters according to the conventions of the time and requires specialist knowledge to be understood. The Greek text, with an English translation, is given by Feissel and Philippidis-Braat (1985, pp. 356-357); the English translation reads: The servant of God, Christophoros Kontoleos, cleric and chartophylax, was laid to rest on … January in the year 6958 (i.e., 1450 AD).
Historians have emphasized the importance of the content of this text in church studies, but they have not paid attention to the visual effects of the inscription, such as its placement and its relationship with the portrait, in its context within the church. By integrating the transcription of this inscription into the 3D scholarly edition and presenting it in a way that is close to reality, it is expected that research progress can be made from this perspective.
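Purely as an illustration of this kind of integration, the following minimal Python sketch shows one way a 3D scholarly edition might bind an edited inscription to a region of a model. The data model, field names and mesh identifier are hypothetical, and the edited Greek text itself is elided here (it is given in the cited edition):

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A scholarly annotation anchored to an element of a 3D church model."""
    target_mesh: str                # identifier of the annotated model region (hypothetical)
    kind: str                       # "inscription", "mural", "spolia", ...
    transcription: str = ""         # edited text, where applicable
    translation: str = ""
    images: list[str] = field(default_factory=list)   # high-resolution 2D captures
    sources: list[str] = field(default_factory=list)  # bibliographic references

# Hypothetical record for the dated dedicatory inscription discussed above
dedication = Annotation(
    target_mesh="church/naos/north_wall/inscription_01",
    kind="inscription",
    transcription="…",  # specialist-edited medieval Greek, elided here
    translation=("The servant of God, Christophoros Kontoleos, cleric and "
                 "chartophylax, was laid to rest on … January in the year "
                 "6958 (i.e., 1450 AD)."),
    sources=["Feissel and Philippidis-Braat 1985, pp. 356-357"],
)
```

Presenting such a record next to the portrait in the 3D space, rather than in a detached apparatus, is what would let the placement and visual effect of the text be studied together with its content.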
Most murals have short captions that explain the scene and person depicted, but these texts are often overlooked and sometimes not transcribed in the research literature. However, these short texts are important for interpreting the role of the mural paintings and for understanding the characteristics of the language at the time. Therefore, appropriate transcription and editing should be done. As an example, Figure 5 shows an image of the Presentation of Mary, painted on the upper south wall of the church. The following caption, located in the upper left, plays a role in explaining the scene to the viewer: "Τά εἰσόδια τῆς ὑπεραγίας / Θ(εοτό)κου" (The Presentation of the Most Holy Mother of God).
In terms of 3D scholarly editions, it will be necessary to present the murals themselves as high-resolution images and include 3D information such as mural locations in the building. As murals in poor condition are not uncommon in Byzantine churches, one role of a 3D scholarly edition may be to propose a restoration plan for them. That is, as Barreiro Díaz et al. (2022) pointed out, virtual restoration of the wall paintings and their annotation would also need to be incorporated into any 3D scholarly edition of Byzantine structures. Many Byzantine churches, including the one under study, suffer from insufficient natural light due to a lack of windows, making it difficult to see murals clearly with the naked eye. As for research resources, high-quality photographs taken under optimal lighting conditions and, at times, enhanced through image processing, must be integrated into a 3D scholarly edition. Moreover, it may be necessary to document metadata on the motifs of the murals and the individuals and objects portrayed in each scene.
Architectural history involves studying history through architecture. Many historical buildings were built manually, resulting in distortions and irregularities in their walls and ceilings. Karydis (2011) suggests that the distortion of vaults and arches in Byzantine architecture was caused by the location of the centring during construction. Therefore, information on warping is essential for understanding the construction activities of the time. Traditional 2D drawings may overlook such distortions, whereas 3D measurements provide an as-built record of the monument. Despite being crucial resources for architectural history studies, the drawings omit key details. Figure 6 is an example and comparison of drawings of the church of Agios Iōannēs Chrysostomos (Moutsopoulos and Dimitrokalis, 1981) with the 3D model from our survey. The vaults in the drawings appear as neat semicircles, whereas in reality, they are more squashed. In the absence of written sources that directly show the activities of craftsmen at the time, 3D scholarly editions can also serve as an essential tool for understanding the construction methods used for arches and vaults. In addition, continuous 3D measurements of the building can help track any deformations over time. This information will allow structural reinforcement, conservation, and restoration work to be carried out at the appropriate time.
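To illustrate how such distortion can be quantified from survey data, the sketch below (Python with NumPy; the vault profile is synthetic, not one of our measurements) fits a least-squares circle to a sampled cross-section and reports the deviation from the ideal semicircular form:

```python
import numpy as np

def fit_circle(points):
    """Least-squares circle fit (Kasa method) to 2D cross-section points."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return (cx, cy), r

# Synthetic vault profile (metres), slightly "squashed" relative to a
# true semicircle, standing in for points sampled from a point cloud
theta = np.linspace(0, np.pi, 50)
profile = np.column_stack([1.2 * np.cos(theta), 1.05 * np.sin(theta)])

centre, radius = fit_circle(profile)
residuals = np.abs(np.hypot(*(profile - centre).T) - radius)
print(f"best-fit radius: {radius:.3f} m, max deviation: {residuals.max():.3f} m")
```

Repeating such a fit on cross-sections taken at successive survey campaigns would turn the as-built record into a simple monitoring tool for ongoing deformation.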
Furthermore, although Byzantine churches were dimly lit, they were by no means neglectful of light. They created sacred spaces that effectively utilized both natural and artificial light to enhance their spiritual atmosphere (Potamias, 2017). However, there are various difficulties in reproducing such light spaces in physical churches. Nevertheless, by incorporating metric data and information on the surrounding environment, 3D scholarly editions can simulate lighting and sound environments that are difficult to recreate in real-life settings. This is only achievable in a virtual space.
Thus, 3D modelling provides accurate and precise information about building forms. It allows architectural historians to understand the construction process without historical sources and to understand how buildings deteriorate over time. 3D scholarly editions can show such information or analysis by architectural historians, which was difficult to understand without an architectural background. Figure 7 shows a conceptual diagram of a 3D scholarly edition. In the following section, we discuss how 3D scholarly editions can serve as a platform in the fields of history, art history and architectural history.
PROGRESS OF THE PROJECT AND WHAT A 3D SCHOLARLY EDITION BRINGS
There are more than 200 Byzantine churches with murals in the region of Laconia, as Drandakis (1996) points out. However, previous studies have either been geographically limited, such as the investigation of the Mani peninsula by Drandakis (1995), or focused on rock-hewn churches, as in Bender's recent work (2022). A comprehensive survey of all the churches in the area is necessary for academic purposes.
To address this gap, we, together with Dr P. Perdikoulias, Dr K. Takeda, Dr F. Condorelli and Ms E. Ota, initiated a full-scale survey in 2022, based on the pilot survey in 2019. Currently, we have finished taking photographs for photogrammetry of the 22 churches and are working on their 3D modelling and organization of the various information needed for 3D scholarly editions. These editions aim primarily to assist researchers, while also promoting new collaborations. Therefore, 3D scholarly editions may only serve as a part of data preservation, rather than an immediate and significant benefit in real restoration work. However, it is also true that there is a lack of resources for conserving and restoring churches throughout the Laconian region. This kind of documentation work can in itself compensate for the situation.
The 3D scholarly editions could themselves become museum content if a future collaboration with the local Ephorate can be established. If made available to the general public, they could increase momentum for protecting the churches themselves among the local population and beyond. Protecting cultural heritage, as Harris and Zucker (2017) point out, requires the consent of many. The Smithsonian Voyager Story could be an effective tool for providing multimedia annotations of 3D scholarly editions to a wider audience, as suggested by the PURE 3D group (https://pure3d.eu/).
A 3D scholarly edition of a Byzantine church encompasses a wealth of information equivalent to or even surpassing that of the physical church. This means that 3D scholarly editions enable researchers to conduct investigations without physically visiting the churches. Numerous Byzantine churches are constantly threatened by decay and harm, leading to the loss of invaluable information at an alarming pace.
In many instances, wall paintings, inscriptions, or the structures themselves, which were documented during their prime, have either vanished or are no longer visible. In addition to the loss of cultural heritage due to natural and human-made disasters, access to churches can be restricted at any time and for any reason, as demonstrated by the COVID-19 pandemic since 2020. Given that the state of many churches continues to deteriorate, it is essential to develop and transfer digital editions of these structures that meet scholarly requirements, with the goal of preserving cultural heritage for future generations.
3D scholarly editions provide a valuable means of promoting interdisciplinary research by consolidating information and enabling researchers from various disciplines to collaborate and share insights regarding the features of a church. The complex nature of historical, art-historical, and architectural-historical research necessitates a high level of expertise and knowledge within each field, which often presents a challenge when researchers attempt to interpret information that pertains to other areas of study. However, by creating a platform that presents a wide range of information in a visually appealing manner, 3D scholarly editions foster a deeper understanding of the different disciplines involved and thus encourage researchers to collaborate more closely. We believe that such interdisciplinary, collaborative research is crucial to comprehending the social function of churches in Byzantine society. Additionally, we hope that such research will help clarify various phases of a church's history, including its founding, renovation, and alteration.
CONCLUSIONS
The above discussion has focused on the churches of the Byzantine Empire, which have had a significant cultural impact on Eastern Europe and the Middle East to the present day, and on the potential significance of 3D scholarly editions of these churches. Despite their being an essential social infrastructure and although a significant number are still in use today, many churches have not been adequately preserved and are in danger of deterioration and extinction. Of course, it would be most desirable to conserve and restore churches in the real world, but it is difficult to do so due to budget and human resource constraints. In this context, we propose a 3D scholarly edition in which 3D modelling is created through 3D measurement, which has become possible with developments in information technology.
In Byzantine studies, history, art history and architectural history each focus on a different aspect of a building, so the information required differs. A 3D scholarly edition, which has been edited after collecting all such information, removes a significant barrier to Byzantine studies, namely distance, and makes it possible to study churches without visiting them. At the same time, the visual presentation of diverse information on a single platform encourages individual researchers to understand and collaborate with those in other disciplines and to contextualise the elements they are focused on in a broader context, such as the relationship between text, image, and space. Finally, a 3D scholarly edition, edited to the best of humanity's current knowledge, is a digital twin of a church in danger of disappearing, documented in as much detail as possible in the virtual world. It is a small step towards passing on the cultural heritage inherited from previous generations to the next.
"year": 2023,
"sha1": "2e920a55ddaa04170dffa5d8e79a84db3903662b",
"oa_license": "CCBY",
"oa_url": "https://isprs-annals.copernicus.org/articles/X-M-1-2023/125/2023/isprs-annals-X-M-1-2023-125-2023.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "089cc3701aca295bf0841b406417c6aebfaa9dc0",
"s2fieldsofstudy": [
"Art",
"History"
],
"extfieldsofstudy": []
} |
259722126 | pes2o/s2orc | v3-fos-license | Experimental Study on the Fracture Toughness of Bamboo Scrimber
In the past decade, bamboo scrimber has developed rapidly in the field of building materials due to its excellent mechanical properties, such as high toughness and high tensile strength. However, when the applied stress exceeds the ultimate strength of bamboo scrimber, cracks occur, which affects the performance of bamboo scrimber in structural applications. Because such cracks tend to propagate, they reduce the load-bearing capacity of the material. Therefore, research on the fracture toughness of bamboo scrimber contributes to determining the material's load-bearing capacity and failure mechanisms, enabling its widespread application in engineering failure analysis. The fracture toughness of bamboo scrimber was studied via the single-edge notched beam (SENB) experiment and the compact compression (CC) method. Nine groups of longitudinal and transverse samples were selected for experimental investigation. The fracture toughness of longitudinal bamboo scrimber under tensile and compressive loadings was 3.59 MPa·m^1/2 and 2.39 MPa·m^1/2, respectively. In addition, the fracture toughness of transverse bamboo scrimber under tensile and compressive conditions was 0.38 MPa·m^1/2 and 1.79 MPa·m^1/2, respectively. The results show a significant distinction between the longitudinal and transverse directions of this material. Subsequently, three-point bending tests and simulations were studied. The failure mode and the force-displacement curve of the numerical simulation were highly consistent with the experimental results, verifying the correctness of the test parameters. Finally, the flexural strength of bamboo scrimber was calculated to be as high as 143.16 MPa. This paper provides reference data for the numerical simulation of bamboo scrimber, which can further promote its application across engineering fields.
Introduction
Bamboo scrimber is produced through a series of processes including cutting, splitting, defibering, dipping, drying and hot pressing, using bamboo as the raw material [1][2][3]. This material exhibits numerous advantages compared to regular bamboo [4,5]. Bamboo scrimber boasts superior performance and can be customized to address issues such as susceptibility to failure and other limitations of bamboo [6,7]. Its applications extend beyond construction and encompass furniture, transportation and road safety fencing [8][9][10].
In recent years, there has been some progress in exploring the mechanical properties of bamboo scrimber [11][12][13]. However, the majority of research on bamboo scrimber has focused on its fundamental mechanical properties [14][15][16]. Nevertheless, there is a dearth of research on the fracture toughness of bamboo scrimber. The fracture toughness plays a crucial role in assessing the material's ability to withstand crack propagation [17,18]. He and Evans et al. [19] conducted tests on birch single-layer boards to assess their type I fracture performance. They improved the fracture toughness of birch composites by reinforcing the adhesive and incorporating glass fiber adhesive. However, only a comparison of the apparent fracture toughness was made between the same samples, and no exact fracture toughness value was measured. Zhou et al. [20] performed four-point bending tests on bamboo scrimber and investigated its damage mechanism. They observed that bamboo scrimber exhibited ductile behavior during bending failure, with brittle fracture occurring in the compressive and tensile zones. Nonetheless, the fracture toughness of bamboo scrimber has not been thoroughly studied. Liu et al. [21] determined the fracture behavior and type I fracture toughness of bamboo scrimber. They verified the effectiveness of a method for directly determining the cohesion parameters by comparing numerical simulations with experiments. Ortega et al. [22] experimentally investigated the tensile and compressive fracture toughness of composite laminates using the compact tension and compression methods. He et al. [23] conducted compact compression experiments on carbon fiber, analyzing the displacement and strain field using the J-integral method to obtain the energy dissipation under different loading rates. The numerical value of the fracture toughness was indirectly obtained through the J-integral method. Li et al. [24] studied the fracture toughness of high-strength steel through three-point bending tests and proposed a relationship between fracture toughness and macro tensile parameters. The fracture toughness and tensile strength exhibited an inverse relationship.
Based on the previous research on the fracture toughness of materials, this study adopted the single-edge notched beam (SENB) and compact compression (CC) methods to study the fracture toughness of bamboo scrimber in different directions. In addition, through the three-point bending tests combined with a finite element simulation, the simulation parameters were obtained and determined, which are of great significance for the study of the fracture mechanical properties of bamboo scrimber.
Specimen Design and Manufacture
This paper aims to determine the fracture toughness of bamboo scrimber in both the longitudinal and transverse directions under tensile and compressive loading. Bamboo scrimber is produced as a unidirectional sheet using a series of hot-pressing processes [25]. Extensive research has demonstrated significant differences in the mechanical properties between the longitudinal and transverse orientations of unidirectional laminates. Therefore, it is crucial to select and process specimens that differentiate between the transverse and longitudinal stripes on the sheet [26], as depicted in Figure 1. Nine groups of specimens were prepared, each oriented in one of these two directions. The dimensions of the tension and compression specimens are defined by ASTM E399-12 [27] and ASTM E1820-08a [28], respectively, as illustrated in Figure 2. A span length of S = 160 mm was chosen for the experiment, adhering to the standard requirement of 1 ≤ H/B ≤ 4. Consequently, the final dimensions of the tensile fracture toughness specimens were determined as follows: height H = 40 mm, length L = 200 mm, width B = 20 mm and crack length a = 20 mm. The crack length was chosen in accordance with the standard range of 0.5 ≤ a/H ≤ 1. To create the initial crack for the tension and compression tests, an 18 mm artificial sharp crack was cut into the specimen. Additionally, a micro-crack measuring 2-3 mm was introduced on the tensile specimen, and a 4-5 mm cut was made on the compression specimen, as depicted in Figure 2. Each specimen was labeled and recorded, with the 9 longitudinal specimens labeled as FT-1~FT-9 and the 9 transverse specimens as MT-1~MT-9. The length of the pre-crack was also recorded beforehand.
Compression fracture toughness specimens are required to adhere to the standard, which specifies that the specimen width (B) and the distance between the hole center and edge (W) should be chosen within the range of 2 ≤ W/B ≤ 4. In this study, the distance between the hole center and edge was W = 30 mm, and the width (B = 15 mm), length and height (H = 40 mm) were maintained [29]. The 9 longitudinal compression specimens were labeled as FC-1~FC-9, and the 9 transverse compression specimens were denoted as MC-1~MC-9. The respective sizes of each longitudinal and transverse specimen were also recorded in advance.
Test Method
The fracture toughness of bamboo scrimber was evaluated using the single-edge notched beam (SENB) test and compact compression (CC) test, performed with an electronic universal material testing machine. The test setup and specific details are depicted in Figure 3. Figure 3a illustrates the longitudinal and transverse specimens, and an additional fixture was custom made for the compression fracture toughness tests, as shown in Figure 3b. To ensure a zero initial position of force loading, it is necessary to maintain a gap between the bolt and the specimen during clamping. Therefore, attention should be given to aligning the two nuts properly during tightening, and the upper and lower clamps should also be aligned to avoid nut interference. In the tensile fracture toughness tests, symmetrical positioning of the span and support points with respect to the indenter is crucial, and the indenter should make contact with the specimen before loading to minimize the error between the load and the corresponding loading displacement. Data collection during the experiments was carried out using a force sensor and a built-in displacement sensor. The indenter velocity for the SENB tests was set to 3 mm/min [30], and for the CC tests, it was set to 0.5 mm/min [23]. is crucial, and the indenter should make contact with the specimen before loading to minimize the error between the load and the corresponding loading displacement. Data collection during the experiments was carried out using a force sensor and a built-in displacement sensor. The indenter velocity for the SENB tests was set to 3 mm/min [30], and for the CC tests, it was set to 0.5 mm/min [23].
Calculation of Tensile Fracture Toughness
The peak loading (P_Q) in the SENB tests is obtained from the force-displacement curves, and the fracture toughness (K_IC) is calculated using Equation (1). Additionally, the energy release rate is calculated using Equation (2). Due to the smaller strain in the thickness direction compared to the length direction, the SENB tests of bamboo scrimber in the longitudinal and transverse directions are classified as plane strain problems. Considering the linearly elastic deformation of bamboo scrimber under longitudinal and transverse tension [1], the critical strain energy release rate is twice the fracture energy, as shown in Equation (3) [31].
The fracture toughness K and energy release rate G of the SENB tests are calculated as follows [32]:

$$K_{IC} = \frac{P_Q S}{B W^{3/2}} f\!\left(\frac{a}{W}\right) \quad (1)$$

$$G = \frac{K_{IC}^2}{E'} \quad (2)$$

where P_Q is the peak force and f(a/W) is the dimensionless geometry function of the single-edge notched beam specimen given in [32]. If it is a plane strain problem, then E' = E/(1 − µ²); if it is a plane stress problem, then E' = E. In addition, µ is Poisson's ratio.

$$G = -\frac{\partial U}{\partial A} = 2\gamma \quad (3)$$

In Equation (3), ∂U is the potential energy, ∂A is the increase in crack area, and γ is the fracture energy.
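As a numerical illustration of Equation (1), the sketch below (Python) evaluates K_IC for the nominal specimen geometry. The geometry function is the standard ASTM E399 expression for the single-edge notched bend specimen, which the paper's Equation (1) is assumed to follow, and the peak force shown is a placeholder, since the measured values appear only in the figures:

```python
from math import sqrt

def k_senb(P_Q, S, B, W, a):
    """Mode I stress intensity for a single-edge notched bend specimen,
    using the standard ASTM E399 geometry function (units: N, m -> Pa*m^0.5)."""
    r = a / W
    f = (3 * sqrt(r) * (1.99 - r * (1 - r) * (2.15 - 3.93 * r + 2.7 * r**2))
         / (2 * (1 + 2 * r) * (1 - r) ** 1.5))
    return P_Q * S / (B * W**1.5) * f

# Nominal specimen geometry from the tests; P_Q = 1300 N is hypothetical
print(k_senb(P_Q=1300.0, S=0.160, B=0.020, W=0.040, a=0.020) / 1e6)  # MPa*m^0.5
```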
The Theory of Compressive Fracture Toughness
In the CC tests, the J-integral method is employed to evaluate the energy release rate. The J-integral is defined for a two-dimensional cracked body, as depicted in Figure 4. A random point along the crack is selected, and a continuous closed loop Γ is formed in a counterclockwise direction [33]. The J-integral is calculated using Equation (4), as follows:

$$J = \int_{\Gamma} \left( V\,\mathrm{d}y - p_{\alpha} \frac{\partial u_q}{\partial x}\,\mathrm{d}s \right) \quad (4)$$

where p_α is the external loading on the crack surface, V is the strain energy density, u_q represents the component of the displacement vector, and ds represents the infinitesimal increment along the integration path Γ [34], as defined in Equation (5).

The value of the J-integral is equal to the value of the energy release rate [35]. Assuming that the fracture behavior of bamboo scrimber under compression is linearly elastic, the J-integral has a corresponding relationship with the stress intensity factor K_j, as given by the following equation:

$$J = \frac{K_j^2}{E'} \quad (6)$$

In addition, the energy release rate is obtained as

$$G = J \quad (7)$$

so the fracture toughness K can be solved. For the force P_i exerted on the compact samples, K is calculated using Equations (8) and (9), where P_i refers to the peak force in the force-displacement curve during the compression process.

The J-integral is the sum of elastic-specific work and plastic-specific work, as follows:

$$J = J_{el} + J_{pl} \quad (10)$$

Therefore, the total J-integral can be calculated as

$$J = \frac{K^2}{E'} + J_{pl} \quad (11)$$

In addition, J_pl is calculated as

$$J_{pl} = \frac{\eta A_{pl}}{B_N b_0} \quad (12)$$

In Equation (12), A_pl refers to the area enclosed by the force-displacement curve and the initial line segment with the same slope passing through the point of maximum loading, as shown in Figure 5. B_N is the thickness of the specimen (B_N = B in this study); b_0 = W − a_0; and η = 2 + 0.522b_0/W.
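A minimal sketch of the evaluation route of Equations (10)-(12) is given below (Python with NumPy). The paper evaluates A_pl graphically from Figure 5; here it is estimated numerically from a synthetic force-displacement record, so the printed value is illustrative only and is not one of the reported results:

```python
import numpy as np

def j_total(force, disp, K, E, mu, B_N, W, a0):
    """Total J per Equations (10)-(12): elastic part from the stress
    intensity factor, plastic part from the area A_pl between the
    force-displacement curve and the initial-slope line (plane strain)."""
    E_prime = E / (1 - mu**2)
    J_el = K**2 / E_prime                                   # elastic term of Eq. (11)
    # Trapezoidal area under the curve, minus the recoverable (elastic) part
    total_area = np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(disp))
    slope = force[1] / disp[1]                              # initial elastic slope
    elastic_area = force.max()**2 / (2 * slope)
    A_pl = max(total_area - elastic_area, 0.0)
    b0 = W - a0
    eta = 2 + 0.522 * b0 / W
    J_pl = eta * A_pl / (B_N * b0)                          # Equation (12)
    return J_el + J_pl

# Synthetic compact-compression record (SI units); illustrative only
disp = np.linspace(0.0, 1e-3, 50)                           # m
force = 1.0e6 * disp - 2.0e8 * disp**2                      # N
print(j_total(force, disp, K=2.39e6, E=11.212e9, mu=0.304,
              B_N=0.015, W=0.030, a0=0.018))                # J/m^2
```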
Experiment Results of Tensile Fracture Toughness
The results of the tensile fracture toughness tests for the longitudinal and transverse specimens are presented in Figures 6a and 6b, respectively. Bamboo scrimber exhibits elastic behavior during the initial stage. However, as the applied loading approaches the peak force, the curve deviates from its linear response. At this stage, cracks may occur within the specimen, absorbing energy and impeding crack propagation. Upon reaching the ultimate bearing capacity, the samples fracture, leading to a steep decline in the curves. The peak forces of the longitudinal samples from each group are extracted and incorporated into Equation (1), along with the measured values of the width (B), height (W) and crack length (a) before the tests. The average longitudinal tensile fracture toughness is 3.59 MPa·m^1/2, whereas the average transverse tensile fracture toughness is 0.38 MPa·m^1/2. The longitudinal tensile fracture toughness of bamboo scrimber is approximately 9.45 times higher than the transverse tensile fracture toughness, indicating a superior load-bearing capacity in tension along the longitudinal direction. In this study, the Young's moduli E_1 = 11,212 MPa and E_2 = 2561 MPa and Poisson's ratio µ = 0.304 are adopted from [36]. The obtained longitudinal and transverse fracture toughness values for bamboo scrimber are inserted into Equation (2). The longitudinal and transverse tensile energy release rates are 1043.26 J/m² and 51.17 J/m², respectively. Consequently, the longitudinal and transverse tensile fracture energy values are 521.63 J/m² and 25.59 J/m², respectively, because the energy release rate is twice the fracture energy. The longitudinal tensile energy release rate for bamboo scrimber is 992.09 J/m² higher than the transverse tensile energy release rate.
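These figures can be cross-checked directly from Equation (2): the short computation below (Python sketch) with the reported toughness values and elastic constants reproduces the stated energy release rates under the plane strain assumption:

```python
# Cross-check of Equation (2): G = K_IC^2 / E' with E' = E / (1 - mu^2)
mu = 0.304
for label, K, E in [("longitudinal", 3.59e6, 11212e6),   # Pa*m^0.5, Pa
                    ("transverse",   0.38e6,  2561e6)]:
    G = K**2 * (1 - mu**2) / E
    print(f"{label}: G = {G:.2f} J/m^2")  # ~1043.26 and ~51.17, as reported
```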
Test Results of Compressive Fracture Toughness
The experimental results of the longitudinal and transverse compressive fracture toughness of bamboo scrimber are shown in Figures 7a and 7b, respectively. The force-displacement curve exhibits a small non-linear segment at the beginning, which is attributed to the initial minor slippage occurring during the clamping of the specimen in the experiment. Subsequently, the displacement shows an almost linear relationship with the increasing force. Moreover, the test results for each group of specimens exhibit a high level of consistency. The peak loading from the longitudinal and transverse tests, along with the measured values of the width (B) and the distance between the hole center and edge (W) before the tests, are substituted into Equations (8) and (9). The average longitudinal fracture toughness is 2.39 MPa·m^1/2, and the average transverse fracture toughness is 1.79 MPa·m^1/2. The longitudinal compressive fracture toughness of bamboo scrimber is approximately 1.34 times greater than the transverse compressive fracture toughness. These results indicate a lower fracture toughness in the transverse direction compared to the longitudinal direction, highlighting a weaker load-bearing capacity under compressive forces in the transverse direction. In this study, the J_el value is obtained by substituting the calculated fracture toughness into Equation (11). The values of A_pl, b_0 and η for each group are then calculated and substituted into Equation (12) to obtain J_pl, and the corresponding J_el + J_pl gives the energy release rate. The average longitudinal energy release rate is 324.67 J/m², and the average transverse energy release rate is 222.45 J/m². The longitudinal compressive energy release rate for bamboo scrimber is 102.22 J/m² higher than the transverse compressive energy release rate.
Three-Point Bending Tests
In order to validate the energy release rate damage parameter used in ABAQUS for bamboo scrimber, the results of the three-point bending tests were compared with the corresponding numerical simulations. The fracture of bamboo scrimber in three-point bending represents a complex damage mechanism, involving the release of stored energy within the material during the fracture process [37]. Five longitudinal bamboo scrimber samples, each with a width of 15 mm, were prepared and labeled as QW-1 to QW-5. Their dimensions are illustrated in Figure 8b. The distance between the two supports was 240 mm, and the loading speed was set at 10 mm/min following the guidelines of GB/T 1936.1-2009 [38]. The specimen placement is illustrated in Figure 8a. The failure pattern of the three-point bending specimens at the end of loading is shown in Figure 8c. The primary failure mode observed in bamboo scrimber is brittle fracture, with cracks propagating inward from the fracture surface. Due to defects in the lamination process, cracking occurs gradually between the layers of bamboo scrimber, eventually resulting in a loss of load-bearing capacity.
Figure 9 displays the force-displacement curves. The graph clearly indicates that, in the initial stage, the curve exhibited linear growth, with the bamboo scrimber specimen experiencing slight bending during the loading process. At the point where the bamboo scrimber reached its ultimate strength, corresponding to the peak force, the specimens underwent brittle fracture. Subsequently, the curve began to decline, indicating a significant loss of load-bearing capacity. The fracture loading of each curve is recorded in Table 1. In addition, the average failure force was calculated as 2552.10 N. The width, height, fracture force and span, measured prior to the tests, were used as inputs in Equation (13) to calculate the bending strength [39]. The average value obtained was 142.78 MPa.

$$\sigma_b = \frac{3 P_{max} l}{2 b h^2} \quad (13)$$
where P_max is the failure loading, and the unit is N; l is the span between the two supports, and the unit is mm; and b and h are the width and height of the specimens, respectively, and the unit is mm.
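Equation (13) can be checked numerically (Python sketch below): with the nominal cross-section and span, the simulated peak force reported in the following sections reproduces the 150.04 MPa quoted there. The experimental average of 142.78 MPa presumably reflects each specimen's measured dimensions rather than the nominal ones, so it is not reproduced exactly by nominal values:

```python
def bending_strength(P_max, l, b, h):
    """Three-point bending strength per Equation (13); N and mm give MPa."""
    return 3 * P_max * l / (2 * b * h**2)

# Simulated peak force with nominal dimensions (b = 15 mm, h = 20 mm,
# l = 240 mm) reproduces the simulated bending strength quoted below
print(bending_strength(P_max=2500.64, l=240.0, b=15.0, h=20.0))  # ~150.04 MPa
```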
Numerical Results
The ABAQUS nonlinear explicit analysis was employed in this study to simulate the three-point bending tests. By incorporating the experimentally determined energy release rate parameters into ABAQUS, the congruity between the deformation patterns and force-displacement curves obtained from the numerical simulations and experimental results can substantiate the reliability of the numerical model and the applicability of the experimental result parameters. Bamboo scrimber consists of a fiber reinforced phase and a matrix phase, and is defined as a bamboo fiber reinforced composite; the failure process was simulated using the Hashin failure criterion [40]. The hexahedral sweeping meshing technique was employed for grid partitioning. Because actual bamboo scrimber is manufactured by hot-pressing laminates, the layers were arranged along the thickness direction based on the geometric shape of the bamboo scrimber [41]. The simulation model and mesh division of the three-point bending tests are shown in Figure 10. The model size was 370 mm × 15 mm × 20 mm. Table 2 summarizes the in-plane and out-of-plane shear parameters of bamboo scrimber. The layup angle was set at 0°, and the layering direction followed the Z-axis. The basic Hashin damage parameters are provided in Table 3. The element type was SC8R, and a total of 14,880 elements were produced. Element deletion was enabled to simulate damage: as the loading progresses, elements reach the critical strain energy release rate, leading to failure and automatic deletion of the elements. The support base and indenter both had a cylindrical shape, with a diameter of 20 mm and a height of 15 mm. The two support bases and the upper indenter were assigned as rigid bodies, with the element type set as C3D8R. Fully fixed boundary conditions were applied to the two supports, and a downward displacement of 10 mm was imposed on the indenter.
Figure 10. Model and mesh of bamboo scrimber three-point bending tests.
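Since Tables 2 and 3 with the actual material parameters are not reproduced in this excerpt, the sketch below only illustrates the general form of the 2D Hashin damage-initiation criteria evaluated for fiber-reinforced laminae; all numerical values are hypothetical placeholders, and a mode is considered initiated once its index reaches 1:

```python
def hashin_initiation(s11, s22, t12, XT, XC, YT, YC, SL, ST, alpha=0.0):
    """Hashin damage-initiation indices for a 2D lamina (failure when index >= 1).
    XT/XC: fiber tensile/compressive strength; YT/YC: matrix tensile/compressive
    strength; SL/ST: longitudinal/transverse shear strength."""
    indices = {"fiber_tension": 0.0, "fiber_compression": 0.0,
               "matrix_tension": 0.0, "matrix_compression": 0.0}
    if s11 >= 0:
        indices["fiber_tension"] = (s11 / XT) ** 2 + alpha * (t12 / SL) ** 2
    else:
        indices["fiber_compression"] = (s11 / XC) ** 2
    if s22 >= 0:
        indices["matrix_tension"] = (s22 / YT) ** 2 + (t12 / SL) ** 2
    else:
        indices["matrix_compression"] = ((s22 / (2 * ST)) ** 2
                                         + ((YC / (2 * ST)) ** 2 - 1) * s22 / YC
                                         + (t12 / SL) ** 2)
    return indices

# Hypothetical stress state and strengths (MPa) -- not the paper's Table 3 values.
print(hashin_initiation(s11=120, s22=8, t12=10,
                        XT=160, XC=90, YT=12, YC=40, SL=15, ST=15))
```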
Comparison of Results
The results of the three-point bending tests and numerical simulations are presented in Figure 11a and Figure 11b, respectively. Figure 11c illustrates an enlarged comparison of the failure patterns observed in both the test and simulation, revealing their near-identical nature. The numerical simulation and experimental force-displacement curves are presented in Figure 11. It can be seen that the simulation results are similar to the test results. Furthermore, the simulation yielded a peak force of approximately 2500.64 N, exhibiting an error of approximately 2.02% when compared to the peak force observed in the experimental results. The ultimate bending strength was calculated to be 150.04 MPa by substituting the model dimensions and the peak force obtained from the numerical simulation into Equation (13). As a result, the outstanding agreement between the simulation results and experimental findings verified the reliability of the numerical model and the applicability of the experimental parameters.
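The quoted error percentages follow directly from the reported peak values; a few lines of Python reproduce the arithmetic:

```python
exp_peak, sim_peak = 2552.10, 2500.64            # N, test vs. simulation
print(f"{abs(sim_peak - exp_peak) / exp_peak:.2%}")              # 2.02%

exp_strength, sim_strength = 142.78, 150.04      # MPa, test vs. Equation (13) on sim peak
print(f"{abs(sim_strength - exp_strength) / exp_strength:.2%}")  # 5.08%
```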
Conclusions
This study aimed to experimentally investigate the fracture toughness and energy release rate of bamboo scrimber, as well as to determine the simulation parameters for the material. The following conclusions can be drawn from this study:

The tensile fracture toughness of bamboo scrimber was determined using the SENB test method for both the longitudinal and transverse directions. The average longitudinal tensile fracture toughness was 3.59 MPa·m^1/2, and the average transverse tensile fracture toughness was 0.38 MPa·m^1/2. Similarly, the average longitudinal tensile energy release rate was 1043 J/m^2, whereas the average transverse tensile energy release rate was 51.17 J/m^2.

The compressive fracture toughness of bamboo scrimber in both the longitudinal and transverse directions was determined using the CC method. The average longitudinal compression fracture toughness was 2.39 MPa·m^1/2, and the average transverse compression fracture toughness was 1.79 MPa·m^1/2. Similarly, the average longitudinal compression energy release rate was 324.67 J/m^2, whereas the average transverse compression energy release rate was 222.45 J/m^2.

The longitudinal tensile fracture toughness of bamboo scrimber was significantly higher than its compressive fracture toughness, demonstrating superior tensile resistance. The higher longitudinal tensile and compressive energy release rates compared to the transverse direction indicate that bamboo scrimber possesses greater strength, stiffness and deformability along the longitudinal axis, while exhibiting relatively weaker behavior in the transverse direction.

Overall, this finding holds significant implications for the application and engineering design of bamboo scrimber, allowing for the better utilization of its performance characteristics. Specifically, in situations where resistance to tensile stresses is crucial, bamboo scrimber can leverage its strengths, facilitating the selection of suitable application orientations and design parameters.
Three-point bending tests were conducted to obtain the force-displacement curves of bamboo scrimber. The average bending strength of bamboo scrimber was calculated as 142.78 MPa using the fracture load and specimen parameters. To determine the Hashin damage constitutive parameters in ABAQUS, the measured parameters were introduced into the three-point bending simulation. The force-displacement curves obtained from the numerical simulations closely matched the experimental curves. It should be pointed out that the failure load of the simulated force-displacement curves can be used to calculate the flexural strength; the error of the calculated results was about 5.08%, which indicates that the numerical simulation reproduced the tests well. This demonstrates the reliability of the numerical model and the applicability of the experimental result parameters, which can therefore be utilized for subsequent numerical analyses and predictions. However, the numerical model has its limitations, such as the possibility of manufacturing defects arising from resin impregnation during bamboo scrimber production. The numerical simulations were conducted under ideal conditions and could not replicate the actual situation completely. Therefore, there is still room for improvement in the generation of random defects in the numerical simulations.
Conflicts of Interest:
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2023-07-12T05:43:01.081Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "71d83eea5494f5f1478136475cfb734a3598aac7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/16/13/4880/pdf?version=1688727135",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3a1c3c55c7204f06ac4a89bf338229e234bbbf58",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251201026 | pes2o/s2orc | v3-fos-license | Adrenocortical Carcinoma Diagnosed by Endoscopic Ultrasound-guided Fine-needle Aspiration
Adrenocortical carcinoma (ACC) is a rare malignancy with a very poor prognosis. A 77-year-old man underwent imaging studies due to poorly controlled hypertension, which revealed a mass measuring 43 mm in diameter near the left adrenal gland. There were no findings indicative of pheochromocytoma. Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) was performed for the preoperative pathological evaluation, and the findings indicated a possibility of ACC. Based on these results, curative surgery was performed. If the diagnosis of pheochromocytoma is excluded, then EUS-FNA for adrenal lesions is relatively safe. It can also be used for the preoperative diagnosis of ACC.
Introduction
Adrenocortical carcinoma (ACC) is a rare malignant tumor with a reported frequency of 0.5-2 cases per 1 million individuals annually (1). It usually occurs in adults in the fourth to fifth decade of life (2). Additionally, a smaller peak is observed in children aged <5 years (3). It occurs more frequently in women than in men; the female to male ratio is between 2.5 and 3 to 1 (4). Approximately 55% of the affected patients present with symptoms indicative of hypersecretion of adrenocortical hormones; 30-40% have symptoms related to the tumor volume, and 10% are asymptomatic (5). The mortality rate associated with ACC is >70% (6). Stages I, II, III and IV accounted for 3%, 29%, 19%, and 50% of the cases, respectively (7). Approximately 40% of the cases have distant metastases (8). The median duration of survival is 101 months for stages I and II, whereas it is 15 months for stages III and IV (2). Advanced cases have a poor prognosis.
ACC is a rare tumor that is sometimes difficult to differentiate when only imaging findings are used. Moreover, owing to its rapid progression, few cases are diagnosed at an operable stage. In this report, we present a case of ACC diagnosed by endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA), which was surgically resectable.
Case Report
The patient was a 77-year-old man who had been treated by another doctor for hypertension for the past 3 years. His hypertension was poorly controlled by oral therapy; therefore, he was referred to a hospital to rule out secondary hypertension. Abdominal computed tomography (CT) revealed a heterogeneous mass, measuring 43 mm in diameter, abutting the left adrenal gland. The margins of the mass were enhanced on contrast imaging; however, the central part was not enhanced, thus indicating either cystic degeneration or necrosis (Fig. 1). T1-weighted magnetic resonance imaging (MRI) showed that the mass had low signal intensity. It also showed a faint high-signal area within the mass. T2-weighted MRI showed that the mass had a relatively high signal intensity. There was a high-signal region with liquid surface formation inside the mass. Fat-suppressed T2-weighted MRI showed no change in the internal properties of the tumor compared with the normal T2-weighted MRI. These results indicated that the lesion was an adrenal mass with cystic degeneration containing blood components (Fig. 2). The results of endocrinological examinations are shown in Table 1. Specifically, under the administration of candesartan cilexetil 12 mg/day, the plasma aldosterone level was 126 pg/mL, the serum potassium level was 3.6 mEq/L, and the plasma renin activity was 0.4 ng/mL/h. The aldosterone-renin ratio (ARR) was relatively high. However, iodine-131 adosterol scintigraphy showed a normal uptake in both adrenal glands (right, 2.1%; left, 1.4%) (Fig. 3). This suggested that it could be an aldosterone-secreting tumor. The level of serum adrenocorticotropic hormone (ACTH) was 10.7 pg/mL and that of serum cortisol was 12.0 μg/dL. A low-dose dexamethasone suppression test revealed subclinical Cushing's syndrome based on the serum ACTH level of 9.8 pg/mL and the unsuppressed serum cortisol level of 7.7 μg/dL. By contrast, the serum dehydroepiandrosterone sulfate (DHEA-S) level was 72 μg/dL, which was within the normal range.
He was referred to our hospital for further diagnosis and treatment. The levels of plasma adrenaline, noradrenaline, and dopamine were 136 pg/mL, 1,100 pg/mL, and 27 pg/mL, respectively. The level of plasma noradrenaline was relatively high. However, the 24-hour urinary metanephrine was 0.15 mg/day, and normetanephrine was 0.23 mg/day, neither of which was elevated. Iodine-123 adrenal metaiodobenzylguanidine (MIBG) scintigraphy showed no increase in the uptake of this tumor (Fig. 4). The possibility of pheochromocytoma was considered to be low. The lesion was relatively large with hormone secretion, and there was a possibility of malignancy; therefore, surgery was considered, and a biopsy was performed for the preoperative evaluation. EUS (GF-UCT260; Olympus Optical, Tokyo, Japan) detected a 38-mm diameter mass adjacent to the left adrenal gland via a gastric approach. It was not continuous with the tail of the pancreas or the left kidney. The surface was smooth, with cystic degeneration inside; however, there was no obvious calcification. EUS-FNA was performed via the gastric approach, and two passes were made with a 22-gauge needle (Acquire; Boston Scientific, Marlborough, USA). Solid lesions at the tumor edges were punctured (Fig. 5). No fluctuations in blood pressure occurred either during or after the procedure. The histopathological findings were as follows: atypical cells with round nuclei and mostly eosinophilic cytoplasm were seen in alveolar-like patterns. There were some swollen nuclei; however, the overall nuclear atypia was mild to moderate, and the mitotic image was unclear. Immunohistologically, the tumor cells were positive for steroidogenic factor-1 (SF-1) and negative for S-100 protein, cytokeratin AE1/AE3, and chromogranin A, and the Ki-67 index was approximately 30% (Fig. 6). Based on the above, the possibility of a primary adrenal tumor, especially ACC, was suspected.
Left adrenal resection was performed, and the left adrenal gland and adrenal tumor were excised as a single mass. The excised material was a 9.5×6.5 cm mass containing a cyst with internal hemorrhaging and normal adrenal glands. Histopathologically, there was a diffuse proliferation of atypical cells with eosinophilic cytoplasm in most regions of the tumor. Partially, there were myxoid areas of mucus deposition in the stroma. Overall, the tumor cells had nuclear atypia, and some mitotic images were observed [>5/50 high power field (HPF)], but no atypical mitotic images were observed. Coagulation necrosis was observed in the tumor. The tumor invaded the capsule; however, extracapsular invasion was unclear. Elastica van Gieson (EVG) staining showed venous and sinusoidal capillary invasion; however, no lymphatic invasion was observed. Immunohistologically, SF-1 was positive, and the Ki-67 index was approximately 15% (Fig. 7). Seven of the Weiss criteria (9, 10) (Table 3) were met: nuclear atypia, an increased mitotic rate, the percentage of clear cytoplasm, coagulation necrosis, invasion of venous and sinusoidal structures, and capsular invasion. Considering the presence of mucus in the tumor stroma, it was diagnosed as an ACC of the myxoid variant type. The final pathological stage was T2N0M0, stage II. After the surgery, his hormone levels, including aldosterone and renin, normalized (Table 1), and the antihypertensive medication could also be reduced. We recommended adjuvant therapy with mitotane, but he declined, so he has thereafter been carefully monitored on an outpatient basis.
Discussion
ACC most commonly presents with features of excessive hormonal secretion or symptoms of compression due to the enlarging mass. An increasing percentage of patients with ACC are diagnosed with incidentaloma during abdominal imaging (12). In the case of adrenal incidentalomas, it is especially important to differentiate primary benign lesions from malignant lesions and to rule out metastasis for the proper management and staging of each case. Most adrenal incidentalomas are benign, and the determination of the malignant potential of an adrenal mass depends on the size of the lesion, imaging features, and hormonal status (13). Tumors measuring greater than 40 mm and heterogeneous enhancement on CT are important discriminators of malignancy in adrenal incidentalomas (14). In this case, the tumor diameter was 43 mm, which was large, and the contrast enhancement on CT was heterogeneous, thus leading to a suspected malignancy. ACC often shows a low-attenuation central region on CT that represents tumor necrosis, irregular
contrast enhancement, calcification, and a thin capsule-like margin surrounding the tumor (15). On ultrasound imaging, ACC usually appears as a rounded or oval well-defined hypoechoic mass, with a few displaying a thick partial or complete echogenic rim (16). Although conventional imaging modalities, as mentioned above, are used to rule out adrenal involvement in different malignancies, false-positive and false-negative results have been observed in approximately 10% of the cases (17). In addition, ACC has imaging findings similar to those of pheochromocytoma, adrenal adenoma, adrenal metastasis, adrenal lymphoma, ganglioneuroma, adrenal hemorrhaging, adrenal pseudocyst, and adrenal hemangioma (Table 2) (18)(19)(20)(21)(22)(23). Although malignancy was suspected in this case, it was difficult to differentiate ACC from other diseases such as pheochromocytoma, adrenal lymphoma, and adrenal metastasis based on the imaging findings alone. Therefore, endocrinological examinations were necessary. When evaluating some primary adrenal neoplasms, the differential diagnoses can be refined by laboratory biochemical testing, including serum cortisol, aldosterone, and metanephrine measurements (24). ACC secretes various hormones, including androgens, cortisol, estrogen, and aldosterone (25). Among adult patients with ACC, 30% present with Cushing's syndrome and 20% with virilization. Feminization and hyperaldosteronism are much rarer, accounting for approximately 2% of the cases (26). In the present case, subclinical Cushing's syndrome was revealed by the low-dose dexamethasone suppression test. Although the accumulation in the lesion was not clear on iodine-131 adosterol scintigraphy, the plasma aldosterone level and plasma renin activity revealed the aldosterone production capacity of the tumor. However, the DHEA-S level was within the normal limits. The endocrinological examinations were somewhat atypical, but ACC was suspected first, considering the imaging findings. Resection was considered, and a pathological examination was performed to confirm the diagnosis before surgery.
Adrenal FNA for the diagnosis of predominantly solid masses gained attention in the 1970s and has been increasingly used in clinical practice following improvements in imaging technologies and deep-seated FNA techniques (27). For decades, adrenal FNA has been primarily performed by interventional radiologists using percutaneous approaches guided by CT. In recent years, EUS-FNA advancements have provided an alternative approach for the biopsy of adrenal mass lesions. EUS-FNA sampling of the adrenal gland started in the 1990s and has become more common with improvements in technology and equipment (28). EUS is useful as a diagnostic imaging modality because of its superior resolution, and it also provides direct access to the left and sometimes to the right adrenal glands. Complications of FNA in the adrenal glands, such as adrenal hematoma, abdominal pain, adrenal abscess formation, and tumor recurrence along the needle track, have been reported but are infrequent (29). In a review of 416 patients who underwent EUS-FNA of the adrenal glands, no major complications were reported, except for one case of adrenal hemorrhaging (30). EUS-FNA of the adrenal gland can thus be performed safely. However, if the target of the puncture is a pheochromocytoma, extreme caution is required because the puncture might cause hemorrhaging and hypertensive crisis (31). When performing EUS-FNA for adrenal tumors, it is important to rule out pheochromocytoma. In addition, we always have phentolamine, an alpha-blocker, on standby to deal with a hypertensive crisis if it occurs. In this case, the iodine-123 adrenal MIBG scintigraphy showed no accumulation in the lesion, and the 24-hour urine metanephrine and normetanephrine levels were within the normal limits, suggesting a low probability of pheochromocytoma. However, we took all possible precautions for the examination and were able to perform the procedure safely.
Pathologically, ACC is one of the most difficult tumors in which to differentiate benign disease from malignancy, and it is often impossible to distinguish a benign tumor from a malignant tumor using only commonly used indices such as nuclear atypia and vascular invasion. A scoring system that combines multiple indices is recommended for the diagnosis of ACC, and the Weiss criteria are the most commonly used (9, 10) (Table 3). They are useful in that they are simple and can diagnose ACC based on the pathological findings alone. ACC can be diagnosed using EUS-FNA; however, it is important to understand that there are certain limitations to this diagnosis. To diagnose ACC accurately, an evaluation of the entire lesion after surgery is necessary. However, EUS is a useful tool to confirm the location of the tumor and adjacent organs, and EUS-FNA allows for pathological evaluations including immunostaining. ACC is often difficult to differentiate from other diseases based on imaging, as it sometimes shows endocrinologically atypical characteristics. Preoperative pathological evaluations are important because they can distinguish benign from malignant tumors and exclude other malignant tumors such as pheochromocytoma, adrenal metastasis, and adrenal lymphoma. Therefore, it is worthwhile to consider tissue sampling by EUS-FNA before performing surgical resection. ACC is a rare tumor, and there are few reports of cases diagnosed by EUS-FNA. To the best of our knowledge, only three cases of ACC diagnosed using EUS-FNA have so far been reported (32)(33)(34). This case is valuable because the ACC was diagnosed using EUS-FNA and then surgically resected.
We encountered a rare case of ACC diagnosed using EUS-FNA. When performing EUS-FNA for adrenal tumors, it is important to rule out pheochromocytoma in advance. There were no major complications at the time of examination, and complete resection was possible based on the diagnosis. An early diagnosis of ACC and surgery are important for improving the prognosis. In conclusion, EUS-FNA is therefore considered to be useful for the preoperative diagnosis of ACC.
Figure 1. Computed tomography findings. Abdominal plain computed tomography showing a mass measuring 43 mm in diameter in contact with the left adrenal gland (arrow) (a). The margins of the mass were enhanced on contrast; however, the central part was not enhanced, indicating cystic degeneration or necrosis (arrow) (b).
Figure 2. Magnetic resonance imaging (MRI) findings. T1-weighted MRI showed that the mass had low signal intensity. It also contained a faint high-signal area within the mass (arrow) (a). T2-weighted MRI showed that the mass had a relatively high signal intensity. Inside the mass, there was a region of high signal intensity with liquid surface formation (arrow) (b). The fat-suppressed T2-weighted MRI showed no change in the internal properties of the tumor compared to the normal T2-weighted MRI (arrow) (c).
Figure 5. Endoscopic ultrasound (EUS) findings. EUS via a gastric approach detected a 38-mm diameter mass adjacent to the left adrenal gland. The surface was smooth with cystic degeneration; however, there was no obvious calcification (a). EUS-FNA was performed via the gastric approach, and two passes were made using a 22-gauge needle (b).
Figure 7. The excised material and histological findings of surgery. The excised material consisted of a 9.5×6.5 cm mass containing a cyst with internal hemorrhage and normal adrenal glands (a). Histopathologically, there was a diffuse proliferation of atypical cells with eosinophilic cytoplasm in most areas of the tumor (Hematoxylin and Eosin staining, ×40) (b). Partially, there were myxoid areas of mucus deposition in the stroma [enlarged image of the yellow rectangle in (b), ×200] (c). Immunohistologically, the Ki-67 index was approximately 15% (×200) (d).
Table 3. Weiss System for Differentiating Benign from Malignant Adrenocortical Neoplasms (9, 10).
The presence of three or more criteria correlates with subsequent malignant behavior. * | 2021-06-22T06:16:07.220Z | 2021-06-19T00:00:00.000 | {
"year": 2021,
"sha1": "e342c17e2f5f6414c657410f754f3113c510c9de",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/internalmedicine/60/24/60_7555-21/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c58805670648d9fe8b4cff79f1da2d902ba952c5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
259521742 | pes2o/s2orc | v3-fos-license | Application of the Scrum Method in the Android-based TPQ Learning Application
ABSTRACT
Introduction
Currently, technology has become a primary need in human life, supporting activities in fields such as business, health, fashion, and education. One product of technology is the mobile application. "Mobile" denotes easy movement from one place to another; a mobile phone, for example, is a telephone terminal that can move from place to place without interruption of communication. Mobile applications can be accessed through wireless devices such as pagers, cell phones, and PDAs, and consist of software built from structured programming logic that serves as a tool on a mobile computer system [1]. In everyday terms, the mobile application itself is more familiar as the Android application. Android is a Linux-based mobile device operating system that includes an operating system, middleware, and applications, and it provides an open platform for developers to create applications. Initially, Google Inc. bought Android Inc., a newcomer that made software for cell phones and smartphones. Android can run on several types of devices developed by many different smartphone vendors [2].
Advances in mobile-based technology can also be used to create a new atmosphere in learning, making the learning process more interesting. One example is building TPQ (Al-Qur'an Education Park) learning media for early childhood, currently implemented at RA Al Hikmah Kandis. TPQ itself covers recognizing, reading, and memorizing hijaiyah letters, daily prayers, and Juz Amma, which are part of the Al-Qur'an and should be introduced to children as early as possible. Interactive learning media are expected to make students more interested in the learning process and less easily bored during long classes. A previous study stated that learning applications were able to attract students' attention and interest because the material was presented as images and animations, as evidenced by 80% of 26 respondents agreeing with this [3]. To support the creation of learning media for recognizing hijaiyah letters, daily prayers, and Juz Amma at RA Al-Hikmah Kandis, a supporting method is needed for building the application; in this case, the author uses the Scrum method.
The Scrum method belongs to the Agile family of models. Scrum is best understood as a way of developing a product that is freer and, as a whole, makes developers work as a unit pursuing a common goal. In Scrum, one loop is called a Sprint, whose duration ranges from one week to one month [4]. Scrum is also flexible for software development, especially for building an information storage system; using Scrum, the development team can make changes as needed. Scrum covers the activities of requirements, analysis, design, development, and delivery [5]. Scrum is not a process or technique for developing a product, but a framework that can contain various processes and techniques. Scrum has three characteristics: it is lightweight, easy to understand, and difficult to master. Empirical process control theory, or empiricism, is the basis of Scrum; it focuses on knowledge gained from experience and on making decisions based on that knowledge. In addition, to reduce risk and increase predictability, Scrum applies an iterative (periodic) and incremental (gradual) approach [6]. There are few differences between Scrum and Agile, but they are not the same: Scrum is part of the Agile methodology, so Scrum is certainly Agile, but Agile is not necessarily Scrum. Agile suits long-term, repeatedly updated applications. For example, during a virus outbreak, a map application such as Google Maps could be extended to show which areas have been affected by the spread of the virus. Another example is exposing application data as an API (Application Programming Interface): if the data are served through an API, users do not need to update the application every time the data change, because the data are refreshed automatically, although accessing the API requires an internet connection [7].
Research Method
The methodology is used to obtain data that can serve as information to support problem solving in line with what is being studied; Figure 1 below shows the stages of the research carried out to determine whether the method used is feasible. This study uses the Scrum methodology in designing the application. Scrum is used by the researchers to build an application that matches the wishes and needs of the resource persons who interact directly with the teaching and learning process, such as teachers and parents, and the application will later run on the Android platform. Students can use the application, but under the supervision of teachers or parents. The stages carried out to support the application of this method in development include identifying problems, designing the system, implementing Scrum on the designed system, testing the application, and drawing conclusions.
Identification of Problems
Problems are identified by conducting interviews with the school regarding obstacles in the teaching and learning process, and the application is then built based on the needs of the school and the teachers. The identified information is used as supporting material in the development of the information system to be built.
Design
From the identified needs, system or application design modeling is carried out in the form of Unified Modeling Language (UML) models. The UML models designed include use case diagrams and activity diagrams, and the Whimsical tool is used to produce an initial sketch of the application to be built.
Use Case Diagrams
In the diagram in Figure 2, there is only one actor, labeled as the user, because the application to be built is a learning media application. The intended users of this application are students, parents, and teachers. The diagram shows that the system has no login, sign-in, or registration step to enter an account before accessing the menus in the application, which is usually found in other applications. The absence of login and registration menus is not without reason: based on problems identified earlier by the author, many users do not know how to register an account or log in with an email, and often forget the passwords they created themselves. The absence of important data in this application also underlies the absence of a login menu.
Activity Diagrams
The interactions between the user and the system are shown in these diagrams, which also depict the work flow of the application system. The activity is divided into the following parts: 2.1 Splash Screen Activity Diagram. The user's initial entry into the application is shown on the splash screen. A splash screen, which often displays the logo of an application, greets the user before they reach the home screen. The flow between the user and the system is constructed as shown in Figure 3.
Hijaiyah Letter Menu Activity Diagram
This diagram depicts an interaction between the user and the system on the home screen. On an instruction from the user requesting the hijaiyah letter menu, the system takes the user to a screen showing a list of all hijaiyah letters; this activity is depicted in Figure 4. Not much different from the previous menu, the daily prayer menu is located in the same layout on the home screen. To see the prayers on the menu, the user simply presses the prayer menu, and the system takes the user to the prayer list screen. This activity is shown in Figure 5.
Figure 5. Activity Diagram Daily Prayer Menu
Juz Amma Menu Activity Diagram. The Juz Amma menu activity diagram is the same as those of the other menus described previously; this menu is also located on the home screen along with the other main menus. To read Juz Amma, the user simply presses the Juz Amma menu on the home screen, and the system takes the user to the list of surahs of Juz Amma, the last juz of the Qur'an.
Design Wireframe
The first stage is the initial idea, a rough description of the application that will be built by the author. This wireframe design becomes the author's reference in building the application later. The design is still rough, and there may later be changes to the appearance or additional features beyond the wireframe (Figure 7).
Scrum implementation
In developing applications using the Scrum method, there are several stages, including the product backlog, sprint planning, daily scrum, and sprint review.
a. Product Backlog
The components that will make up the learning application to be built are listed in a structured way, from easy work to difficult work. The product backlog consists of features or items that will form the application system that interacts with the user. The outline of the features used in the application is listed in the table; a sketch of how such a backlog can be modeled follows below:
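The backlog table itself did not survive extraction, so as an illustration only, the Python sketch below models how backlog items and sprints could be represented; the feature names are taken from the application's main menus described in this paper, while the priority ordering and sprint assignment are hypothetical examples.

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    feature: str
    priority: int  # lower number = scheduled earlier (easy work first)
    done: bool = False

@dataclass
class Sprint:
    number: int
    duration_weeks: int
    items: list = field(default_factory=list)

# Feature names come from the paper; priorities are hypothetical.
backlog = [
    BacklogItem("Splash screen and home screen", 1),
    BacklogItem("Hijaiyah letters menu", 2),
    BacklogItem("Daily prayers menu", 3),
    BacklogItem("Juz Amma menu", 4),
    BacklogItem("Favorites page", 5),
]
sprint_1 = Sprint(number=1, duration_weeks=2, items=backlog[:2])
```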
b. Sprint Planning
This stage compiles the activities to be carried out in building the system within 1 to 4 weeks of work. For the system built by the author, the work was carried out with a sprint duration of 2 weeks, with the plan of developing the application until it is finished and usable. Development was carried out over 3 sprints.

c. Daily Scrum

Daily meetings review what has been completed, what has not been completed, and the obstacles behind any unfinished work. The daily scrum runs for the 2 weeks of each sprint set in the previous sprint planning; therefore, the contents of the daily scrum also follow the predetermined sprint plans.
d. Sprint Review
The results of the sprint that the author worked on for the previous 2 weeks are tested to see whether the system runs well and whether the features match the targets previously set in sprint planning. If a target of the first sprint has not been met, work on it continues in the next sprint.
Result and Discussion
The result of this study is a technology-based learning application for children at RA Al Hikmah Kandis that will be used by teachers and parents to develop learning media for RA Al Hikmah Kandis students in learning hijaiyah letters, daily prayers, and Juz Amma, with the following display:
Application Results
The application results discuss the features present in the application that was built, based on the implementation and design of the research method.
a. Splash screen and home screen display: The splash screen is the initial appearance of the application and welcomes the user when opening the learning application. The home screen is the second display after the splash screen and presents the main features of this application, namely hijaiyah letters, prayers, and Juz Amma.
System Testing
Testing this learning media system includes several required stages, including interface testing. This testing is carried out after the system has been built and aims to verify whether the application runs properly before users use it.
3.2.1 Interface Testing

This interface test aims to ensure that the functions in the application are in accordance with the expected design. The test scenarios and their results are summarized below:

Tested function | Expected result | Status
Description of the selected prayer | The system displays a description of the selected prayer. | Valid
Description of the selected surah | The system displays the description of the selected surah. | Valid
Favorite page: added prayer list | The system displays the added prayer list. | Valid
Favorite page: added surah list | The system displays the added surah list. | Valid
Test Results
Based on the test results on the creative learning media system using the black box testing technique, it can be concluded that the creative learning media system functions as expected. | 2023-07-11T18:26:38.012Z | 2023-06-23T00:00:00.000 | {
"year": 2023,
"sha1": "294b54d6dd789dadcfa17cf16f86456ec3e2ed48",
"oa_license": "CCBYSA",
"oa_url": "https://ejournal.uniks.ac.id/index.php/JTOS/article/download/3036/2395",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "931610119d12829203e7753b6483ea73820ff05c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
108460986 | pes2o/s2orc | v3-fos-license | Dynamic Expression Pattern of SERPINA1 Gene from Duck (Anas platyrhynchos)
SERPINA1 is a member of the serine protease inhibitor family and is increasingly considered to be a regulator of innate immunity in humans and animals. However, the expression and function of the SERPINA1 gene in immune defense against viral infection remain unknown in ducks. The full-length du SERPINA1 cDNA sequence was obtained using reverse transcription polymerase chain reaction (RT-PCR) and rapid amplification of cDNA ends (RACE). It contained 1457 nucleotides, including a 47-bp 5' UTR, a 135-bp 3' UTR, and a 1275-bp open reading frame (ORF), and encodes a 424-amino acid protein. Then, the tissue expression profile of the du SERPINA1 gene was determined. Real-time quantitative polymerase chain reaction (real-time qPCR) analysis revealed that du SERPINA1 mRNA is ubiquitous in various tissues, but higher expression levels were observed in lung and liver tissues. In addition, the expression pattern was investigated when ducklings were challenged with duck hepatitis virus 1 (DHV-1) and polyriboinosinic polyribocytidylic acid (poly I:C). After DHV-1 injection or poly I:C treatment, du SERPINA1 mRNA was up-regulated in the liver and kidney tissues. However, the peak times in the two tissues were not consistent: in kidney, the expression level of SERPINA1 increased immediately after the treatment, while in liver tissue it remained steady until 12 h post-infection. Our results indicate that SERPINA1 has an active role in the antiviral response, and thus improve our understanding of the role of this protein.
Introduction
Serpins, as a superfamily of serine protease inhibitors, play a vital role in complement regulation, inflammation, angiogenesis, tumor suppression, apoptosis and other physiological processes [1,2]. There is ample clinical evidence that mutations in this gene can cause emphysema or liver disease, with serious impacts on the function and homeostasis of tissues and organs [3]. So far, sixteen clades have been identified, designated A through P, with an additional 10 serpins that are unclassified "orphans" [1]. SERPINA1, also known as alpha-1 antitrypsin (AAT), is a vital member of the SERPIN superfamily and plays an important anti-inflammatory role [4][5][6]. Recent findings indicate that alpha-1 antitrypsin may not only prevent damage from proteolysis but may also specifically degrade elastin in tissues and organs and inhibit some immune pathways, thereby regulating innate immunity [4][5][6].
Ducks, as a feasible model, play an important role in studying avian influenza and human hepatitis, which has raised interest in the duck immune system [7]. DHV-1 is a small RNA virus causing high mortality in ducks (Anas platyrhynchos), especially in younger ducklings. In order to reduce the impact of DHV-1, a large number of studies have been reported on this virus [8][9][10][11][12][13][14]. In our previous study, we identified differentially expressed sequence tags (ESTs) of SERPINA1 using a suppression subtractive hybridization (SSH) cDNA library of 3-day-old ducklings treated with DHV-1 [15]. Our study showed that during DHV-1 and poly I:C infection, the expression of SERPINA1 mRNA was upregulated. In this study, we aim to expand those preliminary results by assessing tissue-specific gene expression and the dynamic expression changes of the SERPINA1 gene in response to the virus, thereby providing a theoretical basis for future immune pathological studies. We also confirm that all efforts were made to minimize suffering.
Ducks, Challenge Experiments, and Sample Collection.
The 120 three-day-old domestic ducklings (Jingding duck) were purchased from the Chinese Waterfowl Germplasm Resource Pool (Taizhou, China). RT-PCR was used to make sure that the ducklings had not been exposed to DHV prior to our study [16]. The ducklings were then randomly divided into three groups: 40 ducklings were injected with 0.4 mL of DHV-1 (ELD50 10^-4.6/0.2 mL) according to our earlier trials [17,18], 40 ducklings were injected with 0.4 mL of poly I:C (0.5 mg/mL, InvivoGen, California, USA), and another 40 were treated with normal saline as uninfected controls. The injection dose and injection method are consistent with our earlier trials [18]. After infection, the ducklings carrying the DHV-1 virus showed the typical symptoms of hepatitis, and the related results have been published [17]. Additionally, five healthy three-day-old domestic ducks (Jingding duck, Anas platyrhynchos) were purchased from the Chinese Waterfowl Germplasm Resource Pool (Taizhou, China). Tissues, including liver, spleen, lung, heart, kidney, thymus, breast muscle and leg muscle, were obtained after euthanasia: the ducklings were immediately anesthetized with sodium pentobarbital (intraperitoneal injection; 150 mg/kg) and killed by exsanguination. These birds were referred to as morbid ducklings. The tissues were snap-frozen in liquid nitrogen immediately and stored at -80 °C.
RNA Extraction and Cloning of du SERPINA1 cDNA.
Total RNA was isolated from the liver of ducks with TRIzol (Invitrogen) according to the manufacturer's instructions, and the quality of the isolated RNA was assessed by visualizing the ribosomal RNA bands after electrophoresis on a 1.0% agarose gel. The PrimeScript 1st Strand cDNA Synthesis Kit (TaKaRa, Dalian, China) was used according to the manufacturer's instructions with 1 μg of total RNA as a template to produce cDNA. Then, polymerase chain reaction (PCR) amplification was conducted using LA Taq (TaKaRa) under the following conditions: 94 °C for 5 min; 35 cycles of 94 °C for 30 s, 50 °C for 30 s, and 72 °C for 1 min 30 s; followed by one cycle of 72 °C for 10 min. Rapid amplification of cDNA ends (RACE) was used to obtain the 5' and 3' ends of du SERPINA1 using the SMART RACE cDNA amplification protocol (Clontech, Mountain View, CA, USA) and the 3'-Full RACE Kit (TaKaRa, Dalian, China), respectively. Gene-specific primers used for the amplification of RACE cDNA fragments were designed based on the obtained SERPINA1 nucleotide sequence. The sequence of du SERPINA1 was submitted to GenBank under the accession number KY471047. All the primer sequences mentioned above are shown in Table 1.
Bioinformatics Analysis.

Bioinformatic analysis of du SERPINA1 was performed using software such as DNASTAR, the NCBI website (http://www.ncbi.nlm.nih.gov) and BLAST (https://blast.ncbi.nlm.nih.gov/Blast.cgi). Multiple sequence alignments were created with AlignIR V2.0, and CLC Sequence Viewer 6 was used to construct multiple sequence alignments of the amino acid sequences of SERPINA1 proteins. The neighbor-joining phylogenetic tree was constructed based on the alignment result using the Neighbour-Joining (NJ) algorithm within the MEGA 6.0 program.
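As an aside, the same distance-based Neighbour-Joining construction can be sketched with Biopython (the authors actually used MEGA 6.0, so this is only an equivalent illustration); the alignment file name here is hypothetical:

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# "serpina1.aln" is a hypothetical Clustal-format alignment of the SERPINA1
# amino-acid sequences (e.g. exported from an alignment tool).
alignment = AlignIO.read("serpina1.aln", "clustal")
dm = DistanceCalculator("identity").get_distance(alignment)  # pairwise distances
tree = DistanceTreeConstructor().nj(dm)                      # Neighbour-Joining tree
Phylo.draw_ascii(tree)
```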
Quantitative Real-Time Polymerase Chain Reaction.
Total RNA was extracted from different tissues using TRIzol (Invitrogen), and 1 μg of total RNA was used with the FastQuant RT Kit (with gDNase) (Tiangen, Beijing, China) for reverse transcription. The process included an initial phase at 42 °C for 3 min, then incubation at 42 °C for 15 min, and incubation at 95 °C for 3 min. The cDNA was stored at -80 °C. Real-time qPCR was carried out on an Applied Biosystems 7500 Real-Time PCR System (ThermoFisher Scientific, Waltham, MA, USA) with the following program: 1 cycle at 95 °C for 5 min, followed by 40 cycles of 94 °C for 30 sec, 60 °C for 30 sec, and 72 °C for 30 sec, and a final incubation at 72 °C for 10 min. Relative quantification of gene expression was calculated using the 2^-ΔΔCt method [19]. The GAPDH gene was used as an internal standard for relative expression levels. In addition, at each time point, the mean ΔCt value of control ducks was used to calibrate the analysis of expression patterns in infected ducks; therefore, the results calibrated by the expression of control ducks are displayed in the plots of du SERPINA1 expression in liver and kidney after DHV-1 and poly I:C challenge. All the primers are listed in Table 1.
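The 2^-ΔΔCt method of [19] amounts to a short calculation; the sketch below assumes GAPDH as the reference gene, as in the paper, while the Ct values themselves are hypothetical:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^-ddCt: fold change of the target gene relative to the
    reference gene (GAPDH), calibrated against the control-group means."""
    d_ct_sample = ct_target - ct_ref
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical Ct values: infected duck (24.1 target / 18.0 GAPDH)
# vs. control-group means (26.3 / 18.1).
print(relative_expression(24.1, 18.0, 26.3, 18.1))  # ~4.3-fold up-regulation
```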
Enzyme-Linked Immunosorbent Assay (ELISA) Analysis.
For ELISA analysis, the levels of IFN-, IFN-, IgG and IL-8 were determined using the double-antibody sandwich method. Duck blood was collected, and supernatants were obtained after centrifugation at 3000 rpm for 20 min at 4 °C and subjected to ELISA for the detection of duck IFN-, IFN-, IgG and IL-8. The concentrations of IFN-, IFN-, IgG and IL-8 in the samples were measured with a multifunctional microplate reader (Tecan Infinite M200 PRO; Switzerland) and determined by comparing the optical density of the samples to the standard curve.
Statistical Analysis.

Data were analyzed using SPSS 22.0, and differences among tissue samples were assessed with one-way ANOVA. P values less than 0.05 were considered significant. All data were processed with GraphPad Prism 5.0.

Comparing the full-length aa sequence of du SERPINA1 to the SERPINA1 genes of other species with CLC Sequence Viewer 6, the multiple alignment of amino acid sequences is shown in Figure 1(a). The duck SERPINA1 shared high similarity with SERPINA1 proteins from other vertebrates; compared to the chicken protein from the GenBank database, the du SERPINA1 protein had an identity of 99%. A condensed phylogenetic tree was constructed based on the derived amino acid sequences of SERPINA1 from these organisms (Figure 1(b)). The overall tree topology revealed the following three major groups: bird, mammal and fish.
Tissue-Specific Expression Profile of du SERPINA1 Gene.
Real-time qPCR was carried out to determine the tissue-specific expression profile of the du SERPINA1 gene in healthy ducks. The results showed that du SERPINA1 mRNA was widely expressed in all 8 tissues tested (liver, spleen, lung, heart, kidney, thymus, breast muscle and leg muscle). The expression levels of du SERPINA1 mRNA were significantly higher in lung and liver tissues, whereas the expression level in other tissues was extremely low (Figure 2).
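The between-tissue comparison behind Figure 2 relies on the one-way ANOVA described in the Statistical Analysis subsection (run there in SPSS 22.0); as an illustration only, the equivalent test can be sketched in Python with SciPy, using hypothetical relative-expression values:

```python
from scipy import stats

# Hypothetical relative-expression values for three tissues (n = 5 each).
liver = [1.02, 0.95, 1.10, 0.98, 1.05]
lung  = [1.20, 1.35, 1.18, 1.28, 1.31]
heart = [0.12, 0.15, 0.10, 0.14, 0.11]

f_stat, p_value = stats.f_oneway(liver, lung, heart)
if p_value < 0.05:
    print(f"significant difference among tissues (p = {p_value:.3g})")
```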
The Expression of Cytokines following DHV-1 and Poly I:C Treatment

In order to investigate the changes in cellular and humoral immunity in ducklings during DHV-1 and poly I:C infection, ELISA analysis was used to detect the changes in serum IgG, IFN- and other cytokines. The results showed that the cytokines IFN- and IFN- generally fluctuated, first increasing, then decreasing, and then increasing again (Figures 3(a) and 3(b)), while IL-8 and IgG did not change much overall (Figures 3(c) and 3(d)).
du SERPINA1 Expression after DHV-1 and Poly I:C Challenge

To further define the expression change of SERPINA1 during DHV-1 or poly I:C treatment, time-dependent expression levels in the liver and kidney after DHV-1 or poly I:C challenge were characterized (Figure 4). In the liver, upon DHV-1 treatment, expression of du SERPINA1 increased at 12 h post-infection (h.p.i.) and reached a peak at 48 h.p.i.; after this peak, expression of du SERPINA1 progressively dropped. In the poly I:C treatment group, expression of du SERPINA1 increased at 36 h.p.i., reaching a peak at 72 h.p.i., and then decreased dramatically (Figure 4(a)). The expression of du SERPINA1 in kidney increased immediately, at 4 h.p.i., with both DHV-1 and poly I:C (Figure 4(b)).
Discussion
SERPINA1, also known as alpha-1 antitrypsin (AAT), is a multifunctional protein with proteinase inhibitory, anti-inflammatory and cytoprotective properties. Some studies have shown that AAT can prevent the lethality of TNF or endotoxin in mice [20]. It has also been shown in humans that AAT can decrease the release of TNF-α from LPS-stimulated and unstimulated monocytes in vitro. Moreover, most circulating AAT is synthesized in the liver, and during an acute phase of inflammation or infection, AAT is released rapidly [21].
In healthy ducks, the results of the tissue-specific transcriptional profile of du SERPINA1 showed uneven expression in different tissues.

Figure 2. du SERPINA1 mRNA expression in various tissues (liver, spleen, lung, heart, kidney, thymus, breast muscle and leg muscle). All assays were repeated at least three times, and data are shown as mean ± S.E. (n = 5) from one representative experiment. The expression of SERPINA1 was normalized to GAPDH. Different letters indicate a significant difference (p < 0.05).

Expression in liver was significantly higher than in other tissues, supporting the view that SERPINA1 plays different roles in physiology, particularly in immune responses associated with liver disease; this is similar to the expression pattern of the SERPINA1 gene in humans and mice [22,23]. Besides, high expression of SERPINA1 also existed in lung, which is consistent with previous studies showing that SERPINA1 may attenuate inflammation in ventilator-induced lung injury by modulating inflammation-related protein expression [24,25].

Cytokines can regulate the physiological functions of leukocytes, mediate inflammatory responses, participate in immune responses, and repair tissue. In order to further study the changes in the host immune and inflammatory response caused by DHV-1 and poly I:C infection, the levels of cytokines in serum were tested during the course of infection. IFN- and IFN-, as immunomodulatory pleiotropic factors, have been used as indicators of cell-mediated immunity in infected organisms [26], and IL-8 acts as an inflammatory cytokine that participates in the host's inflammatory response. In this study, IFN- and IFN- peaked at the early stage after DHV-1 and poly I:C infection, while the changes in IL-8 and IgG were not significant [27,28]. These results reveal that cells of the immune system were directly damaged after viral infection and that T-cell immunity played a major role in the process of DHV-1 and poly I:C infection, supplemented by B-cell immunity. Subsequently, in order to explore the dynamic expression pattern of du SERPINA1 in ducks during viral infection, du SERPINA1 mRNA expression after DHV-1 or poly I:C challenge was detected. Previous studies have shown that DHV-1, rather than other DHV types, can specifically infect duck embryo liver and duck embryo kidney cells [29,30]; therefore, liver and kidney tissues were selected as candidate tissues. During the early period after viral injection, the expression of SERPINA1 showed no significant difference in the liver, while it rapidly increased in the kidney. This may be because, when encountering the viral antigen, the host's various non-specific defenses, such as natural killer cell immunity, secrete more IFN- and IFN- [31] that act on the SERPINA1 regulation pathway, leading to an increase in early SERPINA1 gene expression. After reaching the peak, the expression of du SERPINA1 decreased to a normal level, which may prevent an excessive inflammatory reaction. In pigs [32], when SERPINA1 expression was upregulated after PCV2 infection, the immune factor was transported to immune-related tissues through the blood circulation, which prevented excessive inflammatory reaction and maintained the integrity and normal function of the immune tissue. With increasing age, the immune system of ducklings becomes more and more adaptive; studies have shown that the immune level of 7-day-old chickens is comparable to that of adult chickens [6], and a mature immune system will produce a large number of cytokines to resist viral attack. These results suggest that SERPINA1 might be associated with the host defence response against viral infection. The specific mechanism is unclear but may involve autocrine regulation of SERPINA1 mRNA synthesis upon virus invasion.

Previous studies have proved that SERPINA1 has a vital role in protecting host tissue against injury at sites of inflammation. Increasing evidence shows that SERPINA1 (AAT) may exhibit biological activity independent of its protease inhibitor function [33,34]. From our results, we speculate that when attacked by a virus, the host needs more SERPINA1, which is consistent with previous research results. To conclude, our results provide a better understanding of du SERPINA1 function in immunity during viral infection.
Conclusions
In this study, we first cloned and characterized the SERPINA1 gene in duck and demonstrated that the du SERPINA1 gene shares high similarity with SERPINA1 proteins from other vertebrates. Transcriptional analyses showed ubiquitous expression of the SERPINA1 gene in the eight examined tissues. Expression analyses showed that the SERPINA1 gene was significantly upregulated in vivo after DHV-1 or poly I:C stimulation. Taken together, our results provide a better understanding of the dynamic expression changes of the SERPINA1 gene in response to virus and thereby provide a theoretical basis for future immune pathological studies.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2019-04-12T13:41:36.556Z | 2019-03-19T00:00:00.000 | {
"year": 2019,
"sha1": "fdc920a9062129273759421f48284aef66ff69a7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2019/1321287",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fdc920a9062129273759421f48284aef66ff69a7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
16153849 | pes2o/s2orc | v3-fos-license | Chemo-spectrophotometric evolution of spiral galaxies: IV. Star formation efficiency and effective ages of spirals
We study the star formation history of normal spirals by using a large and homogeneous data sample of local galaxies. For our analysis we utilise detailed models of chemical and spectrophotometric galactic evolution, calibrated on the Milky Way disc. We find that star formation efficiency is independent of galactic mass, while massive discs have, on average, lower gas fractions and are redder than their low mass counterparts; put together, these findings convincingly suggest that massive spirals are older than low mass ones. We evaluate the effective ages of the galaxies of our sample and we find that massive spirals must be several Gyr older than low mass ones. We also show that these galaxies (having rotational velocities in the 80-400 km/s range) cannot have suffered extensive mass losses, i.e. they cannot have lost during their lifetime an amount of mass much larger than their current content of gas+stars.
INTRODUCTION
The Star Formation Rate (SFR) is the most important and the least well understood ingredient in studies of galaxy evolution. Despite more than 30 years of observational and theoretical work, the Schmidt law still remains popular among theoreticians and compatible with most available observations (e.g. Kennicutt 1998a).
It is well known that there are systematic trends in the SF history along the Hubble sequence: the ratio Ψ/⟨Ψ⟩ of the current SFR Ψ to the past average one ⟨Ψ⟩ (integrated over the galaxy's age) increases as one goes from early- to late-type galaxies, albeit with a large dispersion within each morphological type. The Hubble sequence seems to be determined by the characteristic timescale for star formation, with early-type galaxies forming their stars on shorter timescales than late types. However, this simple description of the Hubble sequence fails to answer two important, and probably related, questions: i) what determines the characteristic timescale for SF in galaxies? and ii) what is the role (if any at all) of a galaxy's mass? An anti-correlation has been found between the SFR per unit total mass and galactic luminosity; part of this trend may reflect the aforementioned dependence of SFR on Hubble type, but it may also be that the trend is fundamentally related to galactic mass. In a recent work, Bell and de Jong (2000) use a large sample of spiral galaxies with resolved optical and near-infrared photometry and find that it is rather surface density that drives the star formation history of galaxies, while mass is a less important parameter.
Photometric studies alone cannot lift the age-metallicity degeneracy, namely the fact that young and metal-rich stellar populations may be redder than old and metal-poor ones. Studies of the chemical aspects of galaxy evolution (i.e. gas fractions, star formation rates, metal abundances) can help to tackle the problem from a different angle, but they are also limited by the unknown history of gaseous flows from and to the system. A study combining elements from both photometric and chemical evolution offers the best chance to understand this complex situation.
In this work we study the star formation history of spirals by using detailed models of chemical and spectrophotometric galactic evolution. The numerical code has been presented in detail elsewhere (Boissier and Prantzos 1999, hereafter Paper I) and is only briefly described in Sec. 2. It has been successfully applied to the modelling of global properties of spirals (Boissier and Prantzos 2000, hereafter Paper II), as well as to the corresponding abundance and photometric gradients (Prantzos and Boissier 2000, hereafter Paper III). Based on the recent work of Boselli et al. (2000), we use here a large and homogeneous sample of data for normal spirals (presented in Sec. 3), ideally suited for the purpose of this work. Our results and the comparison to the observations are presented in Sec. 4. They suggest that galactic mass is the main driver of galactic evolution, although local surface density may also play a role. We argue that massive galaxies are, on average, older than less massive ones, based on the fact that star formation efficiencies seem to be independent of galactic mass, while gas fractions are systematically low in massive spirals. Based on the observed star formation efficiencies and gas fractions, we derive in Sec. 5 corresponding galactic ages by using simple analytical models of galactic chemical evolution, taking into account gaseous flows. We find that low mass spirals must be several Gyr younger than massive ones. Also, we find that spirals with rotational velocities in the 80-400 km/s range must not have suffered extensive mass losses during their past history. Our conclusions are summarized in Sec. 6.
The model and the Milky Way
In Boissier and Prantzos (1999, Paper I) we presented a model for the chemical and spectrophotometric evolution of the Milky Way disc. We recall here the main ingredients of the model. The disc is simulated as an ensemble of concentric, independently evolving rings, slowly built up by infall of primordial composition. For each ring we solve the classical equations of chemical evolution (e.g. Pagel 1997) without the assumption of the Instantaneous Recycling Approximation. We use: stellar lifetimes from Schaller et al. (1992); the yields of Woosley and Weaver (1995) for massive stars and Renzini and Voli (1981) for intermediate mass stars; SNIa producing Fe with a rate given in Matteucci and Greggio (1986); and the Initial Mass Function of Kroupa et al. (1993) between 0.1 M⊙ and 100 M⊙. The adopted star formation rate (SFR) varies with gas surface density Σ_g and radius R as:

Ψ(t, R) = α Σ_g(t, R)^1.5 V(R) R^-1 ,    (1)

where V(R) is the rotational velocity, assumed to be 220 km/s in the largest part of the Milky Way disc. This radial dependence of the SFR is suggested by the theory of star formation induced by density waves in spiral galaxies (e.g. Wyse and Silk 1989). The efficiency α of the SFR (Eq. 1) is fixed by the requirement that the observed local gas fraction, at R = 8 kpc from the Galactic centre (σ_gas ∼ 0.2), is reproduced at T = 13.5 Gyr (our adopted value for the age of the local disc). The disc is built up by infall with a rate exponentially decreasing in time, and a characteristic time-scale τ_inf(R) increasing with radius (so as to mimic the inside-out formation of the disc). The relation between τ_inf(R) and Σ(R) (total surface density) is shown in Fig. 1 and is justified a posteriori, since it is crucial in shaping the various radial profiles (of gas, SFR, abundances, etc.) of the disc, which compare favourably to observations. The spectro-photometric evolution is followed in a self-consistent way, with the metallicity-dependent stellar tracks from the Geneva group (Schaller et al. 1992; Charbonnel et al. 1996) and stellar spectra from Lejeune et al. (1997). Dust absorption is included according to the prescriptions of Guiderdoni et al. (1998), assuming a "sandwich" configuration for the stars and dust layers.
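As a rough illustration of Eq. 1, the following Python sketch evaluates the SFR surface density for a given gas surface density, rotational velocity and radius. The numerical values in the example, including the α used, are placeholders and not the calibrated value of the paper.

```python
def sfr_surface_density(sigma_gas, v_rot, radius, alpha):
    """Star formation rate per unit area, Eq. 1: Psi = alpha * Sigma_g^1.5 * V(R)/R.

    sigma_gas : gas surface density [Msun/pc^2]
    v_rot     : rotational velocity at R [km/s]
    radius    : galactocentric radius [kpc]
    alpha     : efficiency, in the paper calibrated on the local MW gas fraction
    """
    return alpha * sigma_gas**1.5 * v_rot / radius

# Illustrative evaluation near the solar radius (R = 8 kpc, V = 220 km/s):
print(sfr_surface_density(sigma_gas=10.0, v_rot=220.0, radius=8.0, alpha=0.0025))
```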
It turns out that the number of observables explained by the model is much larger than the number of free parameters. In particular the model reproduces present day "global" properties (amounts of gas, stars, SFR, and supernova rates), as well as the current disc luminosities in various wavelength bands and the corresponding radial profiles of gas, stars, SFR and metal abundances; moreover, the adopted inside-out star forming scheme leads to a scalelength of ∼4 kpc in the B-band and ∼2.6 kpc in the K-band, in agreement with observations (see paper I).
Scaling relations
For a simplified extension of the model to the case of other disc galaxies we adopt the "scaling properties" derived by Mo, Mao and White (1998, hereafter MMW98) in the framework of the Cold Dark Matter (CDM) scenario for galaxy formation. For the details, the reader should refer to MMW98 and Boissier and Prantzos (2000, Paper II). Discs form inside non-baryonic dark matter haloes of various masses and concentration factors. Assuming constant disc-to-halo mass ratios (here taken to be m_d = 0.05), discs are characterized by two parameters: V_C, the circular velocity, measuring the mass of the disc, and λ, the spin parameter, measuring its angular momentum. A disc is described by its scale-length R_d and its central surface density Σ_0, which can be related to the ones of our Galaxy (designated by MW) by:

R_d / R_d,MW = (λ / λ_MW) (V_C / V_C,MW)    (2)

and

Σ_0 / Σ_0,MW = (λ / λ_MW)^-2 (V_C / V_C,MW).    (3)

The total mass of the disc M_d is proportional to V_C^3, but independent of λ. The λ distribution deduced from numerical simulations (see MMW98 and references therein) presents a maximum at λ ∼ 0.04-0.05 and extends from ∼0.01 to ∼0.20. Since the value of λ_MW is not far from the peak value, we explored here values of λ in the range 1/3 λ_MW to 3 λ_MW. Assuming that λ and V_C are independent, we constructed models with velocities in the observed range 80 to 360 km/s. We calculated the velocity profile resulting from the disc plus an isothermal dark halo, and we used Eq. 1 to calculate self-consistently the SFR (with the same coefficient α, "calibrated" on the Milky Way).
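The scaling relations (2) and (3) are simple enough to encode directly. The sketch below uses illustrative (not the paper's) Milky Way reference values for R_d and Σ_0, and checks that the implied disc mass, proportional to Σ_0 R_d^2, scales as V_C^3 and is independent of λ.

```python
import numpy as np

# Milky Way reference values used only to anchor the scaling relations;
# the R_d and Sigma_0 numbers are assumptions of this sketch.
V_C_MW, LAMBDA_MW = 220.0, 0.05   # km/s, spin parameter
R_D_MW, SIGMA0_MW = 2.6, 1000.0   # kpc, Msun/pc^2

def disc_scalings(v_c, lam):
    """Eqs. 2 and 3: scale-length and central surface density of a disc
    with circular velocity v_c and spin parameter lam, relative to the MW."""
    r_d = R_D_MW * (lam / LAMBDA_MW) * (v_c / V_C_MW)
    sigma0 = SIGMA0_MW * (lam / LAMBDA_MW) ** -2 * (v_c / V_C_MW)
    return r_d, sigma0

# Disc mass scales as Sigma_0 * R_d^2, i.e. as V_C^3, independent of lambda:
for lam in (LAMBDA_MW / 3, LAMBDA_MW, 3 * LAMBDA_MW):
    r_d, s0 = disc_scalings(360.0, lam)
    print(lam, s0 * r_d**2)  # same product for every lambda
```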
At this point it should be noted that our scaling relations (2 and 3) are based on the assumption that angular momentum is conserved during the evolution of the disc. Numerical hydrodynamical simulations of galaxy formation (e.g. Navarro and Steinmetz 1999 and references therein) do not support this idea; indeed, it is found that as the baryonic halo gas cools down and collapses to the disc it loses most of its angular momentum. In those conditions, the final configuration of the disc cannot be related in a simple way to the initial λ of the halo. However, such simulations lead, in general, to disc sizes much smaller than observed. It is possible that star formation and feedback are not properly described in those simulations. For instance, Sommer-Larsen, Gelato and Vedel (1999) found that delaying the cooling of the gas reduces the loss of angular momentum. Since the situation is not yet clear, we make here the assumption of disc evolution at constant angular momentum, justified a posteriori by the fact that the resulting disc sizes are in agreement with observations (see Paper II). For a more detailed discussion of these issues, see also Cole et al. (2000).
Infall timescales
The infall rate in our models decreases exponentially with time, with a time-scale τ_inf that depends on the surface mass density and is calibrated on the Milky Way disc (see Fig. 1); as explained in Paper II, we found that a dependence on the total mass of the galaxy is necessary in order to reproduce the observations of present-day discs. We consider that all the galaxies started forming their stars 13.5 Gyr ago and that the formation of their exponential discs (characterized by Σ_0 and R_d) was completed at the present epoch. Notice that the value of T = 13.5 Gyr plays no essential role in the results of this work (values in the 10-15 Gyr range would lead to quite similar conclusions); these results depend mainly on the infall timescales. The more massive galaxies are characterized by shorter formation time-scales, while less massive galaxies form over longer ones (Fig. 1). This assumption turned out to be a crucial ingredient of our models, allowing us to reproduce an impressive amount of observed properties of spirals that depend on mass (or V_C): colours, gas fractions, abundances and integrated spectra. Those observables are thoroughly presented in Paper II, while some of them are revisited in this work. In Prantzos and Boissier (2000, Paper III in the series), we show that this model also reproduces fairly well the observed colour and abundance gradients in spirals.
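A minimal sketch of the exponential infall prescription is given below, assuming the rate is normalized so that the surface density accreted over the disc age equals the final Σ; that normalization is our own bookkeeping, not a formula taken from Paper II.

```python
import numpy as np

T_DISC = 13.5  # Gyr, adopted age of the discs

def infall_rate(t, tau_inf, sigma_final):
    """Exponentially decreasing infall rate f(t) = A * exp(-t/tau_inf),
    with A fixed so that the accreted surface density over T_DISC equals
    sigma_final. For tau_inf < 0 the rate increases exponentially with
    time-scale |tau_inf| (see Fig. 1)."""
    norm = sigma_final / (tau_inf * (1.0 - np.exp(-T_DISC / tau_inf)))
    return norm * np.exp(-t / tau_inf)

t = np.linspace(0.0, T_DISC, 1000)
for tau in (3.0, 8.0, -5.0):  # Gyr; short, long, and increasing infall
    accreted = np.trapz(infall_rate(t, tau, sigma_final=50.0), t)
    print(tau, accreted)  # ~50 Msun/pc^2 in every case
```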
We notice that Bell and de Jong (2000) suggested recently that the observed colour gradients of disc galaxies correlate very well with the local surface density, in the sense that inner and denser regions are older. Our assumption about the infall timescale agrees, at least qualitatively, with their findings.
We stress that infall timescales are inputs to the model, and adjusted as to reproduce observations. Star formation timescales are outputs of the model (resulting from the adopted prescriptions for infall and SFR) and are presented in Sec. 4.1.
The observational sample
The sample of galaxies analysed in this work, which has been extracted from the large multifrequency database of nearby galaxies of Gavazzi and Boselli, is extensively described in Boselli et al. (2000). Here we give just a brief description of the sample selection criteria: we refer the reader to Boselli et al. (2000) for the detailed references on the data and on their analysis.
Galaxies analysed in this work are taken from the Zwicky catalogue (CGCG, Zwicky et al. 1961-1968) (m_pg ≤ 15.7). They are either late-type (type > S0a) members of 3 nearby (recession velocity cz ≤ 8000 km s^-1) clusters (Cancer, A1367, Coma), or located in the relatively low-density regions of the Coma-A1367 supercluster (11h30m < RA < 13h30m; 18° < dec < 32°) as defined in Gavazzi et al. (1999a). To extend the present study to lower luminosities, we include in the sample the late-type Virgo cluster galaxies brighter than m_pg ≤ 14.0 listed in the Virgo Cluster Catalogue as cluster members (VCC, Binggeli et al. 1985). Furthermore, VCC galaxies with 14.0 ≤ m_pg ≤ 16.0 included in the "ISO" subsample described in Boselli et al. (1997a), and CGCG galaxies in the region 12h < RA < 13h; 0° < dec < 18° but outside the VCC, are considered.

[Figure 1. Infall time-scale τ_inf as a function of the local surface density Σ. The dependence on Σ is calibrated on the Milky Way (solid curve, V_C = 220 km/s). The dependence on V_C is chosen so as to reproduce the properties of the homogeneous sample of spiral galaxies presented in Sec. 3. When τ_inf < 0, the infall is exponentially increasing with a time-scale |τ_inf|.]
To avoid systematic environmental effects we consider the subsample of late-type galaxies whose HI deficiency (defined as d = log(⟨HI⟩/HI), the logarithm of the ratio of the average HI mass of isolated objects of similar morphological type and linear size to the observed HI mass; see Haynes & Giovanelli 1984) is d ≤ 0.3, typical of unperturbed, isolated galaxies. The final combined sample comprises 233 mainly "normal" galaxies.
We assume a distance of 17 Mpc for the members (and possible members) of Virgo cluster A, 22 Mpc for Virgo cluster B, and 32 Mpc for objects in the M and W clouds (see Gavazzi et al. 1999b). Members of the Cancer, Coma and A1367 clusters are assumed to be at distances of 62.6, 86.6 and 92 Mpc, respectively. Isolated galaxies in the Coma supercluster are placed at their redshift distance, adopting H_0 = 75 km s^-1 Mpc^-1.
For the 233 optically selected galaxies, data are available in several bands as follows: 100% have HI (1420 MHz) data and 99% H-band (1.65 µm) data, while a much coarser coverage exists in the UV (2000 Å) (29%), CO (115 GHz) (38%) and Hα (6563 Å) (65%), as shown in Tables 1 and 2 of Boselli et al. (2000). The distribution of the sample galaxies among the different morphological classes is given in Table 3 of Boselli et al. (2000).
As previously discussed, the present sample is optically selected and thus can be biased against low surface brightness galaxies; the inclusion of the Virgo cluster should in principle favour the presence of some low surface brightness galaxies, not easily detectable at larger distances. Being volume limited, the sample is not biased toward bright, giant spirals, but also includes dwarfs and compact sources. Its completeness at different wavelengths makes it a unique sample, suitable for statistical analysis.
Data analysis
Hα and UV fluxes, corrected for extinction (and [NII] contamination) as described in Boselli et al. (2000), are used to estimate star formation rates through the population synthesis models given in that work; a power-law IMF of slope -2.5 with lower and upper mass cutoffs of 0.1 M⊙ and 80 M⊙, respectively, is adopted. Its high-mass part is quite similar to that of the IMF of Kroupa et al. (1993) adopted in our models. Given the uncertainties in the UV and Hα flux determination, in the extinction correction and in the transformation of the corrected fluxes into SFRs via population synthesis models, we estimate an uncertainty of a factor of ∼3 in the determination of the SFR.
The total gas content of the target galaxies, HI+H2, has been determined from HI and CO measurements. CO (2.6 mm) fluxes have been transformed into H2 masses assuming a standard CO-to-H2 conversion factor of 1.0 × 10^20 mol cm^-2 (K km s^-1)^-1 (Digel et al. 1996). For galaxies with no CO measurement, we assume that the molecular hydrogen content is 10% of the HI, as estimated for isolated spiral galaxies by Boselli et al. (1997b). The total gas mass has been corrected for the He contribution by 30%. HI fluxes are transformed into neutral hydrogen masses with an uncertainty of ∼10%. The average error on CO fluxes is ∼20%; the error on the H2 content, however, is significantly larger (and difficult to quantify) due to the poorly known CO-to-H2 conversion factor (see Boselli et al. 1997b).
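The gas-mass bookkeeping described above can be summarized in a few lines. Only the 10% H2 fallback and the 30% He correction are taken from the text; the interface and example numbers are illustrative.

```python
def total_gas_mass(m_hi, m_h2=None):
    """Total gas mass: HI + H2 (or 10% of the HI when no CO measurement
    is available), increased by 30% to account for helium. Masses in Msun."""
    if m_h2 is None:          # no CO detection available
        m_h2 = 0.10 * m_hi
    return 1.30 * (m_hi + m_h2)

print(total_gas_mass(1.0e9))           # HI only -> 1.43e9 Msun
print(total_gas_mass(1.0e9, 2.0e8))    # HI + measured H2 -> 1.56e9 Msun
```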
Galaxy colours have been determined from broad-band near-IR and optical photometry. Near-IR (H band) images are available for 230 of the 233 sample galaxies, while B images or aperture photometry are available for 214 objects. H and B magnitudes have been corrected for extinction; no correction has been applied to galaxies of type later than Scd. The estimated error on the B and H magnitudes is 15%.
Rotational velocities have been determined from the width of the HI line at 21 cm, corrected for inclination as in Gavazzi (1987). To avoid large systematic errors, we estimate rotational velocities only for galaxies with inclinations > 30° and with an accurately determined 21 cm line width (double- or single-horned profile with high signal-to-noise). The uncertainty on the determination of the rotational velocity is ∼15 km s^-1.
In summary, we have a homogeneous sample of 233 normal (non-perturbed) disc galaxies. For 96 of them, we evaluated all the quantities of interest in this work: blue magnitude M_B, total mass M_T, total gas mass M_g, star formation rate Ψ and rotational velocity V_C. We shall see below how our models fit those properties and what kind of inferences can be made on the star formation history of those galaxies.

[Figure 2. The curves are the results of our models and are labeled by the value of the ratio λ/λ_MW, where λ_MW is the spin parameter for the Milky Way disc. The thin curve corresponds to very low values of λ, producing very compact galaxies that look like bulges or ellipticals rather than spirals.]
Mass-driven colours and gas fractions
In Fig. 2 we compare our model results to the B - H colours of our galaxy sample. A clear correlation is obtained between B - H and V_C, with the more massive discs being, on average, redder than their lower mass counterparts. Our models (solid curves) naturally reproduce the observed correlation, since by construction the more massive discs form their stars earlier (Fig. 1). The corresponding timescales for star formation will be discussed in Sec. 4.1. The observed dispersion in the lower panel of Fig. 2 can be accounted for by the range of λ values in our models, but only partially. Discs with lower λ are more compact, have higher central surface densities and evolve earlier than those with larger λ. Obviously, λ values larger than 3 λ_MW (the largest value used here) would lead to even smaller B - H values than the models displayed in Fig. 2, possibly accounting for the rest of the scatter.
We notice that our models do take extinction by dust into account (with the prescriptions presented in Sec. 2.1), and that extinction contributes somewhat (by ∼0.5 mag) to the reddening of the most massive discs in our models, which have relatively large amounts of metals. However, we insist on the fact that it is age, not extinction, that is mainly responsible for the trends of our models in Fig. 2.
In Fig. 3 we present two important pieces of data, on which the main argument of this paper is based. In the upper panel, we show the ratio M_g/L_H (mass of gas to H-band luminosity) vs. the rotational velocity V_C. Since L_H reflects the mass of the stellar population, this ratio is a measure of the gas fraction of the system. Despite the large scatter, it is clear that a correlation does exist, the more massive galaxies being, in general, gas poor. Our model results describe the observations fairly well, accounting for both the slope and the dispersion of the M_g/L_H vs. V_C relation.
In the lower panel of Fig. 3 we plot the ratio Ψ/L_H (current SFR to H-band luminosity) vs. V_C. As discussed in several places (e.g. Kennicutt 1998a), this ratio is a measure of the parameter b = Ψ/⟨Ψ⟩, the ratio of the current SFR to the past average one. Massive galaxies display smaller Ψ/L_H, and thus lower b values, than their low mass counterparts. This means that they formed their stars at much higher rates in the past. Again, our results compare fairly well with the data, concerning both the slope and the scatter of the correlation. For a given V_C, large-λ discs have larger gas fractions today and were less active in the past than their lower-λ counterparts. As we shall see in Sec. 4.3, and discuss in Sec. 5, when the two panels of Fig. 3 are combined, they suggest quite convincingly that low mass spirals are on average younger than massive ones. At this point, we notice that Gavazzi and Scodeggio (1996) already advanced a similar hypothesis to explain the observed colours of galaxies as a function of their H-band luminosity (a measure of their dynamical mass according to them). Other studies came to similar conclusions and suggested that mass is the main parameter of galaxy evolution, on the basis of multiwavelength observations concerning a variety of disc properties (colours, gas content, star formation rate, radius, surface brightness).

[Figure 4. Time-scale of an exponential fit to the star formation rate history of our model galaxies; τ_EXP^-1 (Gyr^-1) is plotted as a function of disc rotational velocity V_C and parametrised with the spin parameter λ/λ_MW. For low values of V_C, τ_EXP is negative, i.e. the star formation rate is increasing with time. τ_EXP^-1 is larger for massive galaxies, which form their stars earlier.]
Formation time-scales
The Hubble sequence of galaxies is usually interpreted in terms of different star formation timescales (e.g. Kennicutt 1998a), although such an interpretation leaves unclear the role of the galaxy's mass (Prantzos 2000). In order to put our results in that context, we performed an exponential fit to the star formation histories of our model galaxies (excluding the first 2 Gyr, where such a fit turns out to be inadequate). The resulting timescales are shown in Fig. 4. Notice that in some models the SFR is continuously increasing with time, resulting in negative characteristic timescales. For that reason we present τ_EXP^-1, which has the advantage of varying continuously when going from a SFR increasing in time to one decreasing with time. For an exponential star formation rate, Ψ ∝ exp(-t/τ_EXP), τ_EXP^-1 is equal to -Ψ̇/Ψ (where Ψ̇ = dΨ/dt) and can be considered as the normalized rate of change of the SFR. Obviously, the larger τ_EXP^-1 is, the earlier the galaxy forms its stars.
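A minimal sketch of such a fit, assuming a linear least-squares fit of ln Ψ versus t after discarding the first 2 Gyr; the synthetic star formation history in the example is illustrative, not one of the paper's models.

```python
import numpy as np

def inverse_tau_exp(t_gyr, sfr, t_min=2.0):
    """Fit Psi(t) ~ exp(-t/tau_EXP) to a model star formation history,
    excluding the first t_min Gyr, and return 1/tau_EXP = -dln(Psi)/dt.
    A linear fit of ln(Psi) vs t gives slope = -1/tau_EXP."""
    mask = t_gyr >= t_min
    slope, _ = np.polyfit(t_gyr[mask], np.log(sfr[mask]), 1)
    return -slope  # Gyr^-1; negative values mean a rising SFR

# Check on a synthetic declining SFH with tau_EXP = 5 Gyr:
t = np.linspace(0.0, 13.5, 500)
print(inverse_tau_exp(t, 3.0 * np.exp(-t / 5.0)))  # ~0.2 Gyr^-1
```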
As can be seen in Fig. 4, massive galaxies in our models form their stars on shorter timescales than their lower mass counterparts. This general trend is somewhat modulated by the spin parameter λ: discs with smaller λ (i.e. more compact) have larger τ_EXP^-1 than discs with larger λ and similar rotational velocity. The most massive discs of our simulations (V_C = 360 km/s) have a decreasing SFR with τ_EXP ∼ 4-5 Gyr. Discs with V_C ∼ 200 km/s have long timescales, of the order of ∼10 Gyr, i.e. an essentially constant SFR. Finally, low mass discs (V_C ∼ 100 km/s) have a SFR increasing in time, with characteristic timescales τ_EXP^-1 ∼ -0.2 to -0.5 Gyr^-1.
The evolution of the star formation efficiency
What is the reason for the vastly different SF timescales obtained in our models as a function of V_C? Is the overall SF efficiency ǫ = Ψ/M_g directly affected by the mass of the galaxy? An inspection of Eq. 1 suggests that this cannot be the reason. Indeed, at the characteristic radius of the disc R_d, the local efficiency is:

ǫ(R_d) = Ψ(R_d)/Σ_g(R_d) = α Σ_g(R_d)^0.5 V_C / R_d ∝ λ^-2 V_C^0.5 ,    (4)

where the proportionality follows from the scaling relations of Eqs. 2 and 3. In other terms, the SF efficiency at R_d varies very little with V_C and depends much more on λ than on V_C (a numerical check of this scaling is sketched after the list of remarks below). Since the value of any intensive quantity (like the SF efficiency) at R_d is typical of the whole disc, it is expected that the global SF efficiency also depends little on V_C. In order to show this quantitatively, we plot in Fig. 5 the evolution of the global SF efficiency ǫ of our models as a function of time. We show the results for 3 values of λ (1/3, 1 and 3 times λ_MW) and three values of V_C (80, 220 and 360 km/s, respectively). We compare our results at T = 13.5 Gyr to estimates of ǫ = Ψ/M_g in our sample of nearby spirals (presented in Sec. 4.3).

[Figure 5. Evolution of the global SF efficiency ǫ of the models. Low λ values correspond to more compact galaxies, while values larger than 3 λ_MW lead to Low Surface Brightness Galaxies. For each λ, three curves are shown corresponding to V_C = 80, 220 and 360 km/s, respectively. The SF efficiency ǫ depends more on λ than on V_C. At T = 13.5 Gyr, the present-day observations (Sec. 4.3) are shown; an artificial small age spread is introduced. Our grid of models at T = 13.5 Gyr covers well the observed ǫ values.]

The following points should be noted concerning our models:

i) The efficiency ǫ depends very little on V_C, especially during the last half of the history of the galaxy.
ii) ǫ is mainly determined by λ: "compact" galaxies have a higher efficiency because of their smaller size (Ψ ∝ 1/R, for a given V_C) and larger gas surface density at the characteristic radius R_d. At T = 13.5 Gyr, the variation of our ǫ values due to λ can fully account for the dispersion in ǫ measured in nearby spirals.
iii) The SF efficiency in compact galaxies (low λ) presents a peak at early times and then decreases. This happens because star formation migrates to the outer regions (due to the inside-out disc formation scheme), where the local SF efficiency is lower (because of the lower surface densities and of the 1/R factor). In more extended galaxies (larger λ) ǫ does not present such a decrease, because local properties vary little with radius.

iv) For the lowest disc velocities V_C, ǫ may increase with time. This is due to the adopted form of infall: the gas surface density increases considerably when the gas finally arrives in the disc (which may take a very long time in the case of the largest λ values and lowest V_C values, see Fig. 4).
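As a quick numerical check of the scaling in Eq. 4, the sketch below inserts the scaling relations (Eqs. 2 and 3) into Eq. 1 with arbitrary normalizations; the absolute values are meaningless, only the ratios matter.

```python
def local_efficiency(v_c, lam, alpha=1.0):
    """Psi/Sigma_g evaluated at R_d, up to arbitrary MW normalizations."""
    sigma0 = lam**-2 * v_c          # Eq. 3 (relative form)
    r_d = lam * v_c                 # Eq. 2 (relative form)
    return alpha * sigma0**0.5 * v_c / r_d

for v_c, lam in [(80, 1.0), (360, 1.0), (220, 1 / 3), (220, 3.0)]:
    print(v_c, lam, local_efficiency(v_c, lam))
# Varying V_C by the full factor of 4.5 changes the efficiency by only
# ~2.1 (V_C^0.5), while varying lambda by a factor of 9 changes it by
# ~81 (lambda^-2), as stated in the text.
```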
The main point of this section is that in our models the SF efficiency ǫ does not depend directly on the mass of the galaxy. The range of ǫ spanned by our models during galactic evolution is due to λ and τ_inf, not to V_C; the more important of the two parameters is τ_inf. Of course, τ_inf is adjusted to V_C, so that the observations of Figs. 2 and 3 (and many others, presented in Paper II) are reproduced. This is a crucial ingredient for the success of our models, and we discuss its implications in Sec. 5.
Gas fraction and star formation efficiency
The analysis of Sec. 4.1 and Sec. 4.2 leads to an important conclusion: the success of our models is to be interpreted in terms of mass-dependent SF timescales; but this is not due to any explicit dependence of the SF efficiency on galaxy mass. Indeed, the SF efficiency of our models is virtually independent of mass during most of galactic history, and in particular at the present time. Is this supported by observations?
In Fig. 6 we display our data of Fig. 3, this time in a more "physical" presentation, appropriate for a quantitative discussion (see Sec. 5). The gas fractions σgas (upper panels) and SF efficiency ǫ (lower panels) are presented as a function of rotational velocity VC (left panels), H-magnitude (middle panels) and B − H colour (right panels).
The gas fraction σ_gas = M_g/M_T (where the total mass M_T = M_* + M_g is the mass of stars + gas) is obtained by converting the luminosity L_H to stellar mass M_* through the mass-to-light ratio M_*/L_H obtained in our models. This value is in the 0.3-0.6 range in solar units (M_*/L_H ∼ 0.35 for V_C = 80 km/s, ∼0.47 for V_C = 220 km/s and ∼0.56 for V_C = 360 km/s). Despite the uncertainties, a clear trend is present in the upper panel: the more massive, luminous and red a galaxy is, the smaller is its gas fraction. Notice that a similar trend is obtained by McGaugh and de Blok (1997) and Bell and de Jong (2000). Our model results are in excellent agreement with the data, although we have some difficulty in reproducing blue and gas-poor discs (with B - H < 2.5 and σ_gas < 0.4).
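A sketch of this gas-fraction bookkeeping, assuming linear interpolation of the quoted model M_*/L_H values between the three tabulated V_C; the interpolation scheme and the example numbers are ours.

```python
import numpy as np

# Model mass-to-light ratios quoted in the text (solar units), tabulated
# against rotational velocity.
V_GRID = np.array([80.0, 220.0, 360.0])      # km/s
ML_GRID = np.array([0.35, 0.47, 0.56])       # M*/L_H

def gas_fraction(m_gas, l_h, v_c):
    """sigma_gas = M_g / (M_g + M*), with M* = (M*/L_H) * L_H."""
    m_star = np.interp(v_c, V_GRID, ML_GRID) * l_h
    return m_gas / (m_gas + m_star)

# A disc with M_g = 3e9 Msun, L_H = 2e10 L_sun and V_C = 150 km/s:
print(gas_fraction(3.0e9, 2.0e10, 150.0))  # ~0.27
```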
The SF efficiency of the observed galaxies is obtained by simply dividing the SFR Ψ by the gas mass M_g. We notice that the uncertainties in deriving ǫ are rather large: those on the SFR Ψ amount to a factor of ∼3, whereas those on the gas mass are at least ±20%. As a result, the observationally derived scatter of ǫ, as it appears in Figs. 5 and 6, is certainly larger than the real one. It is clear, however, that the SF efficiency does not seem to be correlated with the mass or the colours of spirals. The scatter in the observed values of ǫ (a factor of ∼10) should be compared to the range of ∼60 spanned by galaxy mass (since the mass of the disc m_d ∝ V_C^3) or the range of five magnitudes in H-luminosity. Our model SF efficiencies ǫ at T = 13.5 Gyr, also displayed in Fig. 6 (lower panel, solid curves), show no dependence on mass or colour. They also reproduce the observed dispersion, the more compact discs (smaller λ) being the more efficient in turning their gas into stars.

[Figure 6. The results of our models (Sec. 4.2) are shown by curves, parametrised by the ratio λ/λ_MW, where λ_MW is the Milky Way spin parameter. The gas fraction decreases, on average, with the galaxy's mass, luminosity and colour index, while the global SF efficiency ǫ does not depend on those parameters. Our models account fairly well for these data. We note that the model ǫ is a function of λ and is greater for compact galaxies.]
The upper and lower panels of Fig. 6, when combined, point to an important conclusion: since the SF efficiency is independent of galactic mass (or luminosity), the fact that low mass galaxies have larger gas fractions today can only be due to their smaller ages. This is the most straightforward interpretation, independent of any theoretical considerations (except for the implicit assumption that the SF efficiency has remained ∼constant during the galaxy's history). This conclusion is corroborated by an independent observable, namely that low mass galaxies are, in general, bluer than more massive ones. Notice that the latter observable also concerns elliptical galaxies, but the well-known problem of the age-metallicity degeneracy does not allow one to conclude in that case. In the case of spirals, the situation is in principle even worse, because colours may be affected by the presence of dust (presumably more abundant in massive spirals). Because of this complication, the observed gas fraction (smaller in large spirals) is not sufficient in itself to lift the age-metallicity degeneracy. However, when combined with the fact that the SF efficiency is independent of galactic mass (as argued here), the degeneracy is lifted, and the aforementioned conclusion follows naturally.
Before turning to a more detailed discussion of our findings, we would like to point out that similar results are obtained in other recent works. For instance, McGaugh and de Blok (1997) find a clear trend between the gas fraction and the B-magnitude of their galaxy sample, which they describe by the relation σ_gas = 0.12 (M_B + 23). This relationship is shown in Fig. 7 (upper panel, dashed line), along with the McGaugh and de Blok (1997) data for normal spirals and our own data for normal spirals. It can be seen that there is very good agreement between the two data sets. We notice that our models fit the observations of McGaugh and de Blok (1997) fairly well (see Fig. 13 of Boissier and Prantzos 2000).
In the lower panel of Fig. 7 we show the SF efficiency, both for our data set and for that of Kennicutt (1998b). Notice that Kennicutt (1998b) gives the average SFR surface density ⟨Ψ⟩ and average gas surface density ⟨Σ_gas⟩ of normal spirals, i.e. the integrated quantities divided by the disc surface area (within the optical radius). Obviously, the ratio ⟨Ψ⟩/⟨Σ_gas⟩ gives the overall SF efficiency ǫ (since the disc area cancels out). As can be seen in Fig. 7, Kennicutt's values of the SF efficiency are slightly larger than ours (by a factor of ∼2), and have a smaller dispersion on the adopted logarithmic scale. Taking into account the various uncertainties in estimating the SFR from the data (see Sec. 3), such a discrepancy between Kennicutt's results and ours is not unexpected. But the important point is that Kennicutt's values are also independent of the galaxy B-luminosity and, by virtue of the Tully-Fisher relation, of the galaxy's mass.
The anti-correlation between gas fraction and luminosity in spirals has been noticed by several authors (e.g. McGaugh and de Blok 1997; Bell and de Jong 2000; Boselli et al. 2000). Based on two independent samples, we showed here that the SF efficiency of spirals is independent of their mass. The two findings combined point to small discs being younger, on average, than massive ones. Our model, presented in Sec. 2, nicely explains these features (and several others, presented in Papers II and III). However, one may argue that the complex interdependence between the adopted infall and SFR prescriptions makes a straightforward interpretation difficult; one may also argue that other types of models could account for the observations and give a different interpretation of the same data (e.g. by invoking outflows). For that reason, we discuss this issue in the next section on the basis of simple analytical models of galactic chemical evolution, and we make a very rough evaluation of the "ages" of the galaxies in our sample. Our purpose is not to derive the exact ages of the galaxies, but rather to obtain an order-of-magnitude estimate and, in particular, to check whether there is any trend of the derived ages with galactic mass.
GALACTIC AGES
In the framework of simple models of galactic chemical evolution adopting the Instantaneous Recycling Approximation (IRA), one may obtain analytical solutions for various quantities. In particular, provided that the Star Formation Rate Ψ is proportional to the gas mass M_g, i.e.
Ψ = ǫ M_g ,    (5)

one may obtain a relationship between the gas fraction σ_gas and time T (assuming that the SF efficiency ǫ is constant in time). The form of this relationship depends on further assumptions about the evolution of the system, i.e. on the possibility of allowing for gas flows into or out of the "box" (e.g. Pagel 1997).
Assuming that the galaxies of our sample have evolved as simple, homogeneous "boxes", we consider three possibilities: a "closed box" (all the gas is present from the very beginning), an "infall" model (where gas mass is continuously added to the system) and an "outflow" model (with gas continuously leaving the system). In the cases of gaseous flows, further assumptions about the corresponding flow rates are required in order to obtain analytical solutions. More specifically:

a) Closed box: in that case we have

T = - ln(σ_gas) / [(1 - R) ǫ] ,    (6)

where the return fraction R accounts for the gas returned by stars to the interstellar medium; for the IMF of Kroupa et al. (1993) adopted here, R ∼ 0.32.

b) Infall: an analytical solution is easily obtained if it is assumed that the infall rate just balances the gas depletion due to star formation. As we have shown in Paper II, with detailed numerical models reproducing a large body of observational data, this situation describes rather well the largest period in the lifetime of most spiral galaxies. In that case we have:

T = (1/σ_gas - 1) / [(1 - R) ǫ] .    (7)

c) Outflow: analytical solutions may be obtained by assuming that the outflow rate is proportional to the SFR: f_out = γΨ. In that case we have:

T = 1 / [(1 - R + γ) ǫ] ln{ [(1 - R + γ) - γ σ_gas] / [(1 - R) σ_gas] } .    (8)

We shall assume here that the outflow rate is equal to the SFR, i.e. γ = 1 (since, for higher outflow rates, we obtain ridiculously low galactic ages).
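Eqs. 6-8 are straightforward to implement. The sketch below encodes them as written above (with R = 0.32) and reproduces the ordering of ages discussed next: for the same σ_gas and ǫ, infall ages exceed closed-box ages, which exceed outflow ages.

```python
import numpy as np

R_RET = 0.32  # return fraction for the Kroupa et al. (1993) IMF

def age_closed_box(sigma, eps):
    """Eq. 6: T = -ln(sigma) / ((1-R) eps), eps in Gyr^-1."""
    return -np.log(sigma) / ((1.0 - R_RET) * eps)

def age_infall(sigma, eps):
    """Eq. 7: infall exactly balancing gas consumption."""
    return (1.0 / sigma - 1.0) / ((1.0 - R_RET) * eps)

def age_outflow(sigma, eps, gamma=1.0):
    """Eq. 8: outflow rate f_out = gamma * Psi."""
    k = 1.0 - R_RET + gamma
    return np.log((k - gamma * sigma) / ((1.0 - R_RET) * sigma)) / (k * eps)

# Example: sigma_gas = 0.25 and the mean observed efficiency eps = 0.3 Gyr^-1
for f in (age_closed_box, age_infall, age_outflow):
    print(f.__name__, f(0.25, 0.3))  # ~6.8, ~14.7, ~4.2 Gyr respectively
```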
For each of the three scenarios, we consider five combinations of the observed gas fraction and SF efficiency, in order to derive the galactic age T through Eqs. 6, 7 and 8. For the gas fraction vs. rotational velocity, we adopt the three curves of the upper panel of Fig. 8, corresponding roughly to the mean trend and the upper and lower bounds of the observations, respectively. For each of those three curves, we derive the corresponding galactic ages by using a SF efficiency ǫ = 0.3 Gyr^-1 (the mean value of the observations in Figs. 6 and 7); in that way, we obtain the shaded regions in Fig. 8, with the thick curve corresponding to the mean trend, the largest ages to the lowest gas fractions, and vice versa. We perform two more calculations. For the first, we adopt the high gas fractions (uppermost curve in the upper panel of Fig. 8), combined with a low SF efficiency ǫ = 0.1 Gyr^-1 (a reasonable lower bound for the observations of Figs. 6 and 7); in that way we obtain the dotted curves in Fig. 8. Finally, we adopt the low gas fractions (lower curve in the upper panel of Fig. 8), combined with a high SF efficiency ǫ = 1 Gyr^-1 (a reasonable upper bound for the observations of Figs. 6 and 7); in that way we obtain the dashed curves in Fig. 8.

[Figure 8. Upper panel: gas fraction vs. rotational velocity, as in Fig. 6. Given the star formation efficiency ǫ, the shaded area inside the upper and lower curves may be used to derive approximate ages for the galaxies, in the framework of simple analytical models of galactic chemical evolution using IRA (Instantaneous Recycling Approximation). This is done in the three next panels, for a closed box, a model with infall (such that the gas mass remains constant in time) and a model with outflow (with an outflow rate equal to the star formation rate). In each of those panels, the thick curve corresponds to the mean gas fraction of the upper panel and a SF efficiency ǫ = 0.3 Gyr^-1 (the mean value for the efficiencies in Fig. 6). The shaded area is obtained by keeping the same SF efficiency, but considering the upper and lower bounds of the observed gas fraction in the upper panel (lower gas fractions lead to larger ages for a given rotational velocity). The dotted curves are obtained with the low gas fractions combined with a high SF efficiency (ǫ = 1 Gyr^-1, a reasonable upper bound for the observed ǫ values in Fig. 6). Finally, the dashed curves are obtained with the high gas fractions combined with a low SF efficiency (ǫ = 0.1 Gyr^-1, a reasonable lower bound for the observed ǫ values in Fig. 6).]
An inspection of the results, plotted in the three lower panels of Fig. 8, shows that the derived "effective" galactic age is a monotonic function of the rotational velocity V_C in all cases. The absolute ages depend, of course, on the adopted gas fractions, SF efficiencies and assumptions about gaseous flows. For a given gas fraction and SF efficiency, the infall model leads to the largest ages: since the gas is constantly replenished, it takes more time to attain a given gas fraction than in a closed box (which starts with σ_gas = 1). Conversely, the outflow model produces the lowest ages: since part of the gas is constantly removed, it takes less time to reach a given gas fraction than in a closed box. The ages derived in the framework of the outflow model are, in general, too low: they are lower than 6 Gyr for all galaxies with V_C < 200 km/s. It follows that galaxies in this velocity range should not have suffered extensive mass losses, i.e. they must not have lost during their lifetime an amount of gas as large as their stellar content (since we adopted f_out = Ψ here).
The mean age values in the case of the infall model are 4-5 Gyr for the small discs (in the V_C ∼ 100 km/s range) and ∼10 Gyr for V_C ∼ 220 km/s. For discs with V_C > 300 km/s, extremely large ages (>15 Gyr) are found; however, the approximation of evolution at constant gas mass is certainly not valid in that case (see Fig. 4 in Boissier and Prantzos 2000). We notice that Bell and de Jong (2000), on the basis of a different sample and with a completely different method (based on a photometric estimate of the ages), find an "effective age" of ∼10 Gyr for the more luminous galaxies (M_K ∼ -26) and ∼6 Gyr for the less luminous ones (M_K ∼ -20); the galaxies of our sample also span this luminosity range, and it is interesting to see that similar ages are found with completely independent methods.
The results obtained in this section confirm the suggestion of Sec. 4.3: low mass discs are, on average, younger than massive ones. The only way to reverse this trend would be to assume that the SF efficiency is strongly correlated with disc mass. Despite the large scatter in the observational data, it is clear that such a correlation does not exist, at least at the present time; and, since the observed galaxies span a large range in masses and metallicities, it does not seem plausible that such a correlation ever existed in the past. In the case of outflow models, another possibility would be to assume that the more massive discs suffered more important mass losses (leading to low gas fractions without having to invoke large ages). However, such a hypothesis is incompatible with the fact that massive discs have deeper potential wells and are less prone to outflows than low mass ones.
SUMMARY
In this work we investigate the properties of the star formation efficiency of spiral galaxies and study the implications for their evolution. We use a large homogeneous sample of disc galaxies, for which we measure gaseous mass Mg (HI), rotational velocity VC, star formation rates Ψ and luminosities in the B and H bands. We are then able to derive the corresponding gas fractions σgas and SF efficiencies ǫ = Ψ/Mg as a function of VC , H−luminosity and B − H colour index. We find that the gas fraction is correlated to VC, H−luminosity and B − H, in the sense that more massive, luminous and redder discs have smaller gas fractions. Previous work by McGaugh and de Blok (1997) reached similar conclusions. The main finding of this work is that the SF efficiency does not correlate with any of the galaxy properties; despite a rather large dispersion (within a factor of ∼10), the observed ǫ is independent of VC, LH or B − H.
We interpret our data in the framework of detailed models of galactic chemical and spectrophotometric evolution, utilising metallicity dependent stellar lifetimes, yields, tracks and spectra. These models are calibrated on the Milky Way disc (paper I) and use radially dependent star formation rates, which reproduce observed gradients (paper III). They are extended to other spirals in the framework of Cold Dark Matter scenarios for galaxy formation, and are described by two parameters: rotational velocity VC and spin parameter λ. As in our previous work (paper II), we find good agreement with the observations, provided a crucial assumption is made: massive discs are formed earlier than less massive ones. With this assumption our models reproduce the observed trends of gas fractions and SF efficiencies vs. VC, while variations due to λ account for the observed dispersion in both cases.
It is important to notice that the dependence of age on galactic mass that we find is not due to any mass-dependent SF efficiency, but only to the disc formation timescales; in our models this is achieved by varying the infall timescales. Both observations and models suggest that the SF efficiency is independent of galaxy properties. Since the observed galaxies cover a wide range of masses, colours and metallicities, there is no reason to suppose that the SF efficiency was different in the past. The adopted SFR prescription in our models also results in a very slowly varying SF efficiency with time.
When the observed relations of gas fraction vs VC and SF efficiency vs VC are considered in combination, they convincingly suggest that low mass discs are, on average, younger than more massive ones; this conclusion is independent of any model and the only assumption is that the SF efficiency is constant in time (a quite plausible assumption, as argued above). In the framework of simple analytical models of galactic chemical evolution, we evaluate the "effective ages" for the galaxies of our sample, using the observationally derived values of σgas and ǫ. We find that even models with modest outflows (with ejected masses equal to the stellar ones) lead to ridiculously low values for the galaxy ages; our conclusion is that galaxies in the range VC ∼80-400 km/s have not suffered extensive mass losses. Closed box models and infall models lead to more plausible values for the effective ages. In particular, infall models lead to ages of ∼4-5 Gyr for discs of VC ∼100 km/s and ∼8-10 Gyr for VC ∼200 km/s. These "chemically derived" ages are in fair agreement with those derived on the basis of our more sophisticated numerical models, which fit a much larger body of observational data for low redshift spirals. Most importantly, they are also in fair agreement with the "photometric" ages derived in a completely independent way and with a different sample by Bell and de Jong (2000).
In summary, our data, taken at face value, suggest that the bulk of the stars in more massive discs are older than in less massive ones. This is supported by our detailed numerical models of galactic chemical and photometric evolution, but also by recent, independent, analysis (Boselli et al. 2000). We notice that Bell and de Jong (2000) conclude that it is local surface density that mainly drives the star formation history, while mass plays a less important role. Our study suggests that mass is the main factor, while local surface density plays only a minor role (through the spin parameter λ: lower λ values lead to higher local surface densities for a given rotational velocity V_C).
We notice that this picture is hardly compatible with the currently popular "paradigm" of hierarchical galaxy formation, which holds that large discs are formed by merging of small units at relatively late epochs. If this were the case, massive discs should have large SF efficiencies, in order to have their gas fractions reduced to lower levels than their less massive counterparts. However, such an enhanced SF efficiency is not supported by observations. | 2014-10-01T00:00:00.000Z | 2000-09-14T00:00:00.000 | {
"year": 2000,
"sha1": "348d005c7ebb60a9f0796270b49cf5544e8624e7",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/321/4/733/3183560/321-4-733.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "dc9f672c1ab648722107a35c5cbb61c500d81f3c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
118355723 | pes2o/s2orc | v3-fos-license | Production of pions, kaons and protons in pp collisions at $\sqrt{s}=900$ GeV with ALICE at the LHC
The production of $\pi^+$, $\pi^-$, $K^+$, $K^-$, p, and $\bar{\mathrm{p}}$ at mid-rapidity has been measured in proton-proton collisions at $\sqrt{s} = 900$ GeV with the ALICE detector. Particle identification is performed using the specific energy loss in the inner tracking silicon detector and the time projection chamber. In addition, time-of-flight information is used to identify hadrons at higher momenta. Finally, the distinctive kink topology of the weak decay of charged kaons is used for an alternative measurement of the kaon transverse momentum ($p_{\rm T}$) spectra. Since these various particle identification tools give the best separation capabilities over different momentum ranges, the results are combined to extract spectra from $p_{\rm T}$ = 100 MeV/$c$ to 2.5 GeV/$c$. The measured spectra are further compared with QCD-inspired models which yield a poor description. The total yields and the mean $p_{\rm T}$ are compared with previous measurements, and the trends as a function of collision energy are discussed.
Abstract. The production of π+, π-, K+, K-, p, and p̄ at mid-rapidity has been measured in proton-proton collisions at √s = 900 GeV with the ALICE detector. Particle identification is performed using the specific energy loss in the inner tracking silicon detector and the time projection chamber. In addition, time-of-flight information is used to identify hadrons at higher momenta. Finally, the distinctive kink topology of the weak decay of charged kaons is used for an alternative measurement of the kaon transverse momentum (p_t) spectra. Since these various particle identification tools give the best separation capabilities over different momentum ranges, the results are combined to extract spectra from p_t = 100 MeV/c to 2.5 GeV/c. The measured spectra are further compared with QCD-inspired models which yield a poor description. The total yields and the mean p_t are compared with previous measurements, and the trends as a function of collision energy are discussed.
1 Introduction
In pp collisions at ultra-relativistic energies the bulk of the particles produced at mid-rapidity have transverse momenta, p_t, below 1 GeV/c. Their production is not calculable from first principles via perturbative Quantum Chromodynamics, and is not well modelled at lower collision energies. This low-p_t particle production, and species composition, must therefore be measured, providing crucial input for the modelling of hadronic interactions and the hadronization process. It is important to study the bulk production of particles as a function of both p_t and particle species. With the advent of pp collisions at the Large Hadron Collider (LHC) at CERN a new energy regime is being explored, where particle production from hard interactions, which are predominantly gluonic in nature, is expected to play an increasing role. Such data will provide extra constraints on the modelling of fragmentation functions. The data will also serve as a reference for the heavy-ion measurements.

The ALICE detector [1,2] is designed to perform measurements in the high-multiplicity environment expected in central lead-lead collisions at √s_NN = 5.5 TeV at the LHC and to identify particles over a wide range of momenta. As such, it is ideally suited to perform these measurements also in pp collisions.

This paper presents the transverse momentum spectra and yields of identified particles at mid-rapidity from the first pp collisions collected in the autumn of 2009, during the commissioning of the LHC, at √s = 900 GeV. The evolution of particle production in pp collisions with collision energy is studied by comparing to data from previous experiments.

We report π+, π-, K+, K-, p, and p̄ distributions, identified via several independent techniques utilizing specific energy loss, dE/dx, information from the Inner Tracking System (ITS) and the Time Projection Chamber (TPC), and velocity measurements in the Time-Of-Flight array (TOF). The combination of these methods provides particle identification over the transverse momentum range 0.1 GeV/c < p_t < 2.5 GeV/c. Charged kaons, identified via the kink topology of their weak decays in the TPC, provide a complementary measurement over a similar p_t range. All reported particle yields are for primary particles, namely those directly produced in the collision, including the products of strong and electromagnetic decays but excluding weak decays of strange particles.

The paper is organized as follows: in Section 2, the ALICE detectors relevant for these studies, the experimental conditions, and the corresponding analysis techniques are described. Details of the event and particle selection are presented. In Section 3, the π+, π-, K+, K-, p, and p̄ inclusive spectra and yields, obtained by combining the various techniques described in Section 2, are presented. The results are compared with calculations from QCD-inspired models and the p_t-dependence of ratios of particle yields, e.g. K/π and p/π, are discussed. Comparisons [...]

[...] approaches unity for events with more than two tracks.
The results presented in this paper are normalized to inelastic pp collisions, employing the strategy described in [12,13]. In order to reduce the extrapolation and thus the [...]

[...] In order to compare to previous experimental results, which are only published for the non-single-diffractive (NSD) class, in Section 3 we scale our spectra by the measured ratio dN_ch/dη|_NSD / dN_ch/dη|_INEL ≃ 1.185 [12]. [...]

[...] To ensure high tracking efficiency and dE/dx resolution, while keeping the contamination from secondaries and fakes low, tracks are required to have at least 80 clusters, and a χ² of the momentum fit that is smaller than 4 per cluster. Since each cluster in the TPC provides two degrees of freedom and the number of parameters of the track fit is much smaller than the number of clusters, the χ² cut is approximately 2 per degree of freedom. In addition, at least two clusters in the ITS must be associated to the track, out of which at least one is from the SPD. Tracks are further rejected based on their distance-of-closest-approach (DCA) to the reconstructed event vertex. The cut is implemented as a function of p_t to correspond to about seven (five) standard deviations in the transverse (longitudinal) coordinate, taking into account the p_t-dependence of the impact parameter resolution. These selection criteria are tuned to select primary charged particles with high efficiency while minimizing the contributions from weak decays, conversions and secondary hadronic interactions in the detector material. The DCA resolution in the data is found to be in good agreement with the Monte-Carlo simulations that are used for efficiency corrections (see next Section).
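For illustration, the track selection described above can be expressed as a simple predicate. The Track container and the p_t-dependent DCA-resolution parametrization are hypothetical assumptions of this sketch; the cut values (at least 80 clusters, χ²/cluster < 4, the ITS/SPD requirements, and the approximately 7σ (5σ) DCA cuts) are taken from the text.

```python
from dataclasses import dataclass

@dataclass
class Track:
    n_tpc_clusters: int
    chi2: float            # total chi2 of the momentum fit
    n_its_clusters: int
    has_spd_cluster: bool
    dca_xy: float          # cm, transverse distance to the event vertex
    dca_z: float           # cm, longitudinal distance to the event vertex
    pt: float              # GeV/c

def dca_sigma_xy(pt):
    # Hypothetical pt-dependent impact-parameter resolution [cm]
    return 0.0050 + 0.0060 / pt**0.9

def passes_selection(t: Track) -> bool:
    return (t.n_tpc_clusters >= 80
            and t.chi2 / t.n_tpc_clusters < 4.0
            and t.n_its_clusters >= 2
            and t.has_spd_cluster
            and abs(t.dca_xy) < 7.0 * dca_sigma_xy(t.pt)
            and abs(t.dca_z) < 5.0 * 0.02)  # illustrative constant sigma_z

print(passes_selection(Track(95, 180.0, 3, True, 0.01, 0.03, 0.8)))  # True
```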
Tracks reconstructed in the TPC are extrapolated to the sensitive layer of the TOF and a corresponding signal is searched for. The channel with the centre closest to the track extrapolation point is selected if the distance is less than 10 cm. This rather weak criterion results in a high matching efficiency while keeping the fraction of wrongly associated tracks below 1% in the low-density environment presented by pp collisions.
The dE/dx measurements in the ITS are used to identify hadrons in two independent analyses, based on different tracking algorithms. One analysis uses the ITS-TPC combined tracking, while the other is based on ITS stand-alone tracks. The combined ITS-TPC tracking result serves as a cross-check of both the ITS stand-alone and the TPC results in the overlap region. The ITS stand-alone analysis extends the acceptance to lower p_t than the TPC or ITS-TPC analyses.
The combined ITS-TPC analysis uses the same track selection criteria as the TPC-only analysis, with the additional requirement of at least four clusters in the ITS, out of which at least one must be in the SPD and at least three in SSD+SDD. This further reduces the contamination from secondaries and provides high resolution on the track impact parameter and optimal resolution on the dE/dx. The ITS stand-alone tracking uses a similar selection, with a different χ² selection and a different DCA selection. In the current tracking algorithm, ITS clusters are assigned a larger position error to account for residual misalignment of the detector. As a result, the χ² values are not properly normalized, but the selection was adjusted to be equivalent to the TPC χ² selection by inspecting the distributions. The DCA cut in the ITS analysis uses the same p_t-dependent parametrization as for TPC tracks, but with different parameters to account for the different resolution.
In both the ITS stand-alone and the ITS-TPC analyses, the dE/dx measurement from the SDD and the SSD is used to identify particles. The stand-alone tracking result extends the momentum range to lower p_t than can be measured in the TPC, while the combined tracking provides a better momentum resolution.
The energy loss measurement in each layer of the ITS is corrected for the track length in the sensitive volume using tracking information. In the case of SDD clusters, a linear correction for the dependence of the reconstructed raw charge as a function of drift time, due to the combined effect of charge diffusion and zero suppression, is also applied [5]. For each track, dE/dx is calculated using a truncated mean: the average of the lowest two points in case four points are measured, or a weighted sum of the lowest (weight 1) and the second lowest point (weight 1/2), in case only three points are measured.

For the ITS stand-alone track sample, the histograms are fitted with three Gaussians and the integral of the Gaussian centered at zero is used as the raw yield of the corresponding hadron species. In a first step, the widths σ of the peaks are extracted as a function of p_t for pions and protons in the region where their dE/dx distributions do not overlap with the kaon (and electron) distribution. For kaons, the same procedure is used at low p_t, where they are well separated. The p_t-dependence of the peak width is then extrapolated to higher p_t with the same functional form used to describe the pions and protons. The resulting parametrizations of the p_t-dependence of σ are used to constrain the fits of the ln[dE/dx] distributions to extract the raw yields.

… components: prompt particles, secondaries from strange-particle decays and secondaries produced in the detector material, for each hadron species. Alternatively, the contamination from secondaries has been determined using Monte-Carlo samples, after rescaling the Λ yield to the measured values [24]. The difference between these two procedures is about 3% for protons and is negligible for other particles.

Figure 3 shows the total reconstruction efficiency for primary tracks in the ITS stand-alone, including the effects of detector and tracking efficiency, the track selection cuts and residual contamination in the fitting procedure, as determined from the Monte-Carlo simulation. This efficiency is used to correct the measured raw yields after subtraction of the contributions from secondary hadrons. The measured spectra are corrected for the efficiency of the primary vertex reconstruction with the SPD, using the ratio between generated primary spectra in simulated events with a reconstructed vertex and events passing the trigger conditions.

Systematic errors are summarized in Table 1. The systematic uncertainty from secondary contamination has been estimated by repeating the full analysis chain with different cuts on the track impact parameter and by comparing the two alternative estimates outlined above. In the lowest p_t-bins, a larger systematic error has been assigned to account for the steep slope of the tracking efficiency as a function of the particle transverse momentum (see Fig. 3).

As in the case of the ITS, a truncated-mean procedure is used to determine dE/dx in the TPC (60% of the points are kept). This reduces the Landau tail of the dE/dx distribution to the extent that it is very close to a Gaussian distribution. Examples of the dE/dx distribution in some p_t bins are shown in Fig. 5. The peak centered at zero is from kaons and the other peaks are from other particle species.
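A minimal sketch of the two truncated-mean prescriptions just described (our own functions, not the ALICE implementation; the normalization of the three-point ITS weighted sum by 1.5 is our assumption, since the text only says "weighted sum"):

```python
def its_truncated_mean(dedx_points):
    """ITS dE/dx: average of the two lowest of four points, or a weighted
    sum (weights 1 and 1/2) of the two lowest of three points."""
    pts = sorted(dedx_points)
    if len(pts) == 4:
        return (pts[0] + pts[1]) / 2.0
    if len(pts) == 3:
        return (1.0 * pts[0] + 0.5 * pts[1]) / 1.5  # normalization assumed
    raise ValueError("expected 3 or 4 dE/dx samples")

def tpc_truncated_mean(dedx_points, keep_fraction=0.6):
    """TPC dE/dx: keep the lowest 60% of the samples to suppress the
    Landau tail, then take the plain mean."""
    pts = sorted(dedx_points)
    kept = pts[:max(1, int(len(pts) * keep_fraction))]
    return sum(kept) / len(kept)

print(its_truncated_mean([50.0, 55.0, 60.0]))          # (50 + 0.5*55)/1.5
print(tpc_truncated_mean([48, 50, 52, 55, 60, 120]))   # tail sample (120) dropped
```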
As the background in all momentum bins is negligible, the integrals of the Gaussians give the raw yields. The correction for secondaries from weak decays amounts to up to 14%, and the correction for secondaries from material up to 4%, for protons with 400 MeV/c < p_t < 600 MeV/c. For other particle species and other transverse momenta the contamination is negligible.

The systematic errors in the track reconstruction and in the removal of secondary particles have been estimated by varying the number of standard deviations in the distance-to-vertex cut, using a fixed cut of 3 cm instead of the variable one, and varying the SPD-TPC matching cut. Their impact on the corrected spectra is less than 5%. The influence of the uncertainty in the material budget has been examined by varying it by 7%. This resulted in the systematic errors given in Table 2. The uncertainty due to a possible deviation from a Gaussian shape has been established by comparing the multi-Gauss fit with a 3-σ band in well separated regions. The precision of the kink rejection is estimated to be within 3%.
The correction for the event selection bias has been tested with two event generators, PYTHIA [15,16] and PHOJET [18], and the corresponding uncertainty is less than 1%.

Finally, tracks whose particle identity as determined from the TOF information is not compatible with the one inferred from the dE/dx signal in the TPC within five σ have been removed. This TOF-TPC compatibility criterion rejects about 0.6% of the tracks and further reduces the small contamination coming from tracks incorrectly associated with a TOF signal.
The symmetric treatment of kaons and pions in the defi-
The TOF matching efficiency has been tested with data, using dE/dx in the TPC to identify the particles.
Good agreement between the efficiencies obtained from the data and from Monte-Carlo simulations is observed in case of pions and kaons, with deviations at the level of, at most, 3% and 6% respectively, over the full transverse-momentum range. The observed differences are assigned as systematic errors, see Table 3. In the case of protons and antiprotons, larger differences are observed at p_t below 0.7 GeV/c, where the efficiency varies very rapidly with momentum. This region is therefore not considered in the final results (see Table 3).

Table 3:
systematic errors        π±      K±      p and p̄
matching efficiency      < 3%    < 6%    < 4% (p_t > 1 GeV/c), < 7.5% (p_t = 0.7 GeV/c)
PID procedure            < 2%    < 7%    < 3%
Other sources of systematic errors related to the TOF PID procedure have been estimated from Monte-Carlo simulations and cross-checked with data. They include the effect of the residual contribution from tracks wrongly associated with TOF signals, and the quality and stability of the fit procedure used for extracting the yields. Table 3 summarizes the maximal value of the systematic errors observed over the full transverse momentum range relevant in the analysis, for each of the sources mentioned above.

… (2) K± → π± + π⁰ (B.R. 20.66%). The q_t limit for the two-body decay (2), K → ππ⁰, is 205 MeV/c.
All three limits can be seen as peaks in Fig. 10(a), which shows the q_t distribution of all measured kinks inside the selected volume and rapidity range |y| < 0.7. Selecting kinks with q_t > 40 MeV/c removes the majority of π-decays, as shown by the dashed (before) and solid (after) histograms.
The invariant mass for the decay into µ ± + ν µ is cal-656 culated from the measured difference between the mother 657 and daughter momentum, their decay angle, assuming zero 658 mass for the neutrino. Figure 10 Fig. 10. (Color online) (a) qt distribution of the daughter tracks with respect to mother momentum for all reconstructed kinks inside the analyzed sample. The dashed(solid) histograms show the distribution before (after) applying the qt > 40 MeV/c cut. (b) Invariant mass of the two-body decays K ± /π ± → µ ± + νµ for candidate kaon kinks. Solid curve: after applying qt >40 MeV/c; dashed curve: without this selection (hence also showing the pion decays). (c) dE/dx of kinks as a function of the mother momentum, after applying the full list of selection criteria for their identification. of kaons. The few tracks outside these limits are at mo-672 menta below 600 MeV/c (less than 5%) and they have 673 been removed in the last analysis step.
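The kinematic limits behind the q_t peaks follow from the standard two-body decay formula; a short worked computation (our own script, PDG masses) reproduces the 205 MeV/c value quoted above and shows why the q_t > 40 MeV/c cut removes π → μν decays:

```python
# p* = daughter momentum in the mother rest frame; q_t <= p* for each channel.
from math import sqrt

def p_star(M, m1, m2):
    """Two-body decay: momentum of either daughter in the mother rest frame."""
    return sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)

m_K, m_pi, m_pi0, m_mu = 0.493677, 0.139570, 0.134977, 0.105658  # GeV/c^2

print(f"K  -> mu nu : {1000 * p_star(m_K, m_mu, 0.0):.1f} MeV/c")   # ~235.5
print(f"pi -> mu nu : {1000 * p_star(m_pi, m_mu, 0.0):.1f} MeV/c")  # ~29.8
print(f"K  -> pi pi0: {1000 * p_star(m_K, m_pi, m_pi0):.1f} MeV/c") # ~205.1
```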
Efficiency and acceptance. The total correction factor includes both the acceptance of kinks and their efficiency (reconstruction and identification). The study has been performed for the rapidity interval |y| < 0.7, larger than the corresponding rapidity interval for the other studies, in order to reduce the statistical errors.
The acceptance is defined as the ratio of weak decays (two- and three-body decays) whose daughters are inside the fiducial volume of the TPC to all kaons inside the same rapidity window (Fig. 11, upper part). It essentially reflects the decay probability. However, the acceptance is not … The efficiency is the ratio of reconstructed and identi…
The contamination due to random associations of primary and secondary charged tracks has been established using Monte-Carlo simulations and is systematically smaller than 5% in the studied p_t-range, as also shown in Fig. 11. Hadronic interactions are the main source of these fake kinks (65%).
The systematic error due to the uncertainty in the material budget is about 1%, as for the TPC analysis. The quality cuts remove about 8% of all real kaon kinks, which leads to a systematic error of less than 1%. The main uncertainty originates from the efficiency of the kink finding algorithm, which has an uncertainty of 5%.

3 Results

Figure 12 shows a comparison between the results from the different analyses. The spectra are normalized to inelastic collisions, as explained in Sec. 2.2. The kaon spectra obtained with various techniques, including K⁰_s spectra [24], are compared in Fig. 13. The very good agreement demonstrates that all the relevant efficiencies are well reproduced by the detector simulation.
The spectra from ITS stand-alone, TPC and TOF are combined in order to cover the full momentum range. The analyses from the different detectors use a slightly different sample of tracks and have largely independent systematics (mainly coming from the PID method and the contamination from secondaries). The spectra have been averaged, using the systematic errors as weights. From this weighted average, the combined, p_t-dependent, systematic error is derived. The combined spectra have an additional overall normalization error, coming primarily from the uncertainty on the material budget (3%, Sec. 2.5) and from the normalization procedure (2%, Sec. 2.2).
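A sketch of such a weighted combination, as we read the text (inverse-square systematic errors as weights; the real procedure may differ in details):

```python
import numpy as np

def combine(yields, syst_errors):
    """yields, syst_errors: shape (n_detectors, n_pt_bins);
    NaN marks bins a detector does not cover."""
    y = np.asarray(yields, dtype=float)
    e = np.asarray(syst_errors, dtype=float)
    w = np.where(np.isnan(y), 0.0, 1.0 / e**2)   # weight = 1 / syst^2
    wsum = w.sum(axis=0)
    combined = (w * np.nan_to_num(y)).sum(axis=0) / wsum
    combined_err = 1.0 / np.sqrt(wsum)           # combined per-bin error
    return combined, combined_err

# Toy example: three "detectors", three p_t bins, partial coverage.
y  = [[10.2, 8.1, np.nan], [10.0, 8.3, 5.9], [np.nan, 8.2, 6.1]]
ey = [[0.5, 0.4, np.nan], [0.6, 0.5, 0.4], [np.nan, 0.3, 0.5]]
print(combine(y, ey))
```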
The combined spectra shown in Fig. 14 are fitted with the Lévy (or Tsallis) function (see e.g. [26,27])

(2)

with the fit parameters C, n and the yield dN/dy. This function gives a good description of the spectra and has been used to extract the total yields and the ⟨p_t⟩, summarized in Table 4. The χ²/degree-of-freedom is calculated using the total error. Due to residual correlations in the point-by-point systematic error, the values are less than 1. Also listed are the lowest measured p_t-bin and the fraction of the yield contained in the extrapolation of the spectra to zero momentum. The extrapolation to infinite momentum gives a negligible contribution. The systematic errors take into account the contributions from the individual detectors, propagated to the combined spectra, the overall normalization error and the uncertainty in the extrapolation. The latter is evaluated using different fit functions (modified Hagedorn [28] and the UA1 parametrization [29]) or using a Monte-Carlo generator, matched to the data for p_t < 1 GeV/c (PYTHIA [15], with tunes D6T [16], CSC and Perugia0 [30], or PHOJET [18]). While none of these …

The ratios of π+/π− and K+/K− as a function of p_t are close to unity within the errors, allowing the combination of both spectra in the Lévy fits. The p̄/p ratio as a function of p_t has been studied with high precision in our previous publication [22]. It is p_t-independent with a mean value of 0.957 ± 0.006 (stat) ± 0.014 (syst). Also here we used the sum of both charges. Table 5 summarizes the fit parameters along with the yields and mean p_t. The errors have been determined as for the individual fits.
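The body of Eq. (2) is lost in this copy; the sketch below assumes the standard Lévy-Tsallis parametrization commonly used for such fits, with the parameters C, n and dN/dy named in the text (an assumption, not a reconstruction of the published formula):

```python
import numpy as np
from scipy.optimize import curve_fit

M0 = 0.493677  # kaon mass in GeV/c^2 (assumed species for this example)

def levy_tsallis(pt, dndy, C, n):
    """Assumed form: d2N/(dpt dy) = pt * dN/dy * (n-1)(n-2) /
    (nC (nC + M0 (n-2))) * (1 + (mt - M0)/(nC))^(-n)."""
    mt = np.sqrt(M0**2 + pt**2)
    norm = (n - 1) * (n - 2) / (n * C * (n * C + M0 * (n - 2)))
    return pt * dndy * norm * (1 + (mt - M0) / (n * C)) ** (-n)

# Toy data drawn from the model itself, just to exercise the fit machinery.
pt = np.linspace(0.2, 2.4, 12)
y = levy_tsallis(pt, 0.18, 0.16, 7.0) * np.random.default_rng(1).normal(1, 0.03, pt.size)
popt, pcov = curve_fit(levy_tsallis, pt, y, p0=[0.2, 0.15, 6.0])
print("dN/dy, C, n =", popt)
```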
Our values of yield and ⟨p_t⟩ given in Tables 4 and 5 agree well with the results from p̄p collisions at the same √s [31]. Figure 15 compares the ⟨p_t⟩ with measurements in pp collisions at √s = 200 GeV [32,33] and in p̄p reactions at √s = 900 GeV [31]. The mean p_t rises very little with increasing √s, despite the fact that the spectral shape clearly shows an increasing contribution from hard processes. It was already observed at RHIC that the increase in mean p_t at √s = 200 GeV compared to studies at √s = 25 GeV is small. The values obtained in pp collisions are lower than those for central Au+Au reactions at √s = 200 GeV [32].
The spectra presented in this paper are normalized to inelastic events. In a similar study by the STAR Collaboration the yields have been normalized to NSD collisions [32]. In order to compare these two results, the yields in Table 4 have been scaled to NSD events, multiplying by 1.185 (see Section 2.2). The yields of pions increase from √s = 200 GeV to 900 GeV by 23%, while K+ rises by 45% and K− by 48%.

Figure 16 shows the K/π ratio as a function of √s, both in pp (full symbols, [32,34,35]) and in p̄p (open symbols, [36-38]) collisions. For most energies, (K+ + K−)/(π+ + π−) is plotted, but for some cases only neutral mesons were measured and K⁰/π⁰ is used instead. The p_t-integrated (K+ + K−)/(π+ + π−) ratio shows a slight increase from √s = 200 GeV (K/π = 0.103 ± 0.008) to √s = 900 GeV (K/π = 0.123 ± 0.004 ± 0.010) [32], yet consistent within the error bars. The results at 7 TeV will show whether the K/π ratio keeps rising slowly as a function of √s or saturates.

[Table 4 caption] Integrated yield dN/dy (|y| < 0.5) with statistical and systematic errors, and ⟨p_t⟩, as obtained from the fit with the Lévy function, together with the lowest p_t experimentally accessible, the fraction of extrapolated yield and the χ²/ndf of the fit (see text). The systematic error of dN/dy and of the ⟨p_t⟩ includes the contributions from the systematic errors of the individual detectors, from the choice of the functional form for extrapolation and from the absolute normalization.
Protons and antiprotons in Table 4 have been cor-
The upper panel of Figure 18 shows the p_t-dependence of the K/π ratio, together with the measurements by the E735 [36] and STAR [32] Collaborations. It can be seen that the observed increase of K/π with p_t does not depend strongly on collision energy.
A comparison with event generators shows that at p_t > 1.2 GeV/c, the measured K/π ratio is larger than any of …
In the bottom panel of Figure 18, the measured p/π ratio is compared to results at √s = 200 GeV from the PHENIX Collaboration [41]. Both measurements are feed-down corrected. At low p_t, there is no energy dependence of the p/π ratio visible, while at higher p_t > 1 GeV/c, the p/π ratio is larger at √s = 900 GeV than at √s = 200 GeV.
Event generators seem to separate into two groups: one with a high p/π ratio (PYTHIA CSC and D6T), which agrees better with the data, and one with a lower p/π ratio (PHOJET and PYTHIA Perugia0), which are clearly below the measured values. These comparisons can be used for future tunes of baryon production in the event generators.
We present the first analysis of transverse momentum spectra of identified hadrons, π+, π−, K+, K−, p, and p̄, in pp collisions at √s = 900 GeV with the ALICE detector. The identification has been performed using the dE/dx of the inner silicon tracker, the dE/dx in the gas of the TPC, the kink topology of the decaying kaons inside the TPC and the time-of-flight information from TOF. The combination of these techniques allows us to cover a broad range of momentum. | 2017-09-28T11:35:06.000Z | 2011-01-21T00:00:00.000 | {
"year": 2011,
"sha1": "a896e8f14ceb9d2bc82f3c9e2a9377a21a6db004",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-011-1655-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "a896e8f14ceb9d2bc82f3c9e2a9377a21a6db004",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
250334300 | pes2o/s2orc | v3-fos-license | Commuting graph of a group action with few edges
Let $A$ be a group acting by automorphisms on the group $G.$ \textit{The commuting graph $\Gamma(G,A)$ of $A$-orbits} of this action is the simple graph with vertex set $\{x^{A} : 1\ne x \in G \}$, the set of all $A$-orbits on $G\setminus \{1\}$, where two distinct vertices $x^{A}$ and $y^{A}$ are joined by an edge if and only if there exist $x_{1}\in x^{A}$ and $y_{1}\in y^{A}$ such that $[x_{1},y_{1}]=1$. The present paper characterizes the groups $G$ for which $\Gamma(G,A)$ is an $\mathcal{F}$-graph, that is, a connected graph which contains at most one vertex whose degree is not less than three.
Introduction
Throughout this paper, group means a finite group. A great deal is known about deriving information on the structure of a group G from certain properties of an associated graph. The commuting graph of a group has been one of the most popular among such graphs, and several results indicating the influence of the commutativity relation on the structure of a group have been obtained. As a generalization, the commuting graph of conjugacy classes has been introduced and analyzed in [3]. Recently, focusing attention on a further generalization, the concept of the commuting graph of a group action has been introduced in [1] as follows.
Definition 1.1. Let A be a group acting by automorphisms on the group G. The commuting graph Γ(G, A) of A-orbits is the graph with vertex set {x^A : 1 ≠ x ∈ G}, the set of all A-orbits on G \ {1}, where two distinct vertices x^A and y^A are joined by an edge if and only if there exist x_1 ∈ x^A and y_1 ∈ y^A such that x_1 and y_1 commute.
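To make the definition concrete, a small self-contained sketch (our own code, not from [1]): take G = S3 as permutations and A = Inn(G), so the A-orbits are exactly the conjugacy classes.

```python
from itertools import permutations

def compose(p, q):                         # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))           # the symmetric group S3
e = tuple(range(3))
inverse = {g: next(h for h in G if compose(g, h) == e) for g in G}

def orbit(x):                              # A-orbit = conjugacy class of x
    return frozenset(compose(compose(g, x), inverse[g]) for g in G)

vertices = {orbit(x) for x in G if x != e}
edges = set()
for X in vertices:
    for Y in vertices:
        if X != Y and any(compose(x, y) == compose(y, x)
                          for x in X for y in Y):
            edges.add(frozenset((X, Y)))

print(len(vertices), "vertices,", len(edges), "edges")  # -> 2 vertices, 0 edges
```

For S3 the two vertices (the transpositions and the 3-cycles) are nonadjacent, so Γ(S3, Inn(S3)) is disconnected; this already shows how strongly the choice of A shapes the graph.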
In [1] the connectedness of Γ(G, A) has been studied and the structure of the group G has been investigated in cases where Γ(G, A) is complete or triangle free or contains a complete vertex or contains an isolated vertex. We would like to mention here only the fact that G is nilpotent if Γ(G, A) is complete (see Theorem 3.1 in [1]) due to its appearance in the next sections.
Of course, for further study on Γ(G, A) several problems can be suggested. In the present article, as a matter of taste, we want to characterize all groups G admitting a group A of automorphisms so that Γ(G, A) has few edges. More precisely, we fix the notion of a graph "having few edges" as being an F-graph in the following sense.

Definition 1.2. A finite simple graph Γ is an F-graph if it is connected and contains at most one vertex whose degree is not less than three (a small checker for this condition is sketched after item (4) below). If an F-graph Γ contains a …

(3) 1 ≠ O_p(G) = F(G) and C_G(F(G)) ≤ F(G), where either G is a quasisimple group isomorphic to one of SL(2,5), 2·PSL(3,4), 2²·PSL(3,4), or G = O_p(G) × E(G) with O_p(G) = ⟨z^A⟩ and E(G) ≅ PSL(2,5).
(4) 1 ≠ O_p(G) ≠ F(G), such that F(G) = P × Q where P = O_p(G) is an elementary abelian p-group, Q = O_{p′}(F(G)) is a Sylow q-subgroup of G for another prime q and is elementary abelian, G/P is a Frobenius group with kernel F(G)/P and complement either of prime order or a p-group which has a unique subgroup of order p. Furthermore, both P \ {1} and Q \ {1} are A-orbits.
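The promised checker for Definition 1.2 (our own sketch; "F-graph" means connected with at most one vertex of degree at least three):

```python
from collections import deque

def is_F_graph(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    if not adj:
        return True
    start = next(iter(adj))
    seen, queue = {start}, deque([start])
    while queue:                           # BFS connectivity check
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    if len(seen) != len(adj):
        return False
    return sum(1 for v in adj if len(adj[v]) >= 3) <= 1

# Two triangles and a P2 tail glued at z (the shape of Example 3.14 below):
star = {"z": {"a", "b", "c", "d", "t1"}, "a": {"b", "z"}, "b": {"a", "z"},
        "c": {"d", "z"}, "d": {"c", "z"}, "t1": {"z", "t2"}, "t2": {"t1"}}
print(is_F_graph(star))  # True: only z has degree >= 3
```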
This paper is divided into five sections. Section 2 investigates the structure of the group G when Γ(G, A) contains no singular vertex. Section 3 is devoted to the completion of the proof of the theorem above in case Γ(G, A) contains a singular vertex. This section is divided into four subsections, the first of which includes several basic observations arising from the existence of a singular vertex, to most of which we appeal frequently throughout the rest of the paper, while the second presents some critical groups G for which Γ(G, A) is not an F-graph for any A ≤ Aut(G). The other subsections study the cases O_p(G) = 1 and O_p(G) ≠ 1 separately and contain several examples. The paper ends with Section 4, including some final remarks on some immediate consequences of Theorem 1.
Before closing this introduction we want to remind the reader of two basic well-known facts which we shall use repeatedly without explicit reference: (1) If A is a noncyclic, elementary abelian group acting coprimely by automorphisms on the group G, then G = ⟨C_G(a) : 1 ≠ a ∈ A⟩. (2) If A = FH is a Frobenius group with kernel F and complement H, acting on the group G by automorphisms so that C_G(F) = 1, then C_G(H) ≠ 1. For all the properties of the simple groups appearing in this paper we refer to the Atlas [5] without mentioning it explicitly.
To simplify the notation, throughout Γ will denote the commuting graph Γ(G, A) of A-orbits of the finite group G.
When Γ is an F -graph without singular vertex
We begin by examining the structure of G in case where Γ is an F -graph with no singular vertex.
Proposition 2.1. Suppose that Γ has no singular vertex. Then (i) Γ is either P n with n ≤ 3 or C 3 ; (ii) Either G is a p-group for some prime p, or Γ = C 3 and G = P × Q where P and Q are elementary abelian p-and q-groups for some distinct primes p and q, respectively. Furthermore, P \ {1}, Q \ {1} and G \ (P ∪ Q) are all the distinct A-orbits.
Proof. (i) An F -graph with no singular vertex is a connected graph in which every vertex is of degree at most 2 and hence it is either P n for some positive integer n or C m with m ≥ 3. If G is a p-group for a prime p then any vertex x A for some x ∈ Z(G) is a complete vertex with deg(x A ) ≤ 2 which yields that |V (Γ)| ≤ 3 proving the claim. Assume next that |π(G)| ≥ 2 and every element is of prime power order. As the graph is connected there must exist two adjacent vertices x A and y A represented by elements of coprime orders which is impossible. Finally assume that there exists x ∈ G the order of which is divisible by two distinct primes. Then x A is a vertex of a subgraph isomorphic to C 3 whence Γ = C 3 which completes the proof of (i).
(ii) Assume that |π(G)| ≥ 2. An argument in the proof of (i) shows that Γ = C_3 is a complete graph. It follows by Theorem 3.1 of [1] that the group G is nilpotent. Let p and q be two distinct elements of π(G), and let x and y be elements of G of orders p and q, respectively. Since x and y commute, it holds that Γ[x^A, y^A, (xy)^A] = C_3 = Γ, that is, V(Γ) = {x^A, y^A, (xy)^A}. In particular, π(G) = {p, q}, x^A is the set of elements of G of order p, y^A is the set of elements of order q, and all the other nonidentity elements of G are of order pq, forming the orbit (xy)^A.
Let P be the Sylow p-subgroup of G. Clearly P \ {1} = x A . We may assume that x ∈ Ω 1 (Z(P )). It follows that P is elementary abelian. Similarly the Sylow q-subgroup Q of G is elementary abelian and y A = Q \ {1}. Then the set {uv : u ∈ P \ {1}, v ∈ Q \ {1}} = (xy) A consists of elements of G of order pq. This establishes (ii).
Remark 2.2.
Observe that for any two distinct prime numbers p and q and any elementary abelian p-group P and any elementary abelian q-group Q there exists a subgroup A of Aut(P × Q) such that Γ(P × Q, A) = C 3 .
3. When Γ is an F-graph with singular vertex

The following precise description of Eppo-groups (or CP-groups by some authors), namely groups in which every element is of prime power order, will be frequently used in the rest of the paper.
Theorem 3.1 (Main Theorem of [2]). One of the following holds for any Eppo-group E with |π(E)| ≥ 2: … (c) E is isomorphic to one of the following groups: PSL(2, q) for q ∈ {5, 7, 8, 9, 17}, PSL(3,4), Sz(8), Sz(32), M₁₀; (d) O_2(E) ≠ 1 and E/O_2(E) is isomorphic to one of the following groups: …

Henceforth we shall concentrate on the question of what information can be deduced about the group G when Γ is an F-graph with singular vertex. All results of this section are obtained under the hypothesis below without explicitly mentioning it in each case.
Hypothesis. Suppose that |π(G)| ≥ 2 and that Γ = Γ(G, A) is an F -graph with the singular vertex z A . Fix p as one of the prime divisors of |z|.
KEY OBSERVATIONS.
Proposition 3.2. The following hold for Γ.
(a) |z| = p. (b) Each connected component ∆ of Γ[V \ {z^A}] is isomorphic to P_n for some positive integer n. One of the pendant vertices of ∆ is connected to z^A in Γ, and the only vertices of ∆ which are connected to z^A in Γ are contained in the set of pendant vertices of ∆.
(c) If x^A ∼ y^A with (|x|, |y|) = 1, then x and y are of prime orders and z^A ∈ {x^A, y^A}. In particular, for any q-element u with q ≠ p, the centralizer C_G(u) is a {p, q}-group, and for any p-element v with v^A ≠ z^A, the centralizer C_G(v) is a p-group. (d) If M and N are A-invariant normal subgroups of G with z^A ∩ M ⊆ N, then M/N is an Eppo-group. (e) If z^A is a complete vertex of Γ, then either A is not contained in Inn(G), or z ∈ Z(G) and G/Z(G) is an Eppo-group. Furthermore, Γ consists of a certain number of C_3's and a certain number of P_2's joined at z^A.
(f) If ∆ is a connected component of Γ[V \ {z^A}] which is not a subgraph of a triangle in Γ, then |x| divides p² for each x^A ∈ V(∆). In particular, the exponent of a Sylow p-subgroup of G divides p³.
(g) Let Q be a Sylow q-subgroup of G for a prime q ≠ p and let 1 ≠ x ∈ Ω_1(Z(Q)). Then Q \ {1} ⊆ x^A and exp(Q) = q. In particular, any two commuting nontrivial q-elements lie in the same A-orbit.
(h) Let H be an A-invariant subgroup of G and q be a prime different from p. Then for any Q ∈ Syl_q(G), either Q ≤ H or Q ∩ H = 1. In particular, (|H|, |G : H|) is a power of p. (i) The Grünberg-Kegel graph of G is the complete bipartite graph K_{1,n} where n + 1 = |π(G)|.
(j) (z x ) A = z A for any x ∈ GA. In particular z A is invariant under conjugation by elements of G, that is, z A is a normal subset of G.
(k) If there exists a Sylow p-subgroup P such that z ∈ Z(P ) then z A is a complete vertex of Γ and z A ⊂ O p (G).
(l) The distance of a pendant vertex from z A is at most two.
(m) If C n appears as the subgraph of Γ then it contains z A as a vertex and n ≤ 4.
Proof. (a) Suppose that z^p ≠ 1. Then (z^p)^A is a vertex different from z^A which is adjacent to each vertex adjacent to z^A. This leads to the contradiction deg((z^p)^A) ≥ deg(z^A) > 2. Therefore |z| = p, as claimed.
It follows by a similar argument as in the proof of Proposition 2.1 (i) that ∆ = P n for some positive integer n or C m with m ≥ 3. The latter is impossible as Γ is connected and z A is the only vertex of Γ of degree greater than two. Because of the same reason the only vertices of ∆ which are connected to z A in Γ are contained in the set of pendant vertices of ∆. Clearly at least one of the pendant vertices of ∆ is connected in Γ to z A .
(c) Assume that x^A ∼ y^A and that (|x|, |y|) = 1. Without loss of generality we may assume that x and y commute. Then Γ[x^A, y^A, (xy)^A] = C_3 and hence is not a subgraph of Γ[V \ {z^A}] by (b). It follows that z^A ∈ {x^A, y^A, (xy)^A}. Without loss of generality we may assume that x^A = z^A. Let q be a prime divisor of |y|. If |y| ≠ q then we get Γ[x^A, y^A, (xy)^A, (y^q)^A] = K_4, which is also impossible by paragraph (b). Let now u be a q-element where q ≠ p and let w ∈ C_G(u) be an r-element for some r ≠ q. Then u^A ∼ w^A, and so r = p by the above argument. Next let v be a p-element such that v^A ≠ z^A. Clearly v cannot be centralized by an element the order of which is divisible by a prime different from p.
(d) Suppose that z^A ∩ M ⊆ N and that there exists an element xN of order rs for two distinct primes r and s in M/N. Since ⟨x⟩ contains an element of order rs, we observe by (c) that p ∈ {r, s} and the p-part u of x belongs to z^A. This leads to a contradiction, as z^A ∩ M ⊆ N but u ∉ N. Therefore every nontrivial element of M/N has prime power order, that is, M/N is an Eppo-group.
(e) That z^A is a complete vertex means that for any nonidentity x ∈ G there exists some a ∈ A such that [z^a, x] = 1, that is, ⋃_{a∈A} C_G(z)^a = G. If A ≤ Inn(G), then ⋃_{a∈A} C_G(z)^a ⊆ ⋃_{g∈G} C_G(z)^g, which yields that C_G(z) = G. Thus one can immediately conclude by (d) that G/Z(G) is an Eppo-group.
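The last implication uses the classical fact that a finite group is never the union of the conjugates of a proper subgroup. A brute-force illustration for G = S3 and its cyclic proper subgroups (our own demo, not part of the proof):

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))
e = tuple(range(3))
inv = {g: next(h for h in G if compose(g, h) == e) for g in G}

def subgroup_generated(gens):
    S, frontier = {e}, set(gens)
    while frontier:                        # naive closure under products
        S |= frontier
        frontier = {compose(a, b) for a in S for b in S} - S
    return S

for x in G:
    H = subgroup_generated([x])            # every cyclic subgroup of S3
    if len(H) < len(G):                    # proper subgroups only
        union = {compose(compose(g, h), inv[g]) for g in G for h in H}
        assert len(union) < len(G)         # conjugates never cover G
print("no proper cyclic subgroup's conjugates cover S3")
```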
(f) Let ∆ be a connected component of Γ[V \ {z^A}] which is not a subgraph of a C_3 in Γ and pick x^A from V(∆). Suppose that |x| is divisible by two distinct primes r and s. Then there are x_1 and x_2 in ⟨x⟩ of orders r and s, respectively. Then Γ[x_1^A, x_2^A, x^A] = C_3 is a subgraph of Γ, which is not possible. This shows that |x| is a power of a prime q. In fact every vertex in ∆ must have a representative having order which is a power of the same prime q, because otherwise z^A appears as a vertex in ∆ by (c). Let u^A be a vertex of ∆ which is adjacent to z^A. Without loss of generality we may assume that u and z commute. If q ≠ p then (uz)^A ∈ V(∆), which is impossible. Thus we have q = p. Furthermore, if p³ divides |u|, we see that Γ[u^A, (u^p)^A, (u^{p²})^A] = C_3 and hence (u^{p²})^A = z^A by (b) and (a). In particular |u| = p³, completing the proof.
(g) Let Q be a Sylow q-subgroup of G for some q ≠ p and let 1 ≠ x ∈ Ω_1(Z(Q)). … containing x^A and y^A. As p ≠ q, we have |∆| = 2 by (f). We may assume that x^A ∼ z^A and that x commutes with z. It follows now that (xz)^A ∈ V(∆), which is a contradiction. Therefore Q \ {1} ⊆ x^A and hence exp(Q) = q.
(h) Let H be an A-invariant subgroup of G, and let Q be a Sylow q-subgroup of G for q ∈ π(G) \ {p} such that Q ∩ H ≠ 1. Recall that Q \ {1} ⊆ x^A for any x ∈ Ω_1(Z(Q)) \ {1} by (g). Pick y ∈ Q ∩ H of order q. It holds now that Q \ {1} ⊆ x^A = y^A ⊂ H and the claim follows.
(i) We observe by (g) that for each prime q ∈ π(G) \ {p} there exists an element of order pq in G. On the other hand the existence of an element in G of order qr for distinct primes q and r in π(G) \ {p} is impossible by (c).
(j) Notice that, for any x ∈ GA, z x is a p-element of G centralizing a q-element of G for some prime q = p by (g). Then z x is A-conjugate to z and so (z x ) A = z A .
(k) Suppose that there exists a Sylow p-subgroup P such that z ∈ Z(P). Then for any g ∈ G there exists some a ∈ A, by (j), such that z^a ∈ Z(P^g), which means that z^A ∩ Z(S) ≠ ∅ for any S ∈ Syl_p(G). We shall observe that z^A ∼ x^A for any 1 ≠ x ∈ G. This is clear by (c) if x is of composite order, and by (g) if |x| is a power of a prime different from p. Assume now that x is a p-element, and let S ∈ Syl_p(G) such that x ∈ S. As z^A ∩ Z(S) ≠ ∅, we see that x^A is adjacent to z^A, as claimed. As x is arbitrary, we see that z^A is a complete vertex. It follows now by Theorem 3.5 in [1] …

(l)-(m) Let the induced graph on {x_i^A : i = 0, 1, . . . , n} be a path with n > 4 so that x_0 = z and x_i^A ∼ x_{i+1}^A for i = 0, 1, . . . , n − 1. Without loss of generality we may assume that [x_i, x_{i+1}] = 1, i = 0, 1, . . . , n − 1. By (f) the orders |x_i| divide p² for each i > 0. Let T ∈ Syl_p(G) containing x_1, x_2 and pick a nonidentity element t_1 from Z(T). We see … On the other hand there exists g ∈ G such that z^g ∈ T.
Note that (z^g)^A = z^A by (j). If t_1^A = x_2^A then x_2^A is adjacent to z^A and hence n = 2; and … n > 2, then we apply the same argument to {x_2^A, x_3^A} and find an element t_2 such that t_2^A is adjacent to each … This proves that the distance of any pendant vertex from z^A is at most two and any cycle appearing as a subgraph of Γ is of length at most four.
We also frequently appeal to the next proposition in the rest of this paper.
Proof. (a) If M/N is a nonsolvable A-chief factor of G then M/N is a direct product of isomorphic nonabelian simple groups. In case where M/N is not simple there exist two prime numbers r and s distinct from p so that M/N and hence G contains an element of order rs which is impossible by Proposition 3.2 (c).
On the other hand, by Proposition 3.2 (g), for any x ∈ M of order q there exists a ∈ A such that [z^a, x] = 1, which means that xN commutes with z^a N = (zN)^a. This implies that the Grünberg-Kegel graph of the simple group M/N has the vertex p as a complete vertex.
In case where p = 2 it follows by Theorem 7.1 in [4] that M/N is an alternating group A_n for some n such that there are no prime numbers r with n − 3 ≤ r ≤ n. Let now m = n if n is odd and m = n − 1 if n is even. Then there exists an m-cycle, say σ, in A_n. Now |σ| = m is an odd integer which is not a prime and cannot be divisible by two distinct primes by Proposition 3.2 (c). So m = r^k is a prime power with k > 1, which is impossible by Proposition 3.2 (g), as C_{A_n}(σ) = ⟨σ⟩. This shows that p must be odd, and so Sylow 2-subgroups of M/N are elementary abelian by Proposition 3.2 (g). Using the main result of [9] we see that one of the following holds for the simple group M/N: … where s ≡ ±3 (mod 8). Since π(K) contains at least three distinct primes, the case (iii) cannot occur because otherwise we would get p = 2. The case (i) cannot occur either, because otherwise the centralizers of involutions are Sylow 2-subgroups of M/N but their orders must be divisible by p. Therefore we are left with the case (ii), that is, M/N ≅ PSL(2, s) where s ≡ ±3 (mod 8). Suppose that s = 3 + 8k for some k. Then |M/N| = s(1 + 4k)·4(1 + 2k), and M/N has cyclic subgroups of orders (1 + 4k) and 2(1 + 2k) which contain the centralizer in M/N of any of its nontrivial elements of odd order, by Kapitel II, Satz 8.3 and Satz 8.4 in [6]. So we see that p divides both 1 + 2k and 1 + 4k, which is not possible. Similarly, if s = 5 + 8k for some k, then |M/N| = s(3 + 4k)·4(1 + 2k) and M/N has cyclic subgroups of orders (3 + 4k) and 2(1 + 2k) which contain the centralizer in M/N of any of its nontrivial elements of odd order, by Kapitel II, Satz 8.3 and Satz 8.4 in [6]. This forces that p divides both 1 + 2k and 3 + 4k, which is also impossible, completing the proof of (iii).
Proposition 3.4. If a Sylow 2-subgroup of G is nonabelian then p = 2 and all Sylow subgroups for odd primes have prime exponent . If a Sylow 2-subgroup of G is abelian then either G is solvable or G has only one nonabelian A-chief factor in an A-chief series and that is isomorphic to P SL(2, 5).
Proof. The first claim is clear by Proposition 3.2 (g). Assume now that G is nonsolvable and a Sylow 2-subgroup of G is abelian. Let M/N be a nonabelian A-chief factor of G. Then by Proposition 3.3 (b) and Proposition 3.2 (d), M/N is a simple Eppo-group with abelian Sylow 2-subgroups, and hence isomorphic to PSL(2,5) or PSL(2,8).
Assume first that M/N is isomorphic to PSL(2,8). Then M/N has a cyclic subgroup of order 9 and so p = 3. This shows that 2 does not divide |N| |G/M|, whence M/N is the only nonsolvable A-chief factor. Therefore N is solvable. Let T be a Sylow 2-subgroup of M. Then T is elementary abelian and acts by automorphisms on N/N′ and … Since it is not possible that both N′ and N \ N′ contain elements of z^A, we see that either N = 1 or N is an elementary abelian p-group generated by z^A.
If N = 1 then M is a minimal normal subgroup of GA and z ∈ G \ M by Proposition 3.3 (b) and centralizes a Sylow p-subgroup ⟨x⟩ of M. But this is not possible, because then x is an element of order … If N ≠ 1 and P ∈ Syl_p(G), then N ≤ P and hence N ∩ Z(P) ≠ 1. Without loss of generality we may assume that z ∈ N ∩ Z(P). This shows that |C_M(z)| is divisible by 2 · 3² · 7 · |N|. As M/N cannot have a proper subgroup of index less than 5, we see that C_M(z) = M and hence N = Z(M). As M′N = M, we have either N ∩ M′ = 1 or N ≤ M′. The first case brings us back to the situation N = 1, which was seen to lead to a contradiction. The second case is also not possible, because it implies that the Schur multiplier of PSL(2,8) contains a nontrivial p-subgroup, but it is known to be trivial.
So we have that M/N is isomorphic to PSL(2,5). If p = 2 then |N| |G/M| is odd and the claim follows. Thus we can also assume that p ≠ 2.
Let us now consider the A-chief factor K/S where S is the solvable radical of G. What we have seen so far shows that K/S ≅ PSL(2,5). G/KC_G(K/S) is isomorphic to a subgroup of Out(PSL(2,5)) ≅ Z_2. To show that G/K is solvable we only need to get that KC_G(K/S)/K ≅ C_G(K/S)/(K ∩ C_G(K/S)) = C_G(K/S)/S is solvable. As for r ∈ π(K/S) \ {p} = {3, 5} we know that r does not divide |S|, a Sylow r-subgroup R of K is a Sylow r-subgroup of G and is isomorphic to a Sylow r-subgroup of K/S. Therefore …

Proof. For A ≤ Aut(G) suppose that Γ is an F-graph. It is clear by Proposition 2.1 that Γ is an F-graph with a singular vertex. As G/N ≅ Sz(8) holds, there exists a 2-element x of G such that α = xN is of order 4. Note that there are two conjugacy classes of elements of order 4, represented by α and α⁻¹, in Sz(8), and that they cannot fuse in Aut(Sz(8)), as |Aut(Sz(8)) : Inn(Sz(8))| = 3. Therefore there cannot exist any … forms a triangle and hence must contain the singular vertex. By (b) of Proposition 3.3 we know that the singular vertex is contained in N. This contradicts the fact that …

Lemma 3.6. There exists no A ≤ Aut(SL(2,9)) such that Γ(SL(2,9), A) is an F-graph.
Proof. Let G = SL(2,9) and suppose that Γ is an F-graph for some A ≤ Aut(G). Let T ∈ Syl_3(G). Then T is elementary abelian of order 9, C_G(T) = T × Z(G), and N_G(T)/C_G(T) is cyclic of order 4. Hence there are two conjugacy classes of elements of order 3 in G. Now Aut(G) = Inn(G)⟨γ⟩, where γ is the automorphism of SL(2,9) arising from the automorphism x → x³ of the field GF(9). Since γ fixes a subgroup of G which is isomorphic to SL(2,3), and hence fixes an element of order 3, we see that the two conjugacy classes of elements of order 3 do not fuse into one Aut(G)-orbit. If X and Y are representatives of different A-orbits of elements of order 3 lying in the same Sylow 3-subgroup, and Z is the involution in the center of G, we see that the induced graph on the set of A-orbits represented by X, Y, XZ, YZ, Z is a clique, which shows that Γ is not an F-graph. This contradiction completes the proof.
Proof. Let G = SL(2, 7) and suppose that Γ is an F -graph for some A ≤ Aut(G). A Sylow 2-subgroup S of G is generalized quaternion of order 16 and contains a unique cyclic subgroup T = t of order 8. Notice that any two elements of order 8 in T are conjugate to each other in H = Aut(G) if and only if they are conjugate by an element in N H (T ) = SC H (T ). It follows that the induced graph on the set of A-orbits represented by t, t 2 , t 3 , t 4 is K 4 , which is impossible. This proves the claim.
THE CASE WHERE
∈ M, the group M is a Frobenius group with kernel O_p(G) by Proposition 3.2 (c). It follows that M is elementary abelian and hence is of order r for some prime … On the other hand M is not a p-group and hence is a direct product of isomorphic nonabelian simple groups. As in the proof of Proposition 3.3 (a) we conclude that M is a nonabelian simple group. Then z ∉ M by Proposition 3.3 (b), and hence M is a nonsolvable Eppo-group by Proposition 3.2 (d). As M has a nontrivial normal p-subgroup, Theorem 3.1 implies that p = 2 and M is isomorphic to either PSL(2, q) for q ∈ {4, 8}, or Sz(8), or Sz(32).
Notice that the outer automorphism groups of PSL(2,8), Sz(8) and Sz(32) are of odd order. Therefore z induces an inner automorphism on M in each of these cases, that is, … On the other hand, by a similar argument as above, one can see that O_p(G)⟨xz⟩ is a normal p-subgroup of G. Then xz ∈ z^A ∩ O_p(G), which is not the case. So we are left with the case M ≅ PSL(2,4) ≅ A_5, whence G ≅ S_5, which is also impossible since no element of order 5 is centralized by an involution in S_5. This completes the proof.
In this section we study the case where O_p(G) ≠ 1 by examining the subcases O_p(G) = F(G) and O_p(G) ≠ F(G) separately. The next result provides a precise description of the structure of G when O_p(G) and O_{p′}(F(G)) are both nontrivial.
) is a Sylow q-subgroup of G for another prime q which is elementary abelian, G/P is a Frobenius group with kernel F (G)/P and complement either of prime order or a p-group which has a unique subgroup of order p. Furthermore both P \ {1} and Q \ {1} are A-orbits.
Proof. By hypothesis Q ≠ 1. Since Q is an A-invariant, normal and nilpotent subgroup, it follows by (c), (g) and (h) of Proposition 3.2 that Q is an elementary abelian q-group for some prime q and is a Sylow q-subgroup of G. Thus we have F(G) = Q × P. Note that Γ ≠ C_3 and hence F(G) ≠ G. We observe that z^A ⊂ P by Proposition 3.8. Then C_{F(G)/P}(xP) = 1 for any x ∈ G \ F(G) by Proposition 3.2 (c), that is, G/P is a Frobenius group with kernel F(G)/P. This forces that |π(G/F(G))| = 1, and if π(G/F(G)) ≠ {p}, then G/F(G) is cyclic of prime order, as claimed.

…

Proof. Let P = F(G). By Proposition 3.8 we know that z ∈ P and G is an Eppo-group.
Step 1. If G is solvable then (a) holds.
Proof. Suppose that G is solvable. Then G, being a solvable Eppo-group, is either a q-group for some prime q, or is a Frobenius or a 2-Frobenius group with |π(G)| = 2. Let M be a minimal normal A-invariant subgroup of G. It is an elementary abelian q-group for some prime q ≠ p and is a Sylow q-subgroup of G by Proposition 3.2 (h). This shows that G cannot be 2-Frobenius with |π(G)| = 2, and hence either G is a q-group, or G is a Frobenius group with kernel M whose complement is an r-group for some prime r. If r ≠ p, then |G/M| = r by Proposition 3.2 (g). If r = p, then G/M is isomorphic to a p-group which has a unique subgroup of order p. For any Q ∈ Syl_q(G) and R ∈ Syl_r(N_G(Q)) we have M = PQ and G = MR.
Suppose that either Q is noncyclic or r ≠ p. Then N = ⟨C_N(x) : 1 ≠ x ∈ Q⟩ or C_{[N,Q]}(R) ≠ 1 for any QR-invariant section N of P. This shows that Φ(P) = 1, because otherwise both Φ(P) and P \ Φ(P) contain A-conjugates of z, which is impossible. So P is an elementary abelian p-group. Let P = P_1 ⊕ P_2 ⊕ · · · ⊕ P_k be the decomposition of P into the sum of homogeneous Q-components. For any x = x_1 + · · · + x_k ∈ P with x_i ∈ P_i, i = 1, 2, . . . , k, we define the weight of x as |{i : x_i ≠ 1}|. As A acts on the set {P_1, . . . , P_k}, we see that A stabilizes the sets of elements of P of the same weight. There… Then P_1 = P and C_Q(P) ≠ 1, which is not possible. Thus Q is cyclic. Clearly C_G(z) has order divisible by q, and by r if r ≠ p, and hence C_G(z) = G. Now z^A ⊂ Z(G) < P, as C_G(P) ≤ P. But then z ∉ N = [P, Q] ≠ 1. As QR is a Frobenius group, we obtain that C_N(R) ≠ 1. This yields a contradiction if r ≠ p, because it implies that N contains an A-conjugate of z, which is not possible. Thus r = p. Since Q is cyclic, we see that R is a cyclic p-group. If possible, let x be a p-element such that p² divides the order of xP. If x^A, (x^p)^A, (x^{−1})^A are pairwise different A-orbits, then they are adjacent to each other and hence one of them must be z^A, which is not possible. So there must exist an element a ∈ A such that x^a = x^{−1}. But then the subgroup of the semidirect product GA generated by x and a must induce a group L of automorphisms on the cyclic section M/P, which must be isomorphic to a cyclic group of order dividing q − 1. Of course this is not possible, as L is not abelian. Therefore |G : M| divides p.
Step 2. If G is nonsolvable we have O 2 (G) = 1 and hence G is either a simple Eppo-group or isomorphic to M 10 .
Proof. If G is nonsolvable, the assumption O_2(G) ≠ 1 leads to p = 2, as P = O_p(G). Therefore a Sylow 2-subgroup of G is of exponent 2 and hence is elementary abelian. This implies that a Sylow 2-subgroup of G/O_2(G) acts trivially on O_2(G), which is not possible. Then O_2(G) = 1, and hence G is either a simple Eppo-group or isomorphic to M₁₀.
Step 3. If G is nonsolvable and C G (z) contains a Sylow p-subgroup of G then (b) holds.
Proof.
We shall first observe that z ∈ Z(G) under the assumption that C G (z) contains a Sylow p-subgroup of G: For any prime q ∈ π(G) \ {p} ⊂ π(G), a Sylow q-subgroup of G is a group of exponent q which is isomorphic to a Sylow q-subgroup of G where q divides |C G (z)|.
Assume now that the Sylow 2-subgroups of G are not elementary abelian. Then p = 2 and, by Theorem 3.1, the group G is isomorphic to one of the following groups: PSL(2,7), PSL(2,9), PSL(2,17), PSL(3,4), Sz(8), Sz(32), M₁₀. We can eliminate the groups PSL(2,17) and Sz(32) from this list because they contain cyclic groups of orders 9 and 25, respectively. Among the remaining groups, PSL(2,9), M₁₀ and PSL(3,4) have elementary abelian Sylow 3-subgroups of order 9 and cyclic Sylow subgroups for primes different from 2 and 3. So |G : C_G(z)| divides 3 and hence is 1. The others, namely PSL(2,7) and Sz(8), have cyclic Sylow subgroups for odd primes. Therefore G = C_G(z) in case where the Sylow 2-subgroups of G are not elementary abelian.
Assume next that the Sylow 2-subgroups of G are abelian. Then G ∼ = P SL(2, 5) by Proposition 3.4. We get [G : C G (z)] ≤ 4 and hence z ∈ Z(G) establishing the first claim.
As z ∈ Z(G), we have ⟨z^A⟩ ≤ Z(G) = z^A ∪ {1}. In this case Z(G) is properly contained in P, since C_G(P) ≤ P. Hence, by Proposition 3.2 (e), G/Z(G) is a nonsolvable Eppo-group with nontrivial normal subgroup P/Z(G). In particular p = 2 and G is isomorphic to one of the groups SL(2,4), Sz(8), SL(2,8), Sz(32). The last two can be eliminated from this list because they contain elements of orders 9 and 25, respectively. Sz(8) cannot occur either, by Lemma 3.5.
To complete the proof of Step 3 it remains only to show that G does not split over Z(G). Assume the contrary, that is, assume that G = H × Z(G) for some subgroup H which is isomorphic to G/Z(G). Since G/Z(G) is a perfect group we see that H = H ′ = G ′ is A-invariant. As H contains an element h of order 4, the induced graph on the set of A-orbits represented by h, h 2 , hz, h 2 z is K 4 . This contradiction completes the proof.
Step 4. If G is nonsolvable and C G (z) does not contain a Sylow p-subgroup of G, then (c) holds.
Proof. Let N be an A-invariant minimal normal subgroup of G contained in P. Clearly N ≤ Z(P). Let S ∈ Syl_p(G) and 1 ≠ x ∈ N ∩ Z(S). Then we have x^A ≠ z^A. Assume that A has k orbits on N \ {1}. Note that N < S, because otherwise z ∈ Z(S). Then for any y ∈ S \ N the orbit y^A is adjacent to x^A and is different from the vertices contained in the set N. As deg(x^A) ≤ 2, we see that k ≤ 2. If k = 1, then z^A ∩ N = ∅ and hence N < P, as z ∈ P. On the other hand, G acts on N in such a way that every p′-element of G is fixed-point-free on N. Then E = N ⋊ G is a nonsolvable Eppo-group with nontrivial normal subgroup N. We see by Theorem 3.1 that p = 2. As G is nonsolvable, |G| is even and hence P < S. Again the fact that deg(x^A) ≤ 2 yields P \ N = z^A and S \ P ⊂ y^A for any y ∈ S \ P. This implies in particular that the exponent of S must be 2, and hence S is abelian. Thus we have either G ≅ PSL(2,5) or G ≅ PSL(2,8). The latter cannot occur, since PSL(2,8) contains elements of order 9 and p = 2. On the other hand, as P \ N = z^A, we see that exp(P) = 2, which implies that P ≤ C_G(z). Hence |G : C_G(z)| ≤ 4, as 3 and 5 have to divide |C_G(z)|. But the simple group G cannot have a proper subgroup of index less than 5, and so z ∈ Z(G) in case k = 1. Therefore we must have k = 2, that is, N \ {1} is the union of two A-orbits.
If N < P, then S = P since deg(x^A) ≤ 2. This means that p does not divide |G/P|, which shows in particular that p is odd. Hence a Sylow 2-subgroup of G is noncyclic, elementary abelian. If T ∈ Syl_2(G), then we have P/N = ⟨C_{P/N}(x) : 1 ≠ x ∈ T⟩ and N = ⟨C_N(x) : 1 ≠ x ∈ T⟩, which implies that (P \ N) ∩ z^A ≠ ∅ ≠ N ∩ z^A, a contradiction. Thus P = N, and hence P is abelian. As z ∉ Z(S), we see that P < S. We also clearly have P \ {1} = x^A ∪ z^A for some x ∈ P ∩ Z(S).
If the Sylow r-subgroups of G are not of exponent r, then p = r and S \ P intersects at least two A-orbits, corresponding to elements of order p and p², nontrivially, which is not the case as deg(x^A) ≤ 2. So every Sylow subgroup of G is of prime exponent; in particular, Sylow 2-subgroups of G are elementary abelian and hence G is isomorphic to either PSL(2,5) or PSL(2,8). The latter case cannot occur because a Sylow 3-subgroup of PSL(2,8) is cyclic of order 9. If p = 2, then 3 and 5 divide the order of C_G(z) and hence |G : C_G(z)| divides 4, which is impossible since PSL(2,5) has no proper subgroup of index less than 5. Therefore p is odd, G ≅ PSL(2,5), and |G : C_G(z)| divides 2p and is divisible by p.
Let yP be a nonidentity element of G/P, where y is a p-element in G \ P, and let S_1 ∈ Syl_p(G) with y ∈ S_1. Then clearly P < S_1, and y^A is adjacent to v^A for any v ∈ Z(S_1) ∩ P. Recall that P \ {1} = x^A ∪ z^A for some x ∈ P ∩ Z(S). As C_G(z) does not contain a Sylow p-subgroup of G, we have v^A = x^A. Since deg(x^A) = 2, it holds that all the p-elements in G \ P lie in the same A-orbit; in particular, all the p-elements in G \ P are of the same order.
Let T ∈ Syl_2(G). Clearly T is elementary abelian of order 4, as p is odd, and we have … is T-invariant, and t_2 acts fixed point freely on C_{P_1}(t_1), whence t_2 inverts the elements of C_{P_1}(t_1). Since nontrivial elements of C_{P_1}(t_1) are conjugate to z, we see that N_G(⟨z⟩) contains C_G(z) properly. So we have [G : N_G(⟨z⟩)] = p. It follows that p = 5, because G cannot have a subgroup of index less than 5. Clearly A normalizes F(G) and induces automorphisms on G. Let A_1 = C_A(G). Then Ā = A/A_1 ≤ Aut(G) ≅ S_5. We see that the involutions of T must lie in the same A-orbit, and for that purpose it is sufficient and necessary that there exists an element in A/A_1 that leaves T invariant and acts on it as an automorphism of order 3. Observe that the normalizer of a Sylow 2-subgroup of A_5 in S_5 is isomorphic to S_4. So the number of Sylow 3-subgroups of Ā ≤ S_5 is either 1 or 4 or 10. The first cannot occur, since a Sylow 3-subgroup of S_5 normalizes only 2 Sylow 2-subgroups and acts transitively on the remaining 3 Sylow 2-subgroups. On the other hand we know that Ā is not contained in Inn(G) ≅ A_5, because if g is an element of order a power of 5 such that ḡ is an element of order 5 of G, then g and g² are not conjugate in Inn(G), and hence g^A and (g²)^A are two adjacent, different vertices, which is impossible. Therefore Ā is isomorphic to S_4 or S_5, establishing the claim.
Remark 3.12. It is very desirable to pin down the structure of F(G) in part (b) of the above theorem and to decide whether this case can really occur. The same question about possible nonoccurrence exists also in part (c), in the light of our knowledge of irreducible S_5-modules over GF(5); these contain more than two S_5-orbits of nontrivial elements. On the other hand, F(G) is a minimal normal subgroup of GA and hence is a homogeneous G-module and also a homogeneous C_A(G)-module.
In the case Z(E(G)) = 1, every element of the simple group E(G) must be of prime order; in particular, E(G) must have elementary abelian Sylow 2-subgroups. The only simple Eppo-group satisfying these conditions is PSL(2,5), whence we have F*(G) = ⟨z^A⟩ × PSL(2,5).
Suppose now that E(G) is quasisimple with nontrivial center and look at the list of all possible groups for E(G)/Z(E(G)). The groups PSL(2,8) and Sz(32) both have trivial Schur multipliers. Since the Schur multiplier of PSL(2,17) is of order 2, we would get p = 2 in this case, and this would lead to a contradiction as PSL(2,17) contains an element of order 9. Thus F*(G)/Z(F*(G)) is isomorphic to one of the following groups: PSL(2,5), PSL(2,7), PSL(2,9), PSL(3,4), Sz(8). Notice that the Schur multiplier of PSL(2,9) is of order 6, but p ≠ 3, because otherwise a Sylow 2-subgroup of PSL(2,9) would have to be elementary abelian, which is not the case. Therefore in the first three cases F*(G) is isomorphic to one of SL(2,5), SL(2,7), SL(2,9). The Schur multiplier of PSL(3,4) is isomorphic to Z_4 × Z_12, so that Z(F*(G)) is isomorphic to one of the groups Z_2, Z_2 × Z_2, Z_3. Again the last one is not possible, as a Sylow 2-subgroup of PSL(3,4) is not of exponent 2. Essentially there exist, up to isomorphism, one quasisimple group K_1 such that Z(K_1) ≅ Z_2 and K_1/Z(K_1) ≅ PSL(3,4), and one quasisimple group K_2 such that Z(K_2) ≅ Z_2 × Z_2 and K_2/Z(K_2) ≅ PSL(3,4). In both cases F(G) = ⟨z^A⟩, and hence G/F(G) is a nonsolvable Eppo-group having a simple normal subgroup F*(G)/F(G). Appealing to the list of such Eppo-groups, we obtain that G = F*(G), since the only nonsimple Eppo-group M₁₀ cannot occur as G/F(G), because the covering group 2·PSL(2,9) cannot be extended to M₁₀, whose Schur multiplier is of order 3. By the lemmas of subsection 3.2 we can conclude that G/Z(G) cannot be isomorphic to any of PSL(2,7), PSL(2,9), Sz(8). As a result, G is isomorphic to one of SL(2,5), 2·PSL(3,4), 2²·PSL(3,4), establishing the claim.
Example 3.14. Let G = SL(2,5). Then Aut(G) has 6 orbits on G \ {1}, of lengths 1, 20, 20, 30, 24, 24, represented by elements of orders 2, 3, 6, 4, 5, 10, respectively. One can easily check that Γ(G, Aut(G)) is an F-graph consisting of 2 triangles and a tail P_2 joined at the singular vertex corresponding to the central element. The unique vertex of degree 1 corresponds to the orbit consisting of the elements of order 4. Let …

Proof. For the sake of easy understanding we shall divide the rest of the proof into smaller steps.
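As an aside, the orbit lengths quoted in Example 3.14 can be checked by brute force. The script below (our own) counts the elements of SL(2,5) by order; by the example's statement, the Aut(G)-orbits coincide with the sets of elements of a given order, so the order census reproduces the lengths 1, 20, 20, 30, 24, 24.

```python
from itertools import product
from collections import Counter

P = 5
def mul(a, b):
    """Multiply 2x2 matrices over GF(5), stored row-major as 4-tuples."""
    return tuple((a[2*i] * b[j] + a[2*i+1] * b[2+j]) % P
                 for i in range(2) for j in range(2))

I = (1, 0, 0, 1)
G = [m for m in product(range(P), repeat=4)
     if (m[0]*m[3] - m[1]*m[2]) % P == 1]          # det = 1: SL(2,5), |G| = 120

def order(g):
    k, x = 1, g
    while x != I:
        x, k = mul(x, g), k + 1
    return k

print(Counter(order(g) for g in G if g != I))
# -> {4: 30, 5: 24, 10: 24, 3: 20, 6: 20, 2: 1}
```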
Step 1. G is a {p, q}-group where F (G) is a Sylow q-subgroup of G which is elementary abelian, self-centralizing and |Z(G/F (G))| = p. Furthermore nontrivial elements of F (G) constitute an A-orbit.
Proof. Let M be an A-invariant minimal normal subgroup of G. As O_p(G) = 1, we see that C_G(M) = 1 or C_G(M) = M. In the first case M is nonsolvable. Then M is nonabelian simple, and hence z^A ∩ M = ∅ by Proposition 3.3 (a)-(b). Now Proposition 3.2 (d) implies that M is a simple Eppo-group and G is isomorphic to a subgroup of Aut(M) containing M. Clearly z induces by conjugation an automorphism ζ on M. Note that ζ is an outer automorphism, because otherwise there would exist x ∈ M such that xz ∈ C_G(M) = 1. Hence p divides |Out(M)|. If p is odd, then a Sylow 2-subgroup of M is abelian by Proposition 3.2 (g). Now, appealing to Proposition 3.4, we observe that M is isomorphic to PSL(2,5). But |Out(PSL(2,5))| = 2. This shows that p = 2.
Then all Sylow q-subgroups of M for odd primes q are of exponent q by Proposition 3.2 (g). We observe by Theorem 3.1 that M is isomorphic to one of the following groups: PSL(2,5), PSL(2,7), PSL(2,9), PSL(3,4), because PSL(2,8) and PSL(2,17) do not satisfy the exponent condition, and both |Out(Sz(8))| and |Out(Sz(32))| are odd. Notice that in all these remaining cases there exists a prime q ≠ p such that q does not divide |C_M(ζ)|, which is impossible by (g) of Proposition 3.2. Therefore M must be an elementary abelian q-group for some prime q ≠ p. We have M ∈ Syl_q(G), and M \ {1} is an A-orbit by (g) and (h) of Proposition 3.2. Clearly it also holds that C_G(M) = M.
We know by part (k) of Proposition 3.2 that z^A ∩ Z(S) = ∅ for any Sylow p-subgroup S of G. Let S ∈ Syl_p(G). If 1 ≠ u ∈ Z(S), then C_G(u) = S, as u ∉ z^A; in particular, u acts fixed point freely on O_{p′}(G) for any 1 ≠ u ∈ Z(S). This shows first that O_{p′}(G) is nilpotent, and hence O_{p′}(G) = M, and second that Z(S) is cyclic. Suppose that there exists an element x in Z(S) of order p². Then for any y ∈ S \ Z(S) we see that …

Step 2. For any S ∈ Syl_p(G), we have z ∈ X = {x ∈ S \ Z(S) : x^p = 1}, X is contained in an A-orbit while S \ (Z(S) ∪ X) is contained in another A-orbit, and one of the following holds: (a) S is extraspecial; (b) S′ is an extraspecial group of exponent p, and p is odd.
Proof. Let S ∈ Syl_p(G). By Step 1 we know that U = Z(S) is of order p and U ≠ S. The number k of A-orbits whose union covers S \ Z(S) is at most 2, because otherwise there exist x_i in S \ Z(S) such that x_i^A, i = 1, 2, 3, and u^A for any 1 ≠ u ∈ U are pairwise distinct. As u^A is adjacent to x_i^A, i = 1, 2, 3, we see that u^A = z^A, which is not possible. Suppose that k = 2. There exists x ∈ S \ U such that x ∉ z^A. Assume that x is also of order p. As U acts fixed point freely on M, we see that … If x ∈ Z_2(S), then ⟨x⟩U ⊴ S and hence x^S ⊆ xU. Thus we obtain that C_M(x) ≠ 1 and hence x ∈ z^A. If x ∉ Z_2(S), then as k = 2 we see that S \ Z_2(S) ⊂ x^A and Z_2(S) \ U ⊂ z^A. But then there exists u ∈ U such that t = xu centralizes some nontrivial element in M, implying the contradiction that t ∈ (S \ Z_2(S)) ∩ z^A. This shows that in case k = 2 any element in (S \ U) \ z^A is of order p², and hence any two elements of S \ Z(S) of the same order lie in the same A-orbit. If k = 1, then S \ Z(S) ⊆ z^A. In any case we have {x ∈ S \ Z(S) : x^p = 1} ⊆ z^A, and S \ (Z(S) ∪ X) is contained in another A-orbit, which consists of elements of order p² if it is nonempty.
Suppose that there exists an element y ∈ S of order p 2 such that y p / ∈ U = u . Then y p = v ∈ z A . The abelian group u, y acts on the nontrivial group C M (v). If x ∈ u, y \ v has some nontrivial fixed point in C M (v) then it must be A-conjugate to z and hence is contained in Ω 1 ( u, y ) ≤ u, v . But clearly any element of u, v \ v acts fixed poiny freely on C M (v). It follows that C M (v) u, y / v is a Frobenius group which is not possible as u, y / v is noncyclic of order p 2 . This contradiction shows that y p ∈ u , that is ℧ 1 (S) = U. So independent of the exponent of S we have ℧ 1 (S) ≤ U.
If U < S ′ then S ′ \ U and S \ S ′ are lying in different A-orbits. So there are no A-invariant subgroups Y such that M U < Y < M S = G except M S ′ . In particular we have Z 2 (S) = S ′ or Z 2 (S) = S. As S/U is of exponent p and Z 2 (S) ′ ≤ U < S ′ we see that Z 2 (S) = S ′ = Φ(S). Similarly, as M C S (S ′ ) is A-invariant and contains M U we see that either C S (S ′ ) ≤ S ′ or S ′ < C S (S ′ )S ′ and hence C S (S ′ )S ′ = S. The second implies C S (S ′ ) = S and hence the contradiction S ′ ≤ Z(S) = U. Thus we have C S (S ′ ) ≤ S ′ . Suppose next that z A ∩ S ′ = ∅. Then M S ′ is a Frobenius group and hence S ′ is isomorphic to either Z p 2 or Q 8 . So the unique subgroup of order p in S ′ must be A-conjugate to z which is not possible. This shows that S ′ \ U ⊂ z A and hence S \ S ′ is the set of elements of order p 2 in S.
Furthermore since exp(S ′ ) = p we have that S ′ is either extraspecial of exponent p or is elementary abelian. Now M is an irreducible GA-module and M S ′ is a normal subgroup of GA. Let M = W 1 ⊕ · · · ⊕ W m be the Wedderburn decomposition of M with respect to M S ′ . As A acts on the set {W 1 , . . . , W m } of M S ′ -Wedderburn components, it leaves W 1 ∪ · · · ∪ W m invariant. Since M \ {1} is an A-orbit by Step 1 we must have W 1 ∪ · · · ∪ W m = M and this is possible only if m = 1. Thus M is a homogeneous M S ′ -module and hence a homogeneous M S ′ /M -module. If S ′ is abelian we obtain that S ′ /C S ′ (M ) is cyclic of order p which is impossible as C S ′ (M ) = 1 and U < S ′ . So S ′ is extraspecial of exponent p which yields in particular that p is odd.
Step 3. The end of the proof.
Proof. Note that S ∼ = G H. Then H is isomorphic to a subgroup of GL(m, q) = Aut(M ), which acts transitively on the set of all nonidentity elements of M and has a normal subgroup G which is isomorphic to S. Appealing to [7] if A is assumed to be solvable, or to [8] which uses CFSG we conclude that one of the following holds: (1) SL(k, q n ) ≤ H ≤ ΓL(k, q n ), (2) holds. Then the fact that H has a normal subgroup G isomorphic to S bounds the parameters: k ≤ 2; and q n ∈ {2, 3} if k = 2 , because otherwise H does not contain a nonabelian normal p-subgroup. Suppose first that k = 1. Then H is isomorphic to a subgroup of ΓL(1, q n ). On the other hand ΓL(1, q n ) = KL where K is a normal subgroup which is isomorphic to the multiplicative group of the field GF (q n ) and L ∼ = Aut(GF (q n ), K ΓL(1, q n ) and K ∩ L = 1. Note that K is cyclic of order q n − 1 and L is cyclic of order n. We deduce that G has a cyclic normal subgroup which we denote by abuse of language by G ∩ K so that G/(G ∩ K) is also a cyclic group. Since on the other hand either S is extraspecial or S ′ is extraspecial we see that S ∼ = G must be an extraspecial group of order p 3 containing a cyclic subgroup of order p 2 and also a subgroup of order p different from Z(G). In any case there exists an element x ∈ G \ (G ∩ K). If we consider x as an element of ΓL(1, q n ) it can be given as x = hγ where h ∈ H ∩ K and γ is the unique subgroup in L of order p. In particular, p divides n and γ is the automorphism of the field GF (q n ) given by y γ = y q r where n = pr. Using this description we want to compute C H (G) : We have C H∩K (G) = C H∩K (x) ≤ C K (γ) ∼ = GF (q r ) × and hence C H (G) divides |L| |C K (γ)| = pr (q r − 1). Clearly Z(G) is of order p and is contained in (G ∩ K) ∩ C H∩K (x) which gives the additional information that p divides q r − 1. H/C H (G) is isomorphic to a subgroup of Aut(G) containing We know by [10] that Aut(G)/Inn(G) is isomorphic to Z 2 if G ∼ = D 8 and to the Frobenius group of order p(p − 1) if p is odd. But taking the metacyclic structure of ΓL(1, q n ) into account we see that the group of automorphisms induced by H on G modulo inner automorphisms must be abelian and must leave G ∩ K invariant and hence H divides 2 4 r(q r − 1) if p = 2, and divides p 3 (p − 1)r(q r − 1) if p is odd as Z p−1 ∼ = H/C H (Z(G)) acts transitively on Z(G) \ {1}. As M \ {1} is an H-orbit we see that q pr − 1 divides the order of H. So we get that q 2r − 1 divides 2 4 r(q r − 1), or p is odd and q rp − 1 divides p 3 (p − 1)r(q r − 1) where p is a prime dividing q r − 1. In particular q rp − 1 ≤ p 4 r(q r − 1) ≤ (q r − 1) 5 r < q r5 r < q rp if p > 5 which is clearly not possible. So p ∈ {2, 3, 5}.
Let us assume that p ≥ 3 and hence n = pr ≥ 3. Then by Theorem 3.5 and Theorem 3.9 in [8] we see that if (n, q) = (6, 2) there exists a prime divisor s of q n − 1 which does not divide q k − 1 for any 0 < k < n. Thus s divides (p − 1)r. If s divides p − 1 then s = 2 and q is odd and s divides q − 1 which is not possible. So s divides r and hence we have q s ≡ q(mod s) and hence q rp ≡ q r s p ≡ 1(mod s) which contradicts the definition of s. So we are left with the case (n, q) = (6, 2). In this case q n − 1 = 63, p = 3 and r = 2 giving that p 3 (p − 1)r(q r − 1) = 3 3 2 2 (2 2 − 1) which shows that q n − 1 does not divide p 3 (p − 1)r(q r − 1). So we have p = 2 and we get q r + 1 divides 2 4 r. This yields that r = 1 and q ∈ {3, 7}. Then H = ΓL(1, q 2 ) and [H : G] = q − 1.
Suppose next that k = 2. Then either q n = 3 or q n = 2 and m = k. Note that in the second case H ≤ S 3 which is not possible as G is nonabelian. In the first case SL(2, 3) = Sp(2, 3) ≤ H ≤ ΓL(2, 3) and G ≤ O 2 (H). The group G is an extraspecial group of order 8 containing the involution z outside its center and hence G ∼ = D 8 . This forces H to be the semidihedral group of order 16 containing a subgroup isomorphic to Q 8 and acting regularly on M. So (i) holds in this case. Now we can assume that (3) Therefore G ≤ N . On the other hand an element of order 5 in H acts irreducibly on N /Z(N ) and normalizes G. This implies that G = N and also S ∼ = G is an extraspecial group of order 32. There are two isomorphism classes of extraspecial groups of order 32, which differ by the number of involutions they contain. And only the one which is the central product of a dihedral group and a quaternion group and contains exactly 20 elements of order 4 admits an automorphism of order 5. This forces that S ∼ = Q 8 * D 8 . As elements of the same order in S \ Z(S) are A-conjugate we see that the elements of G of order 4 must be conjugate in H. Since an element x ∈ G of order 4 has exactly two conjugates inside G, namely x and x −1 , we see that |H : G| ∈ {10, 20}. Observe that all maximal abelian subgroups of S are isomorphic to Z 4 × Z 2 and hence for any 1 = w ∈ M we have |C S (w)| = 2 so that the subgroup GL is transitive on M \ {1} where L ∈ Syl 5 (H).
If Example 3.17. We shall now present a slightly modified version of an example given in [7]. Consider the subgroup of GL(4, 3) generated by the matrices Then we have the following relations: Set . Note that T ∩ SL(2, 3) = y 1 , y 2 is a quaternion group and acts Frobeniusly on M . Let z be an involution in T \ SL(2, 3). Then we have T = y 1 , y 2 , z and z acts on y 1 , y 2 such that y z 1 = y −1 1 , y z 2 = y 1 y 2 . Let A = M T and G = M y 1 , z . Then G is a subgroup of A of index 2 on which A acts faithfully by conjugation. If 1 = m ∈ C M (z) then the A-orbits in G \ {1} are represented by m, y 2 1 , y 1 , z, zm and are of lengths 8, 9, 18, 12, 24 respectively. It holds that Γ is an Fgraph consisting of a triangle Γ[z A , m A , (zm) A ] together with a tail Γ[z A , (y 2 1 ) A .y A 1 ] = P 2 . Proof. If Γ has no singular vertex then Proposition 2.1 shows that G is abelian of order at least 6 which is not possible. Therefore Γ possesses a singular vertex. Let p be the order of a representative of the singular vertex. If O p (G) = 1, it holds by Proposition 3.16 that A is not contained in Inn(G). Hence we may also assume that O p (G) = 1.
Final Remarks
We shall repeatedly use the fact that if Γ is an F -graph for some A ≤ Inn(G) and a Sylow subgroup T of G for some prime different from p is abelian then N G (T ) must act transitively on T \ {1}.
Suppose first that G is solvable. Then the Theorem 1 says that the structure of G can be described as G = P QR where P = O p (G), Q ∈ Syl q (G) for some prime q = p, P Q G and R ∈ Syl r (N G (Q)) where r = q. We observe that R = 1 implies that the elements of Z(Q) \ {1} lie in different A-orbits and hence that |Q| = 2 . Therefore p is odd. On the other hand z ∈ P and z is A-conjugate to all the other elements in z \ {1} and this conjugation must take place in the normalizer of z . Note that the Sylow p-subgroup of N G ( z ) centralizes z and z has to centralize an element of order 2 and hence a Sylow 2-subgroup of G. This forces that N G ( z ) = C G (z) which is impossible. Similarly we see that R/C R (Q) is of order r if it is nontrivial and if r = p then r = 2 and |Q| = 3 and p ≥ 5. As 6 has to divide |C G (z)| it holds that C P (QR) = 1. But C P (QR) \ {1} ⊂ z A which is not possible because any two elements of C P (QR) are conjugate in G if and only if they are conjugate in N G (C P (QR)) = N P (C P (QR))QR which has a center containing at least 5 elements. Therefore R is a p-group and G = G/P Q contains only one subgroup of order p. If there exists a p-element x such that |x| = p 2 in G then it is not possible that x, x −1 , x p represent different A orbits. So we obtain that G ∼ = Q 8 = a, b . Then a, b and ab lie in different A-orbits. It follows that p = 2 = |R/C R (Q)| and hence |Q| = 3. Notice that A has to act transitively on Y = X∈Sylq(G) C Z(P ) (X) \ {1}. We see that Y = C Z(P ) (Q) \ {1} as C Z(P ) (Q) is normalized by P QR = G. If Y = ∅ then Y ⊆ Z(G) as Y ∩ Z(G) = ∅. This yields C Z(P ) (Q) ≤ 2. If there exists an involution y in [P, Q] ∩ Z(S) where S ∈ Syl 2 (G) and u ∈ S \ P then y / ∈ Y and hence y, yz, u, z represent different A-orbits which are pairwise adjacent to each other. Then [P, Q] = 1 and hence P ⊆ z A ∪ {1}. As P ∩ Z(G) = 1 we get P = Z(G) = z and G/Z(G) ∼ = QR/C R (Q) ∼ = S 3 . So either G ∼ = Z 2 × S 3 or G has cyclic 2-subgroup of order 4 and a normal subgroup of order 3 such that Z(G) ∼ = Z 2 and G/Z(G) ∼ = S 3 . Clearly a Sylow 2-subgroup of QR is contained in A. Here Γ is a friendship graph with four or two C 3 's, depending on whether A is a 2-group or not. Assume next that G is nonsolvable. By the Theorem1 we see that G has a normal subgroup such that G/N is isomorphic to P SL (2,5) or P SL(2, 7) or P SL (3,4). In the last two of these groups a Sylow 2-subgroup is not abelian and so p = 2. But then in both of these groups a Sylow 7-group S is cyclic and in both of them N G (S)/C G (S) is not of order 6. Therefore we have G/N ∼ = P SL(2, 5). If p = 5 then 5 does not divide |N | and hence N G (T ) does not act transitively on T \ {1} where T ∈ Syl 5 (G). Then we must have p = 5 and we are in the case Theorem1 (2)(c) or (3). In the former case A is not contained in Inn(G) and so we have G = P × P SL (2,5) where P = z A = Z(G) is an elementary abelian 5-group. This contradiction completes the proof. Proof. Suppose that G is a nonabelian group with |π(G)| ≥ 2. If Z(G) = 1 then the commuting graph of G is Γ = Γ(G, 1) and by the above corollary we see that it is not an F -graph. So Z(G) is nontrivial. The commuting graph of G is then ∆ = Γ[G \ Z(G)].
For any x ∈ G \ Z(G) the graph ∆ induces a clique on the vertex set xZ(G) and hence |Z(G)| ≤ 3. Furthermore if a vertex in xZ(G) is adjacent to some vertex v not abelian kernel M/O p (G) so that A acts transitively on the set of nontrivial elements of M/O p (G). Then |M/O p (G)| − 1 divides |A|. Every prime dividing the order of the Frobenius complement of G/O p (G) divides also |M/O p (G)| − 1 and hence divides (|G| , |A|). This shows that Γ has no isolated vertex. Now appealing to the Theorem1 we see that G = P × Q where P and Q are elementary abelian p-and q-groups for two distinct primes p and q. We also know that both |P | − 1 and |Q| − 1 divide |A| as A acts transitively on both P \ {1} and Q \ {1}. On the other hand there exists a subgroup A of Aut(G) given as A = A 1 × A 2 with |A 1 | = |P | − 1 and |A 2 | = |Q| − 1 so that A 1 acts Frobeniusly on P and centralizes Q, and A 2 acts Frobeniusly on Q and centralizes P . So if the primes p and q and the positive integers n and m are chosen in such a way that p does not divide q m − 1 and q does not divide p n − 1 then there exists a nilpotent group G of order p n q m with elementary abelian Sylow subgroups and A ≤ Aut(G) such that Γ = C 3 .
Finally, we list some questions we would like to see them answered.
Q.1 What can be said about a p-group G admitting a group A of automorphisms such that Γ is an F-graph?
Q.2 Do the cases (b) and (c) in Proposition 3.11 really occur?
Q.3 Do the quasisimple groups 2PSL(3, 4) and 2^2 PSL(3, 4) really have a group of automorphisms A such that Γ is an F-graph?
Q.4 What can be said about the groups G such that Γ is planar for some A ≤ Aut(G)?
A Framework for Probabilistic Decision-Making Using Change-of-Probability Measures
Probabilistic decision-making is a fundamental problem considered in many disciplines from engineering to social sciences. In this article, we address decision-making in contexts where the law of large numbers (LLN) does not apply. Non-LLN regimes include almost all high-impact decisions. The rise of artificial intelligence (AI) decision making is further increasing the importance of developing principled approaches for such problems. In this regard, we first introduce a method called bounded expectation (BE) to apply the accepted principle of ignoring negligible probabilities. We show that BE provides some satisfactory results and insights into some decision-making problems. Pointing out some shortcomings of BE, we then turn to a much more general setting, using change-of-probability measures. We show that the proposed approach can be considered a generalization of expected utility theory (EUT) from two different perspectives. First, the approach converges to EUT as the number of repetitions grows. Additionally, when the fundamental distortion parameter, $\epsilon $ , is set to zero, the proposed theory reduces to EUT. We then propose a systematic approach to applying the developed framework to non-LLN decisions. Finally, through a real-world example, we compare the decisions made with the proposed method and the conventional methods. It is speculated that due to the complexity and multidimensionality of decision-making under non-LLN regimes, the presented ideas can potentially lead to considerable further research, some of which is discussed in this article.
I. INTRODUCTION
Probabilistic decision-making has been studied in different disciplines, such as engineering, computer science, philosophy, economics, and other social sciences. In a typical scenario, an intelligent agent (human or AI), analyzes its environment and takes actions to achieve some goals. When the probabilities of the potential outcomes are known or could be estimated, such decisions are referred to as decisions under risk [1]. While such decisions are already central in decision theory, their importance is increasing as the role of AI becomes more prominent. The output of machine learning (ML) algorithms can usually be interpreted in terms of probabilities. For example, a classification algorithm normally outputs the probability that the input belongs to a certain class. Therefore, many AI-based decisions are probability-based decisions, i.e., decisions under risk.
Another major shift is that an increasing number of highimpact decisions are being made by AIs. This makes the issue of having a sound and rigorous foundation for probabilistic decision-making even more important than before. There has been significant and influential work in decision theory and related fields. A very common approach is the principle of maximizing expected utility. In this context, an agent makes decisions based on a utility function that is defined carefully to best represent the real ''value'' of potential outcomes. For example, in the context of reinforcement learning, typically the agent's goal is to find policies to maximize the expected reward [2]- [9].
The principle of maximizing expected utility, while very useful in many contexts, has known limitations [10]- [12]. Somewhat implied in the axioms of the theory lies the assumption that decisions are made in environments where the law of large numbers (LLN) holds in some way. In fact, the very notions of probability and expected value are essentially asymptotic. Probabilities and expected values are empirically what is observed in the long run. Thus, a very important limitation arises when an agent is making a high-impact decision that cannot be aggregated into a large set of similar decisions. In such cases, we cannot invoke the LLN to justify application of the maximum expected utility principle. This article takes a fresh look at probabilistic decision-making with a focus on high-impact decisions for which the LLN cannot be used. We provide a general framework based on change-of-probability measures for making probabilistic decisions which can be applied in both LLN and non-LLN regimes. We will provide evidence that the proposed approach can provide consistent and satisfactory results in some practical settings. The proposed theory can be considered a generalization of expected utility theory (EUT) along two different dimensions. First, we show that as an agent repeats an action, the proposed theory converges to the expected utility theory. Nevertheless, the theory can produce results for any number of repetitions of an action. Therefore, the proposed theory can be considered a nonasymptotic theory for probabilistic decision-making.
There is a second aspect in terms of which the proposed framework can be considered a generalization of EUT. A central parameter in the proposed framework is the distortion parameter . Setting to zero reduces the proposed approach to EUT. We then propose a systematic approach for decision-making under non-LLN regimes by considering how preferences change as the value of changes.
A key insight in the proposed approach is that when an agent faces nonrepeatable decisions, it might be beneficial to place more weight on the most likely outcomes. Nevertheless, it is not clear how to do this in a principled way. This will be a main focus of this work. It is emphasized that the problem of decision-making under non-LLN regimes is a multifaceted problem, with each decision having unique aspects. Hence, it is most likely that no single approach can provide satisfactory results to all problems. Therefore, the presented work here could potentially lead to considerable further investigation. For example, combining the proposed framework with risk management techniques that mostly focus on the tails of the distributions (e.g., [13], [14]) might be a promising approach.
While the proposed theory can potentially be used in any probabilistic decision-making context, its importance could potentially be magnified by the prevalence of AI in the future for three reasons. First, as a greater number of high-stakes decisions are made by AI, it is crucial to have a formal and rigorous basis for such decisions. Second, as the output of many ML algorithms can be used to estimate probabilities, decisions under known probabilities are becoming more prevalent. Finally, the proposed theory is normative (as opposed to descriptive). Again, this is more suitable for AI applications as machines can be programmed to make optimal probabilistic decisions without being impacted by psychological biases and flaws.
A. RELATED WORKS
Probabilistic decision-making has been studied in many different disciplines. The rapid growth of AI and its applications has led to high-impact AI-based decision-making becoming more prevalent in the systems and technologies of different fields, such as medicine, autonomous vehicles, network and national security, safety and privacy, and business, [15]- [25].
As mentioned before, expected utility has been extensively used as the decision criterion. Furthermore, there are many works expanding and improving upon expected utility theory. A large collection of works are descriptive and focus on how humans make decisions [26]- [28].
There are also many normative works, such as [29]- [33]. Some works, such as those elaborating the risk-weighted expected utility approach, look at the risk aversion or risk-seeking attitudes of agents [33]. Other works focus on theories of how utilities are compared, such as rank-dependent expected utility theory [34], relative expectation utility theory [35], and cumulative utility theory [36]. As will be clear later on, it is possible to combine such methods with the proposed technique in this article as they address different aspects of decision-making.
All the abovementioned references represent significant progress in this broad field and indicate the importance of probabilistic decision-making. This article takes a fresh look at this problem from a different angle: It aims to develop a theoretical framework based on change-of-probability measures that addresses the fundamental observation that not all decisions can be aggregated into a large set of similar decisions (non-LLN regime). This is a crucial point, as many high-impact real-world decisions and incidents can be put in this category [37]- [44].
Change-of-probability measures have been applied in other contexts, such as finance [45] where the concept of a risk-neutral probability measure is used for the purpose of option pricing; communication [46]; signal processing [47]; and fuzzy measure theory [48]. Nevertheless, these applications are within a different framework, and the goals and mathematical constructions in them are different from the ones we consider here.
Historically, the St. Petersburg gamble appears to have been the first problem where mathematical expectations failed to evaluate the game's value in a rationally acceptable way. Hence, for more than 300 years, many efforts have been made to address the paradox [49]- [56].
B. CONTRIBUTIONS AND ORGANIZATION
Our contributions can be summarized as follows: • This article provides a theoretical framework for probabilistic decision-making problem that can be applied to both LLN and non-LLN regimes. The proposed change-of-measure is determined, in a principled way, as a function of the involved random variables. Such a construction allows us to ensure that the decision-making policy satisfies important requirements. To the best of our knowledge, this is the first paper that introduces such an approach.
• We introduce bounded expectation as a special case of the proposed method and show that the method can provide satisfactory solutions to some problems such as St. Petersburg paradox.
• Based on the proposed construction, we prove its desirable properties for example, the fact that the proposed method converges to the expected utility method if the actions are repeatable. The main value of the proposed approach, however, lies in its usefulness for situations in which actions are not repeatable or are repeatable only a few times.
• A key contribution is that a systematic approach for applying the proposed method is provided. The proposed method results in satisfactory answers and insights in the investigated examples.
• Finally, by turning our attention to a very popular real-world problem, i.e., angel and venture capital investment, we show that our proposed method provides a reasonable solution that, due to the benefits of being systematic, can be programmed to allow AI agents to make these kinds of decisions.
The paper is organized as follows. In Section II, we provide some discussions to better explain the motivation of the work. Next, before providing the general theory, we first present a special case of our general framework called bounded expectation in Section III. The reason for this choice is to use a simple and concrete example to focus on some important insights without becoming bogged down in mathematical details. We then formally present the theoretical framework in Section IV. There, we formally prove the ideas and concepts discussed in Section III in a much more general setting. Examples of more sophisticated and powerful change-of-measures as well as a systematic approach to decision-making under non-LLN regimes are provided in Section V.
II. MOTIVATION
Example 1 (St. Petersburg Gamble): A fair coin is tossed repeatedly until the first heads appears; if the first heads occurs on the kth toss, the agent receives 2^k units of utility. Since the agent receives 2^k units of utility with probability 1/2^k, the expected value is infinite: E[X] = Σ_{k=1}^{∞} 2^k · (1/2^k) = Σ_{k=1}^{∞} 1 = ∞.
This result is clearly controversial as has been observed by many mathematicians [49], [57]. For example, the probability that the agent wins more than 32 dollars (for simplicity, let us replace utility units by dollars from now on) is only 3%, yet the calculation suggests that the gamble is worth infinity. If the agent is considering whether to pay a large sum for this game, then this game will be a high-impact decision.
To better understand the crux of the problem, consider a slightly different scenario, where the agent can choose to repeat the game as many times as she wants and for each play she pays c dollars. Let X̄_n be the average amount received by the agent after n repetitions; then, for any c ∈ R,¹ we have P(X̄_n ≥ c) → 1, as n → ∞.
In other words, no matter how much the agent pays for each game, the agent eventually wins more than what she pays (assuming c is kept constant). Therefore, in this version of the game, it is not irrational for the agent to pay a large fee to play each game. Thus, the amount that the agent should be willing to pay for each game should depend on the total number of times the agent is allowed to repeat the game.
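To make this concrete, the short simulation below (our own illustration; the only assumption taken from the example is the payoff of 2^k with probability 1/2^k) estimates how the empirical per-play average grows with the number of plays n.

```python
import numpy as np

rng = np.random.default_rng(0)

def st_petersburg(n_plays):
    """Simulate n_plays of the St. Petersburg game: payoff 2**k, where
    k is the number of tosses until the first heads (geometric, p = 1/2)."""
    k = rng.geometric(0.5, size=n_plays)
    return 2.0 ** k

for n in [10, 100, 10_000, 1_000_000]:
    avg = st_petersburg(n).mean()
    print(f"n = {n:>9,d}   average payoff per play = {avg:8.2f}")
# The averages drift upward with n (roughly like log2(n)), illustrating that
# the 'fair' per-play price depends on how many plays are allowed.
```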
The above issue is not limited to infinite-mean random variables. In a typical decision-making scenario, an agent might be faced with a one-time high-impact decision where with a very small probability (say 1 1000 ), the agent will receive a very high reward but otherwise will receive a negligible or a negative reward. In such cases, the expected value of the reward might be very high, while it seems very unreasonable to place a very high value on such gambles since the agent is almost sure she will not receive the large reward.
Example 2 (Court Dilemma): As a second example, let us consider a plaintiff in a legal case who is offered a settlement in which she will receive one million dollars. Her lawyer estimates that she has a 35% chance of winning the case, whereupon she would receive ten million dollars, but she will receive nothing if she loses. Let us compute the expected utility for each option. Let us assume one million dollars has 10 units of utility, while ten million dollars has 40 units of utility (consistent with the diminishing effect of marginal utility [60]). Additionally, if the plaintiff goes to court and loses, the resulting outcome is not zero, as there is a psychological effect in terms of disappointment and regret. Let us thus assume that this outcome yields a utility of −5.² Let X be associated with accepting the settlement and Y with going to the court. The expected utilities are E[X] = 10 and E[Y] = 0.35 × 40 + 0.65 × (−5) = 10.75. We observe that going to the court actually yields higher utility! Nevertheless, this does not seem to be a wise choice: with 65% probability, it will result in the worst possible outcome.
¹ This can be concluded from the version of the LLN extended to infinite-mean random variables; see, for example, [58], [59].
² Two points: First, there is also some element of regret in accepting the settlement, as the plaintiff may wonder whether she could have won the larger prize, so the 10 units of utility in that case is assumed to be computed with this effect taken into consideration. Second, the exact values of utilities here are not very crucial: it is clear that different people assign different utilities to outcomes. Nevertheless, the phenomenon being discussed can often be observed. In other words, you can change the monetary rewards so that the resulting utilities are given by the values assumed in this example.
Again, here we observe that this case could be another example of a high-impact event that cannot be simply aggregated with other decisions the plaintiff makes in her life, so the LLN cannot be used to justify the expected utility approach. Of course, if the plaintiff were extremely wealthy, the story would be a different. In that case, this decision could be simply aggregated into her other financial decisions, and in that case, the LLN could be used to justify the expected utility approach.
The crucial point is the following: In real-life decisionmaking, there are scenarios where decisions cannot simply be aggregated into a large set of similar decisions. This is, for example, the case when a very high-impact decision is being made where the outcome might have large consequences. The issue is that such decisions are not governed by LLN, as they are not repeatable (or are only repeatable a few times). For such cases, the very notions of probability and expectation have limited use, as they are inherently asymptotic.
The fundamental question then becomes the following: Can we provide a theory that addresses the above issue? Such a theory might evaluate a decision differently based on how many times an agent is going to face similar decisions in total, e.g., how many times the action can be repeated. More specifically, as the number of repetitions grows, the values converge to the expected value, but the crucial value is in the finite repetitions.
This article aims to answer the above question. As we will see, the proposed theory based on change-of-probability measure provides a promising framework to achieve the above goal. In the cases we considered, the method produces results that are consistent with the decisions typically expected by decision theorists. The method can also explain several phenomena that have been empirically observed. For example, applying the theory to right-tailed distributions, we can explain the underlying dynamics behind the venture capital industry. Applying the theory to left-tailed distributions, we can gain insight into the robustness and fragility of decision policies. Finally, the theory provides satisfactory results for the two examples provided above.
Obviously, the general question of making high-impact decisions has many different aspects, and most likely, a single framework cannot provide all the answers. More likely, a combination of different approaches could provide the most satisfactory result. In that regard, the proposed framework can be considered a step toward achieving such an important goal. One positive aspect of the proposed method is that it can easily be combined with other methods such as those developed for risk management.
Finally, it is worth noting that in evaluating an action, there are two important parts: probabilities and utilities (values of rewards). In this article, we focus on probabilities. It is assumed that the agent can appropriately assign utilities (rewards).
III. BOUNDED EXPECTATION
In this section, we introduce a specific change-of-probability measure called bounded expectation (BE) and discuss how it can be used in decision-making. BE is a very simple version of the proposed theory and is not perfect. It does not fully enjoy the potential advantages of metrics that can be built using the proposed change-of-probability measure approach. Nevertheless, as we will see, it has many desirable properties. It also has the considerable advantage of having a very intuitive and interpretable definition. Thus, we consider it a first step toward our general theory. This section is less formal and focuses on insights. The rigorous formulation and proofs are provided in Section IV.
Bounded expectation can be motivated by the de minimis risk principle, which is a generally accepted principle [61], [62]. De minimis is also referred to as the principle of ignoring rationally negligible probabilities (RNP). The RNP principle states that we should ignore very small probabilities, say below . Indeed, this is what we always do, as in anything we do in real life, there is always a chance of a catastrophe, and we ignore this if the probability is small enough.
Although the RNP principle is to some extent accepted, there is no principled way to apply it in decision-making [63]. For example, suppose an agent is offered a gamble in which a coin is tossed 100 times, and depending on the sequence of heads and tails some rewards are offered to the agent. In such a gamble, the probability of any single outcome is below the threshold (the probability of each outcome is 1/2^100); therefore, it is not clear which outcomes the agent should throw away when applying the RNP principle.
This leads us to BE, which is a very simple version of our nonasymptotic metrics for evaluating the decision choices proposed in Section IV. Consider a scenario where an agent is considering one of m possible actions or choices. The random variables that represent the rewards (utilities) of the potential actions are denoted X_i, for i = 1, 2, · · · , m. The standard approach of maximizing expected utility advises choosing the action with the highest expected value (utility), E[X_i]. We now provide the BE metric, denoted by E_ε, as a way to measure the value of each action.
The basic idea is very simple: We first identify ''extreme values'' (outliers) of X_i from the right and left in such a way that the probabilities of such extreme values are in total less than or equal to ε/m. Here, ε is what we consider the rationally negligible probability, and m is the number of alternative options we are considering, i.e., the number of random variables. The BE of X_i, denoted E_ε[X_i], is then the conditional expected value of X_i given that X_i is not in the outlier region.
If the X_i's are continuous random variables, then the definition can be simplified as follows. First, for each X_i, we identify the values x_{i,min}(ε) and x_{i,max}(ε) such that the tail probabilities P(X_i < x_{i,min}(ε)) and P(X_i > x_{i,max}(ε)) are each equal to ε/(2m). Figure 1 shows these values for a continuous random variable X with a probability density function (PDF) f_X(x). In other words, for a random variable X, x_min(ε) and x_max(ε) are chosen so that each tail beyond them carries probability exactly ε/(2m). Hence, we can state the definition of BE for continuous random variables in the following way. Note that in the definition of bounded expectation below, x_{i,min}(ε) and x_{i,max}(ε) are the values associated with the random variable X_i.
Definition 1 (BE for Continuous Random Variables):
To decide between jointly continuous random variables X_i, for i = 1, 2, · · · , m, let the A_i's be the events A_i = {x_{i,min}(ε) ≤ X_i ≤ x_{i,max}(ε)}, for i = 1, 2, · · · , m.
The value of action i associated with X_i based on the BE metric is given by E_ε[X_i] = E[X_i | A_1 ∩ A_2 ∩ · · · ∩ A_m].
The above definition will be extended in a specific way that can be applied to all random variables to ensure that some regularity conditions are satisfied (which will be discussed in Section IV). Nevertheless, the basic idea shown in Figure 1 and the above definition for continuous random variables are sufficient for our discussions in this section.
Note that the BE metric cannot simply be expressed as E[X_i | a ≤ X_i ≤ b], where a and b are constants. Indeed, the difference is that in BE, the values of a and b depend on the distribution of X_i and are different for each of the X_i's. Additionally, if the X_i's are not independent, the event A_i impacts the E_ε for the other X_j's.
The intuition behind BE is that we precisely throw out the ''outliers'' in the distributions of X i s to focus on the part of the probability space that will happen with a very high probability. Note that, as will be discussed, this does not mean we are ignoring tail risks. Indeed, we will see that BE can be used to describe and analyze such risks. Moreover, the extensions and generalizations that we propose later do not throw out any part of the probability space.
In the RNP literature, there are discussions on how to choose the value below which we ignore probabilities [64], [65]. This is equivalent to the value of ε in BE. The suggested values usually range between 10^−3 and 10^−6.
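As a concrete illustration, the sketch below estimates the BE of a single continuous random variable by Monte Carlo: it discards the ε/2 lowest and ε/2 highest fractions of the samples (m = 1, so each tail carries mass ε/2) and averages the rest. The helper name and the Pareto example are our own choices; ε = 0.001 mirrors the value used in the figures.

```python
import numpy as np

def bounded_expectation(samples, eps):
    """Monte Carlo estimate of E_eps[X] for a single option (m = 1):
    condition on x_min(eps) <= X <= x_max(eps)."""
    lo, hi = np.quantile(samples, [eps / 2, 1 - eps / 2])
    kept = samples[(samples >= lo) & (samples <= hi)]
    return kept.mean()

rng = np.random.default_rng(1)
eps = 0.001

# Pareto with tail index 0.9 (infinite mean), shifted to start at 1.
x = 1.0 + rng.pareto(0.9, size=2_000_000)
print("plain sample mean        :", x.mean())                     # very large, unstable
print("bounded expectation E_eps:", bounded_expectation(x, eps))  # finite and stable
```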
Next, we will discuss using BE for decision-making. We focus on the insights and discussions and leave the proofs for Section IV. Note that if the X_i's are independent continuous random variables or there is only one X_i = X (the agent is deciding how much to pay for X), we can simply write E_ε[X] = E[X | x_min(ε) ≤ X ≤ x_max(ε)].
A. BE FOR ST. PETERSBURG GAME
Figure 2 shows the BE value of the St. Petersburg game (per play) as a function of n, the number of times the agent is allowed to play the game, for ε = 0.001. Specifically, if the agent plays the game n times independently, and X^(j)⁴ shows the reward on the jth play, we can define the sample mean as X̄_n = (1/n) Σ_{j=1}^{n} X^(j). The per-game value according to BE is given by E_ε[X̄_n].
As we see, this value increases as n becomes larger. Indeed, as n → ∞, E_ε[X̄_n] → ∞. This is exactly what we expect for a fair valuation. As one plays more, the probability of extreme observations increases, which increases the value of the game.
⁴ In this article, we normally use subscripts to identify random variables that are associated with different actions and that we are interested in comparing, i.e., X_1, X_2, · · · , X_m. Superscripts with parentheses (X^(j)) are usually used when we are referring to independent and identically distributed (i.i.d.) random variables. Subscripts in parentheses (X_(i)) are used when we refer to the order statistics of i.i.d. samples from a distribution. Finally, superscripts with brackets (X^[n]) are used when we refer to sequences of random variables that converge to another random variable.
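A rough numerical version of the Figure 2 curve can be produced with the same quantile-trimming idea; the sketch below is only meant to reproduce the qualitative growth of E_ε[X̄_n] with n, not the exact figure, and the number of Monte Carlo copies is an arbitrary choice of ours.

```python
import numpy as np

rng = np.random.default_rng(2)
eps, n_reps = 0.001, 100_000   # n_reps Monte Carlo copies of the n-play average

def be_of_sample_mean(n):
    """Estimate E_eps[mean of n independent St. Petersburg payoffs]."""
    k = rng.geometric(0.5, size=(n_reps, n))
    xbar = (2.0 ** k).mean(axis=1)                 # one sample mean per row
    lo, hi = np.quantile(xbar, [eps / 2, 1 - eps / 2])
    return xbar[(xbar >= lo) & (xbar <= hi)].mean()

for n in [1, 2, 5, 10, 50, 100]:
    print(f"n = {n:3d}   E_eps[sample mean] = {be_of_sample_mean(n):6.2f}")
# The per-play value increases with n, as in Figure 2.
```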
B. NONLINEARITY OF BE
Note that for BE, we do not necessarily have E_ε[X + Y] = E_ε[X] + E_ε[Y]. This is why, for the St. Petersburg game, independent repetitions of the game increase the per-game value. Indeed, for independent right-tailed random variables X and Y, we often observe E_ε[X + Y] > E_ε[X] + E_ε[Y]. This nonlinearity is pronounced for heavy-tailed distributions. Figure 3 shows the BE for Pareto random variables with infinite expected value. As represented, the BE of the sample mean, E_ε[X̄_n], is more than the average of the BEs, i.e., E_ε[X̄_n] > (1/n) Σ_{j=1}^{n} E_ε[X^(j)]. Furthermore, the BE converges to the actual expectation as the number of experiments increases. Figure 4 shows this fact for a Pareto random variable with finite expected value. In both Figures 3 and 4, ε is assumed to be 0.001.
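The sketch below checks this superadditivity numerically for two independent heavy-tailed Pareto variables; bounded_expectation is the same quantile-trimming helper as in the earlier sketch, and the tail index and ε are illustrative choices of ours.

```python
import numpy as np

def bounded_expectation(samples, eps):
    lo, hi = np.quantile(samples, [eps / 2, 1 - eps / 2])
    return samples[(samples >= lo) & (samples <= hi)].mean()

rng = np.random.default_rng(3)
eps = 0.001
x = 1.0 + rng.pareto(0.9, size=1_000_000)   # right-tailed, infinite mean
y = 1.0 + rng.pareto(0.9, size=1_000_000)

lhs = bounded_expectation(x + y, eps)                                # E_eps[X + Y]
rhs = bounded_expectation(x, eps) + bounded_expectation(y, eps)      # E_eps[X] + E_eps[Y]
print(f"E_eps[X + Y] = {lhs:.2f}   vs   E_eps[X] + E_eps[Y] = {rhs:.2f}")
# For right-tailed variables the left-hand side is typically the larger one.
```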
C. ON THE PROFITABILITY OF THE VENTURE CAPITAL INDUSTRY
Let us apply BE to a concrete example, venture capital investing. A venture capitalist invests in a large number (L) of right-tailed (usually heavy-tailed) options, X^(j), j = 1, 2, · · · , L. As discussed above, for such right-tailed distributions, assuming the X^(j)'s are independent, we often have E_ε[Σ_{j=1}^{L} X^(j)] > Σ_{j=1}^{L} E_ε[X^(j)]. Thus, the aggregate value of the investment portfolio is much larger than the sum of the individual items. Each startup company has a very small chance of success. It has a still smaller chance of great success, so its individual valuation is small. However, the aggregate is much more valuable, as predicted by the BE measure. We will explore this topic more deeply in Section V-C after the full theory is developed.
D. ON TAIL RISKS, ROBUSTNESS, AND FRAGILITY
The situation is reversed for left-tailed distributions. For independent random variables X and Y that are left-tailed, we often have E_ε[X + Y] < E_ε[X] + E_ε[Y]. Figure 5 shows this relation for left-tailed Pareto random variables with infinite expected value. As shown, the BE for the sum of the left-tailed random variables is less than the sum of the BEs of each random variable. Again, ε = 0.001. Similarly, Figure 6 shows this phenomenon for a Pareto random variable with finite expected value.
What does all this mean? The above can describe the situation of accumulation of risks. An agent might be taking risks that are individually acceptable but not acceptable on average. This is related to the issue of robustness and fragility [13], [14], [66]- [69]. This is the reverse of the situation in the venture capital example. Each individual risk is very limited. The probability of a loss for each action might be small. The probability of a large loss is even much smaller. However, the aggregate risk is by far larger than the sum of the risks.
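The same experiment with the sign flipped (a left-tailed loss distribution) illustrates this accumulation-of-risk effect numerically; distributions and ε are again our own illustrative choices.

```python
import numpy as np

def bounded_expectation(samples, eps):
    lo, hi = np.quantile(samples, [eps / 2, 1 - eps / 2])
    return samples[(samples >= lo) & (samples <= hi)].mean()

rng = np.random.default_rng(4)
eps = 0.001
# Left-tailed losses: small typical loss, rare very large loss.
x = -(1.0 + rng.pareto(0.9, size=1_000_000))
y = -(1.0 + rng.pareto(0.9, size=1_000_000))

print("E_eps[X + Y]        =", round(bounded_expectation(x + y, eps), 2))
print("E_eps[X] + E_eps[Y] =",
      round(bounded_expectation(x, eps) + bounded_expectation(y, eps), 2))
# The aggregated risk looks worse than the sum of the individually trimmed
# assessments, mirroring the fragility discussion above.
```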
E. LIMITATIONS OF BE: TOWARD A MORE GENERAL FRAMEWORK
The above discussions were intended to show that the BE measure can be considered a simple measure that has several desirable properties: In addition to providing a satisfactory answer to problems such as the St. Petersburg problem, it could provide insights on some practical situations. Nevertheless, it is not perfect (like any other measure). The problem is that it only solves one issue regarding one-time decisions: the RNP issue. It does not address the rest of the probability space. BE seems to provide a satisfactory answer to the St. Petersburg problem, but let us consider our second example regarding the legal case. For that case, since neither option has any outcome with probability close to ε, we obtain E_ε[X] ≈ E[X] = 10 and E_ε[Y] ≈ E[Y] = 10.75. It is easy to verify that if ε is small, the result of BE is not very different from what is predicted by the expected utility. Thus, we need a more comprehensive approach that looks at the entire probability space, not just a negligible part.
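As a sanity check of this point, the sketch below applies the sample-trimming version of BE to the two court-dilemma options; the utilities (10 for settling; 40 with probability 0.35 and −5 otherwise for going to court) are the ones assumed in Example 2, ε = 0.001, and the helper name is ours.

```python
import numpy as np

def bounded_expectation(samples, eps):
    lo, hi = np.quantile(samples, [eps / 2, 1 - eps / 2])
    return samples[(samples >= lo) & (samples <= hi)].mean()

rng = np.random.default_rng(5)
eps, n = 0.001, 1_000_000

settle = np.full(n, 10.0)                              # sure 10 units of utility
court = np.where(rng.random(n) < 0.35, 40.0, -5.0)     # 35% win, 65% lose

print("E_eps[settle] =", bounded_expectation(settle, eps))  # about 10
print("E_eps[court]  =", bounded_expectation(court, eps))   # about 10.75
# Because no outcome has probability anywhere near eps, trimming changes
# essentially nothing: BE still mildly favors going to court, just like
# plain expected utility does.
```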
To better motivate our general framework, let us now look at different views of BE. The BE operation can be thought of as normal expectation in a ''distorted'' or ''modified'' probability space, one for which the probabilities of events in A c are reduced to zero, but the probabilities of events in A are multiplied by 1 P(A) . The intuition is that we are magnifying the most likely outcomes and shrinking highly improbable outcomes since we are focusing on one-time decisions that cannot be repeated; thus, we do not have the luxury provided by the LLN. Now, this idea of modifying the probability space to better accommodate the lack of repetition and the non-LLN regime can be developed into a much more general methodology. For example, there is no reason to partition the space into only two subsets. We can partition into more sets and adjust the probability of each part in a specific way. Additionally, BE applies an abrupt change, i.e., reduces some probabilities to zero. We can instead change the probabilities in a smoother way and still enjoy the aforementioned attractive properties of BE as well as many more. All of these are covered under the proposed framework of the change-of-probability measure in the next sections. For example, we see that the general approach provides a more satisfactory answer to the legal case question.
It is clear that this change-of-probability measure operation (probability modification) cannot be arbitrary and must be done in a principled way to ensure that it is consistent with rational decision-making. Therefore, in the next section, we develop a rigorous theory for such change-of-probability measure operations for decision-making and prove their properties. This will lead to the systematic decision-making in non-LLN regimes discussed in Section V.
IV. A GENERAL FRAMEWORK: CHANGE-OF-PROBABILITY MEASURE
In this section, we develop a general framework based on the change-of-probability measure for probabilistic decision-making. The idea is to list some important properties that such change-of-measure operations must satisfy. Change-of-measure policies that satisfy these properties are called ε-consistent. The parameter ε is a measure of "distortion" imposed on the probability measures and plays a key role in our analysis (Property 6 below). We then prove some properties (such as convergence to expected utility as the number of repetitions grows) that all ε-consistent policies satisfy. We first focus on the case of uniform change-of-measures and then discuss nonuniform change-of-measures. BE is proven to be uniform ε-consistent.
Later, in Section V, we will focus on two important tasks. First, we will provide a specific method for constructing -consistent change-of-measure policies using what we call consistent functions. Second, and more importantly, we will propose a systematic approach to applying -consistent policies in probabilistic decision-making.
As our goal is to build a rigorous theory, many of the forthcoming sections are somewhat technical. Readers less interested in the mathematical details can refer to Sections V-A, V-B, and V-C to see a summary of the approach and examples of how it can be used in practice.
A. UNIFORM -CONSISTENT CHANGE-OF-MEASURE POLICIES
Consider a complete probability space ( , F, P). The random variables that represent the rewards (utilities) of potential actions are defined on this probability space. Concretely, let X i : → R, for i = 1, 2, · · · , m, show the reward associated with the m potential actions that we are considering. It is in general convenient (and not restrictive) if we assume F is the sigma field generated by all the involved random variables. So we are making this assumption unless stated otherwise. For example, if we are considering a fixed set of random variables X i , for i = 1, 2, · · · , m, we may assume F = σ (X 1 , X 2 , · · · , X m ), where, σ (X 1 , X 2 , · · · , X m ) is the sigma field generated by X 1 , X 2 , · · · , X m .
The goal here is to define a new probability measure Q on ( , F) to be used in evaluating the true value of these actions. More specifically, for a generic random variable X , the Intuitively, the new probability measure can be defined in a way to potentially amplify the most likely outcomes while weakening the highly unlikely outcomes. This could be consistent with the nonasymptotic nature of the problem, for example, a one-time high-impact decision. Therefore, our goal is to describe mappings P → Q that have desirable properties consistent with probabilistic decision-making.
First, we notice that the measure Q must be absolutely continuous with respect to the P, i.e., Q P. This is because any event that has zero probability under P must have zero probability under Q. Thus, we can use the Radon-Nikodym theorem [70] to conclude that there exists a unique integrable nonnegative random variable Z , with E[Z ] = 1, such that Here Z is the Radon-Nikodym derivative Z = dQ dP . Therefore, our goal can be equivalently stated as obtaining a mapping {X 1 , X 2 , · · · , X m } → Z , that maps any set of random variables on ( , F, P) to an integrable nonnegative random variable Z . Identifying this Z uniquely identifies the measure Q as well as v[X i ] for i = 1, 2, · · · , m. Hence, to summarize this change-of-measure operation, we write where ''ch'' stands for the change-of-probability measure.
For example, if we consider continuous random variables X_i, and for i = 1, 2, · · · , m, define A_i = {x_{i,min}(ε) ≤ X_i ≤ x_{i,max}(ε)} and A = A_1 ∩ A_2 ∩ · · · ∩ A_m, then the Z associated with the BE measure that was introduced in the previous section is given by Z = 1_A / P(A), where 1_A denotes the indicator of the event A. In general, we can rewrite Equation (1) as
v[X] = E_Q[X] = E_P[Z X].     (2)
Equation (2) provides the two interpretations for our problem: (1) the change-of-measure interpretation (P → Q) and (2) the transformation interpretation, given by v[X] = E[Z X], where the expectation is computed with respect to the original probability measure. Both interpretations are helpful and help us gain insight.
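To make the two interpretations concrete, the following sketch builds the BE weight Z = 1_A / P(A) empirically for a single option and checks that E[Z] = 1 and that E[ZX] reproduces the trimmed conditional mean; the distribution and ε are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(6)
eps = 0.001
x = 1.0 + rng.pareto(2.0, size=1_000_000)   # a right-skewed option with finite mean

lo, hi = np.quantile(x, [eps / 2, 1 - eps / 2])
A = (x >= lo) & (x <= hi)                   # the 'non-outlier' event
Z = A / A.mean()                            # empirical Radon-Nikodym weight 1_A / P(A)

print("E[Z]      =", Z.mean())              # about 1, as required of dQ/dP
print("E[Z X]    =", (Z * x).mean())        # value under the new measure Q
print("E[X | A]  =", x[A].mean())           # the same number: the trimmed mean
print("E[X]      =", x.mean())              # ordinary expectation, for contrast
```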
Obviously, the change-of-measure operation P → Q (or equivalently, defining Z ) cannot be arbitrary and must be done in a principled way to satisfy some required properties. Hence, we now proceed to identify the properties that should be satisfied by Z and the associated Q and v[X ]. The properties are listed below.
Property 1 (Finiteness 5 ): This property simply states that if under all possible scenarios, we receive a finite reward, then the value v[X ] must be finite. Although this might seem a very rational assumption, it is worth noting that this property is not satisfied by standard utility theory (e.g., in the case of St. Petersburg problem). Thus, we depart from the standard expected utility approach from the start.
The next property states that if one option always results in a better outcome, then its value must be higher. This is known as the dominance principle in decision theory.
The next property states that if two options can be made arbitrarily close to each other, their values must also be close to each other. In other words, we are ensuring the continuity of the v[·] metric. We say that a sequence of random variables X [n] , n = 1, 2, 3, · · · are dominated in absolute value by the random variable Y , if |X [n] (ω)| ≤ Y (ω), for all ω ∈ and for all n = 1, 2, 3, · · · . Property 4 (Convergence): Let X [n] 1 , n = 1, 2, 3, · · · , be a sequence of random variables on ( , F, P) and Q [n] be the corresponding measure when the X [ Note that since Q [n] P, almost sure convergence with respect to P also ensures almost sure convergence with respect to all Q [n] s.
Next, note that v[X] is not necessarily a linear operator, as we saw for the bounded expectation operator. This is in contrast to standard expected utility. Nevertheless, we require a weaker form of linearity for v[X]. Specifically, we require: Property 5: For any constants a and b, v[aX + b] = a v[X] + b. The ''+b'' part simply says that first obtaining X and then obtaining a constant reward b is equivalent to obtaining the reward X + b. The ''aX'' part says that if we multiply all the possible outcomes by a factor of a, it makes sense that the whole value is multiplied by a. Note that this has nothing to do with the concept of marginal utility: it does not say that if we are given twice the money, our utility is multiplied by two. It simply says that if under Action 1, we always get twice the utility compared to Action 2, then Action 1 is worth twice Action 2. Indeed, this property is satisfied in standard utility theory.
This property also implies that . Note that v[X ] is how much the option X is worth to an agent. It simply states that if under Action 1, an agent always obtains the negative utility that she obtains under Action 2, then Action 1 must have the negative aggregate value as Action 2. Note that this is not inconsistent with incorporating issues such as loss aversion, as those can be incorporated in the way we define utilities, so if, for example, under Action 1, the agent loses $100 for sure and under Action 2, she wins $100 for sure, the utility of Action 1 could be −150, while the utility of Action 2 could be 100. For this choice, v[ Action 1] = −v[ Action 2]. Moreover, to incorporate issues such as risk aversion or risk-seeking, one may use techniques such as risk-weighted expected utility [33] in conjunction with the change-of-measure operation proposed here. Note that The next property makes sure we do not distort the probabilities too much. This is crucial, as the agent is basing her decision on the modified probability measure Q, so we would like to make sure that for any event B, the actual probability of that event, i.e., P(B), is within of its distorted probability. Specifically: Property 6 (Bounded Distortion): For any event B ∈ σ (X 1 , X 2 , · · · , X m ), we must have This property essentially says that the total variation distance between the two probability measures P and Q on σ (X 1 , X 2 , · · · , X m ) must be less than or equal to . The VOLUME 8, 2020 parameter plays a key role in the change-of-measure operation. As we will see by increasing the from zero to positive values and examining the outcomes, we develop a systematic approach to decision-making under non-LLN regimes. It is worth noting that in this context, the interpretation of is broader than our previous narrow interpretation of BE: here, it refers to the maximum distortion in probabilities. Nevertheless, as we will see, the values of both interpretations coincide in the special case of BE.
The following lemma is useful in constructing bounded distortion change-of-measure operations.
Lemma 1 (Partition Lemma): Let D 1 , D 2 , · · · , D m with P(D i ) > 0 be sets in F that form a partition of . Let α i ≥ 0 for i = 1, 2, · · · , m be such that If we define for any B ∈ F, then Q defines a probability measure on ( , F), and for any B ∈ F, we have Proof: First, we note that we have: Let δ(P, Q) show the total variation distance between P and Q which is given by δ(P, Q) = sup A∈F
|P(A) − Q(A)| .
Since the D_i's form a partition of Ω, we have P(B) = Σ_i P(B ∩ D_i) and Q(Ω) = Σ_i α_i = 1, so Q is indeed a probability measure. Thus, for any B ∈ F, the stated bound on |P(B) − Q(B)| follows by bounding the distortion on each cell D_i separately.
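The statement of the lemma is partly garbled in this copy, so the sketch below only assumes the natural reading in which Q reassigns total mass α_i to each cell D_i while keeping P's shape inside each cell; under that assumption it checks numerically that the total variation distance equals (1/2) Σ_i |α_i − P(D_i)|, which is how a bound of the Property 6 type can be enforced.

```python
import numpy as np
from itertools import combinations

# A small discrete space partitioned into 3 cells of point masses.
p = np.array([0.1, 0.2, 0.3, 0.25, 0.15])   # P on 5 atoms
cells = [[0, 1], [2], [3, 4]]                # partition D_1, D_2, D_3
alpha = np.array([0.35, 0.30, 0.35])         # new cell masses, summing to 1

q = p.copy()
for D, a in zip(cells, alpha):
    q[D] = p[D] * a / p[D].sum()             # rescale P inside each cell to mass a

# Total variation: sup over all events B of |P(B) - Q(B)| (B ranges over atom subsets).
atoms = range(len(p))
tv = max(abs(p[list(B)].sum() - q[list(B)].sum())
         for r in range(len(p) + 1) for B in combinations(atoms, r))

bound = 0.5 * sum(abs(a - p[D].sum()) for D, a in zip(cells, alpha))
print("total variation =", round(tv, 6),
      "  (1/2)*sum|alpha_i - P(D_i)| =", round(bound, 6))
```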
Definition 2 (Uniform -Consistent Policies): Consider a probability space ( , F, P) and a mapping rule that maps any set of random variables on ( , F, P) to an integrable nonnegative random variable Z , with E[Z ] = 1. Assume Q and v[·] are the associated measure and the value function. We say that this change-of-measure operation is a uniform
-consistent policy if it satisfies properties 1 through 6. When we simply say -consistent, we mean uniform -consistent. We now proceed to prove some properties of -consistent change-of-measure policies. We say that the random variable X is symmetric around µ if 2µ − X has the same distribution as X . Our first theorem considers the case where we are evaluating v[X ] for a single (m = 1) symmetric random variable.
Theorem 1: Let v[·] be associated with an ε-consistent change-of-measure policy applied to a single random variable X. If X is symmetric around µ, then v[X] = µ.
Proof: We have v[X] = v[2µ − X] = 2µ − v[X]. The first equality is true by Property 3 and the assumption that X is symmetric around µ. The second equality is true by Property 5. We conclude that v[X] = µ. Note: It is crucial to note that the proposed framework here addresses a single issue: the non-LLN nature of some decision problems. Issues such as attitudes toward risk can be further combined with the proposed method to obtain a more comprehensive view. For example, in financial investment, it is common to prefer options with a lower variance when the expected values are the same, even when the distributions are symmetric around the mean.
We now turn our attention to the case where an agent is able to independently repeat an action several times; here, X̄_n indicates the average reward.
In this case, we show that v[X n ] converges to E[X ]. This means that as the number of repetitions of an action grows, we approach the expected utility theory. Since X n s are dominated in absolute value by an integrable random variable Y , by Property 4, we conclude that The last equality is ensured by Property 5.
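A quick numerical illustration of this convergence, using the BE trimming as the ε-consistent policy; the exponential reward distribution and all constants below are our own illustrative choices.

```python
import numpy as np

def bounded_expectation(samples, eps):
    lo, hi = np.quantile(samples, [eps / 2, 1 - eps / 2])
    return samples[(samples >= lo) & (samples <= hi)].mean()

rng = np.random.default_rng(7)
eps, n_reps = 0.001, 50_000
true_mean = 1.0                              # Exp(1) has mean 1

for n in [1, 4, 16, 64, 256]:
    xbar = rng.exponential(1.0, size=(n_reps, n)).mean(axis=1)
    v = bounded_expectation(xbar, eps)
    print(f"n = {n:4d}   v[sample mean] = {v:.4f}   (E[X] = {true_mean})")
# As n grows, the trimmed value approaches the ordinary expectation,
# in line with the convergence result above.
```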
We can often provide a stronger characterization under some regularity conditions. Specifically, suppose that X (i) s have a finite variance: 0 < Var(X (i) ) = σ 2 < ∞. We can apply the central limit theorem (CLT) and conclude that the random variables
B. BOUNDED EXPECTATION REVISITED
We now formally define BE and show that it provides anconsistent change-of-measure. Therefore, it enjoys the properties discussed above. There are two equivalent ways to define BE: One is more suitable for simulations; we call it the operational definition. The other is more suitable for direct calculations; we call it the computational definition. For simplicity, let us first assume that the m random variables associated with different options are independent. The definition then will simply be extended to the case when they are not independent. Let us start with the operational definition, which is more intuitive. Let X be the random variable indicating the reward (utility) of the action. Assume X (ω) < ∞ for all ω ∈ . Generate N i.i.d. random variables from the distribution F X (x), order them from smallest to largest, and denote the resulting sequence of random variables as X (1) , X (2) , · · · , X (N ) .
In other words, X (1) , X (2) , · · · , X (N ) is the order statistic of the random sample. Define the ''normal'' set I N as Accordingly, the outlier set is defined as {1, 2, · · · , N } − I N . Figure 7 shows this sample division for a Pareto random variable. In this figure, = 0.1 for the sake of representation.
The v_N[X]'s converge almost surely to a finite limit, which we call the bounded expectation of X. To see that the limit exists and is finite, we actually derive the limit, which gives us the computational definition of BE. Specifically, by applying the law of large numbers, we obtain the limiting expression. To consider the case where the X_i's are not necessarily independent, we can proceed similarly. Specifically, we consider the vector X = (X_1, X_2, · · · , X_m): Generate N i.i.d. samples from the distribution F_X. For each i ∈ {1, 2, · · · , m}, order the obtained vectors in terms of the value of X_i (for equal values, the ordering is random). Then, label the vectors as outliers as before. Finally, all the vectors that have been labeled as outliers at least once are removed. Note that, by the union bound, the outlier set has at most mεN elements. The remaining collection of samples gives us the normal set, and the bounded expectation can be computed as before using Equations (3) and (4). We now state and prove the main theorem regarding BE. Theorem 3: BE is an ε-consistent change-of-measure operation.
where (a) comes from the fact that Q assigns zero measure to the outlier part of the space and (b) holds according to the assumption that X_1(ω) ≤ X_2(ω) for all ω outside that part. Property 3: Since BE is computed and uniquely determined by the CDF, and is symmetric with respect to the ordering of random variables, this property is satisfied.
from which we conclude that Next, we have ≥ a_n ), where (b) results from the definition of x_min(ε) and the assumption that X is continuous. For large enough n, we conclude that which results in a_n ≥ a − β. Similarly, b_n ≤ b − β. We have − β).
Now, we conclude that the property holds. Property 5: Let Y = aX + b and consider the operational definition of BE, i.e., Equation 3. Note that the ordering of the Y^(i)'s is exactly the same as that of the X^(i)'s for a > 0 and completely reversed for a < 0. The additional +b is added to all of the X^(i)'s, so it will appear as a +b in the computation of v[Y].
Property 6: It can be concluded from the application of Lemma 1.
C. NONUNIFORM ε-CONSISTENT CHANGE-OF-MEASURE POLICIES
It is worth noting that to prove Theorems 1 and 2, we did not specifically use the fact that all random variables go through the same change-of-measure operation. In other words, as long as Properties 1 through 6 are satisfied, we can use Theorems 1 and 2. Therefore, we can actually construct change-of-measure policies such that for any X ∈ {X_1, X_2, · · · , X_m}, its value v[X] is given by v[X] = E[Z_X X] for a distortion Z_X specific to X. We refer to such policies as nonuniform change-of-measure policies. Why might we want to consider nonuniform policies? The answer is that they give us more flexibility in defining the appropriate change-of-measures. Specifically, one way to define nonuniform change-of-measure policies is to act as if m = 1 and apply the change-of-measure operation for each X_i separately to obtain v[X_i]. The big advantage here is that we do not need to deal with the way the random variables might be dependent on each other. One might argue that, at the end of the day, we are choosing one of the options and all we care about is the distribution of the resulting utility. In other words, we might be less concerned with the way the potential options are correlated, as we will only be choosing one of them.
Of course, we need to be especially careful and ensure that the fairness properties, such as the dominance property and the CDF symmetry property, are satisfied; otherwise, we might be making an unfair comparison. Properties 1 through 5 do not change; however, we provide a slightly modified version of Property 6 to make it suitable for nonuniform change-of-measure policies: Property 6-b (Bounded Distortion for Nonuniform Policies): For any i ∈ {1, 2, · · · , m} and any event B ∈ σ(X_i), we must have |Q_{X_i}(B) − P(B)| ≤ ε. Definition 3 (Nonuniform ε-Consistent Policies): Consider a probability space (Ω, F, P) and a mapping rule that maps any set of random variables on (Ω, F, P) to a set of integrable nonnegative random variables Z_{X_i} with E[Z_{X_i}] = 1. Specifically, for any X ∈ {X_1, X_2, · · · , X_m}, its value v[X] is given by v[X] = E[Z_X X]. We say that this change-of-measure operation is a nonuniform ε-consistent policy if it satisfies Properties 1 through 5 as well as Property 6-b. Note that, as we discussed above, nonuniform policies could be especially easy to work with when the change-of-measure operation is performed separately for each random variable. In such cases, it suffices to provide the mapping for each X_i. That is, for the ith random variable, we restrict our attention to the space (Ω, σ(X_i)).
V. SYSTEMATIC APPROACH TO CONSTRUCTING ε-CONSISTENT POLICIES
In this section, we focus on two important tasks. First, we will provide a specific method for constructing ε-consistent change-of-measure policies using consistent functions. Second, and more importantly, we will propose a systematic approach to applying ε-consistent policies in probabilistic decision-making. The approach is based on sweeping the parameter ε from zero to large values and looking at how preferences change in this process. The idea is to obtain a holistic view of the problem, taking into account the complexities involved in decision-making under non-LLN regimes.
As discussed before, BE has some limitations. The BE operation divides the probability space into two parts: the "normal" part and the "outlier" part, which has a probability smaller than ε. The BE then completely eliminates the outlier part of the probability space. This is not necessary; the only thing necessary is to weaken that part enough, so that Property 1 is satisfied. Additionally, there is no need to stop at two divisions. We can simply divide the space into more parts. Using the partition lemma (Lemma 1), we can simply construct an ε-consistent change-of-measure operation.
In fact, instead of dividing the space, we can apply a smooth change-of-measure operation. This seems to have some advantages over the partitioning approach.
Remember that for any random variable X, the tail function F̄_X(·) is given by F̄_X(x) = P(X > x) = 1 − F_X(x). As before, suppose that we are interested in comparing X_i : Ω → R, for i = 1, 2, · · · , m. For simplicity, let us write F_i(·) and F̄_i(·) for the CDF and tail function of X_i. The key to our method lies in what we call consistent functions.
Definition 4 (Consistent Functions): We say that g : [0, 1] → [0, 1] is a consistent function with respect to X_1, X_2, · · · , X_m if it satisfies the following conditions. 1) g is increasing. 2) (Lipschitz continuity) There exists c_g ∈ R such that |g(x) − g(y)| ≤ c_g |x − y| for all x ∈ [0, 1] and y ∈ [0, 1]. 3) On the interval [0, 1/2], g(·) is convex and we have g(x) ≤ x. 4) On the interval [1/2, 1], g(·) is concave and we have g(x) ≥ x. 5) For all x ∈ [0, 1], the stated relation holds. 6) For each i = 1, 2, · · · , m, there are constants c_i and c̄_i in R such that the stated bounds hold. It is easy to construct consistent functions, and indeed, there are infinitely many of them for any set of random variables, as we will see. Our main theorem here is the following.
Theorem 4: Let g : [0, 1] → [0, 1] be a consistent function with respect to random variables X_1, X_2, · · · , X_m. For any i ∈ {1, 2, · · · , m} and x ∈ R, define the distorted measure Q_{X_i} through g. Then, P → Q_{X_i} is a nonuniform ε-consistent change-of-measure policy, where ε is determined by g and the X_i's. Intuitively, the consistent-function properties and the change-of-measure operations in Theorem 4 are chosen to weaken the outliers and strengthen the typical outcomes. This is again consistent with our intuition that in a one-shot decision, it makes sense to focus more on the more likely outcomes.
Note: The actual value of ε for a specific set of random variables could be smaller than what is stated in Theorem 4. The value in the theorem is chosen so that a general statement can be made. It is easy to verify that, for m = 1 and non-atomic probability spaces, BE is a special case of this general policy with a suitable choice of g. Note that Condition 6 in Definition 4 is very easy to satisfy. All we need is to make sure that the function g(x) becomes relatively flat at x = 0 and x = 1. One easy way to satisfy it is to apply the BE truncation using a very small truncation probability of 10^−4. Note that this should be much smaller than the overall ε for g. The truncation can be done in a way that continuity is satisfied. Nevertheless, as this truncation level is very small, continuity at that point has a negligible practical impact.
Proof of Theorem 4: For simplicity, for i ∈ {1, 2, · · · , m} we write Q_i for Q_{X_i}. First, note that since g is Lipschitz continuous, we have Q_i(A) ≤ c_g P(A) for any A ∈ σ(X_i), so we have Q_i ≪ P, for i = 1, 2, · · · , m. Property 1: Property 1 is guaranteed due to Condition 5. Specifically, this follows using (1) and (2). Property 2: If P({ω ∈ Ω : X_1(ω) ≤ X_2(ω)}) = 1, then P(X_1 > x) ≤ P(X_2 > x); therefore, for all x ∈ R, we have the corresponding inequality between the distorted tail probabilities, where (a) results since g is increasing. Therefore, we have v[X_1] ≤ v[X_2]. Property 3: Since the v[X_i]'s are uniquely determined by the joint CDF, and the operation is symmetric with respect to the ordering of random variables, this property is satisfied.
Property 4: For simplicity, assume that the random variables are nonnegative. Let Y be the dominating random variable. For all x ≥ 0, we have the required convergence, which completes the proof.
Property 5: Let Y = X + b. Let Q_1 be the measure associated with X and Q_2 be the measure associated with Y. We then have v[Y] = v[X] + b. (Here, E_Q[·] shows the expected value with respect to measure Q.) Now let Y = aX and a > 0. Hence, v[Y] = a v[X]; similarly, for a < 0, we have v[Y] = a v[X]. Property 6-b: Let δ_i(P, Q_i) show the total variation distance between P and Q_i (measured on σ(X_i)), given by δ_i(P, Q_i) = sup_{B ∈ σ(X_i)} |P(B) − Q_i(B)|. Due to the conditions of Definition 4 (e.g., the convexity/concavity of g), it is easy to see that the supremum is attained by some B = {ω : θ_1 ≤ X_i(ω) ≤ θ_2}. We can then say that for any i ∈ {1, 2, · · · , m} and any event B ∈ σ(X_i), we have the required bound.
A. A SYSTEMATIC APPROACH TO DECISION-MAKING IN NON-LLN REGIMES
Here, we propose a systematic approach for decision-making in non-LLN regimes. We start by picking an ε-consistent policy, such as the method described in the previous section using consistent functions. Note that if we let ε = 0, we obtain the same results derived from expected utility theory. In general, let i(ε) be the preferred option for a specific ε. As we then increase ε, we take note of the possible changes in i(ε). Let ε* be the value of ε where the first change occurs in i(ε), i.e., i(ε*) ≠ i(0). The key insights are as follows: 1) The larger the value of ε*, the more stable is the choice made by the expected utility (i(0)). That is, it is more likely that the expected utility is suggesting a good option. 2) On the other hand, if the value of ε* is small, this is a strong indication that i(ε*) might be the best choice. As the problem of decision-making under non-LLN regimes is multifaceted and most likely a simple narrow approach will not be enough, the proposed method above, where we look at how the preferences change as ε changes, seems to be a step in the right direction. An interesting question for further research seems to be finding guidelines on the choice of the threshold value of ε* at which i(ε*) becomes the preferred option. As a very rough rule of thumb, one might suggest ε* < 0.05 as the threshold.
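One way to operationalize this sweep is sketched below. The value functions and the ε grid are placeholders: value_fn(i, eps) stands for any ε-consistent valuation v_ε[X_i] (for example, one built from BE or from the consistent-function construction of this section), and the toy numbers in the demo are purely hypothetical.

```python
from typing import Callable, Optional, Sequence, Tuple

def first_preference_change(
    value_fn: Callable[[int, float], float],
    num_options: int,
    eps_grid: Sequence[float],
) -> Tuple[Optional[float], int]:
    """Sweep eps over eps_grid and report (eps_star, preferred option).

    value_fn(i, eps) is assumed to return v_eps[X_i] for option i.
    Returns (None, i(0)) if the preference never changes on the grid.
    """
    baseline = max(range(num_options), key=lambda i: value_fn(i, 0.0))
    for eps in eps_grid:
        preferred = max(range(num_options), key=lambda i: value_fn(i, eps))
        if preferred != baseline:
            return eps, preferred
    return None, baseline

if __name__ == "__main__":
    def toy(i: int, eps: float) -> float:
        # Hypothetical valuations, purely for illustration: a sure 95
        # versus a gamble whose valuation decays as eps grows.
        return 95.0 if i == 0 else 100.0 - 200.0 * eps

    print(first_preference_change(toy, 2, [k / 100 for k in range(1, 51)]))
```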
B. EXAMPLE OF THE APPLICATION OF THE CHANGE-OF-MEASURE OPERATION USING CONSISTENT FUNCTIONS
To clearly present the proposed method, let us revisit the problems that we introduced in the motivation section. We choose the following consistent g function: Note that technically, the above function is discontinuous at 10^−4 and 1 − 10^−4. However, since the discontinuity jump is so small, it does not have any practical impact on our calculation. Nevertheless, one may easily make the function fully continuous by replacing the jump with a smooth curve. For α = 0, we have g(x) = x, so we obtain the standard expected utility, and ε = 0. As we increase α, ε increases. Therefore, for any α, we obtain a corresponding value for ε. Thus, to apply our systematic approach, it suffices to increase α gradually and compute the corresponding value of ε as outlined below. Figure 8 shows this g(x) for different α values.
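The explicit formula for this g is not reproduced above, so the sketch below substitutes one plausible stand-in with the same qualitative behavior: a one-parameter family g_α that equals the identity at α = 0, lies below the identity on [0, 1/2] and above it on [1/2, 1], and is flattened near 0 and 1 by a small BE-style truncation (here at 10^−4, mirroring the device described in the text). The particular cubic used is our own choice for illustration, not the paper's g.

```python
def g_alpha(x: float, alpha: float, trunc: float = 1e-4) -> float:
    """A candidate distortion g: [0, 1] -> [0, 1] with the consistent-function shape.

    g(x) = x - alpha * x * (1 - x) * (1 - 2x), for 0 <= alpha < 1, is
    increasing, convex and below the identity on [0, 1/2], concave and
    above it on [1/2, 1], and reduces to g(x) = x when alpha = 0.  The
    optional truncation flattens g near 0 and 1 (cf. Condition 6).
    """
    if x <= trunc:
        return 0.0
    if x >= 1.0 - trunc:
        return 1.0
    return x - alpha * x * (1.0 - x) * (1.0 - 2.0 * x)

if __name__ == "__main__":
    for a in (0.0, 0.3, 0.6, 0.9):
        print(a, [round(g_alpha(x, a), 3) for x in (0.05, 0.25, 0.5, 0.75, 0.95)])
```

Increasing α bends g further away from the identity and therefore increases the induced ε, which is the knob the systematic sweep turns.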
If we have a random variable X and are interested in evaluating its value, i.e., v[X], we can proceed as follows. Specifically, if the random variable X is discrete and bounded from the left, we can simplify the change-of-measure operation in Theorem 4 in the following way. Suppose {x_1, x_2, · · · } are the potential values of X in an ordered way, i.e., x_1 < x_2 < · · ·. Let p_i = P(X = x_i). Then, we obtain the changed probabilities, q_k, for k = 1, 2, . . ., as below. If the range is finite, i.e., X takes values in {x_1, · · · , x_r}, then, for k = 1, 2, · · · , r − 1, we obtain the corresponding finite-range expressions. The value of ε can be obtained using the total variation distance between P and Q, which in this case simplifies to ε = (1/2) Σ_k |p_k − q_k|. The value of X is then obtained as v[X] = Σ_k q_k x_k. Algorithm 1 represents the procedure for calculating v[X] and ε.
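The explicit update formulas are not shown above, so the sketch below rests on an assumption: that the changed probabilities are differences of g applied to the cumulative probabilities, q_k = g(F(x_k)) − g(F(x_{k−1})), with v[X] = Σ_k q_k x_k and ε computed as the total variation distance (1/2) Σ_k |p_k − q_k|. This is one plausible reading of Theorem 4 for discrete X, not a transcription of the paper's Algorithm 1; the distortion g_α is the illustrative stand-in from the previous sketch.

```python
from typing import List, Tuple

def g_alpha(x: float, alpha: float) -> float:
    # Illustrative stand-in distortion (not the paper's g).
    return min(1.0, max(0.0, x - alpha * x * (1.0 - x) * (1.0 - 2.0 * x)))

def distorted_value(values: List[float], probs: List[float],
                    alpha: float) -> Tuple[float, float]:
    """Return (v[X], eps) for a discrete X under the assumed g-based distortion.

    Assumes values are sorted increasingly and probs sum to 1; the changed
    probabilities q_k are taken as successive differences of g applied to
    the CDF, and eps is the resulting total variation distance.
    """
    q, cdf, prev = [], 0.0, 0.0
    for p in probs:
        cdf = min(cdf + p, 1.0)
        g_now = g_alpha(cdf, alpha)
        q.append(g_now - prev)
        prev = g_now
    v = sum(qk * xk for qk, xk in zip(q, values))
    eps = 0.5 * sum(abs(pk - qk) for pk, qk in zip(probs, q))
    return v, eps

if __name__ == "__main__":
    # Illustration: a 100-or-0 gamble like the one discussed later in this section.
    v, eps = distorted_value([0.0, 100.0], [0.2, 0.8], alpha=0.6)
    print(round(v, 2), round(eps, 4))
```

Sweeping α (and hence ε) with such a routine and recording where the ranking of the options first flips is exactly the ε* computation used in the examples below.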
We now can use the above to revisit the problems that we introduced in the motivation section.
Let us first consider the St. Petersburg problem. Here, we use v_ε[·] to show the value function associated with ε. As we know, for ε = 0, the value of the game is infinity: v_0[X] = E[X] = ∞. However, even when we choose a very small ε, apply the transformation given in Theorem 4, and use the g function above, the value of the game drops to a very small amount. Indeed, at just ε = 0.01 (which is obtained at α = 0.056), the value drops to roughly 13 units of utility. Thus, we conclude that it is most likely not reasonable to pay more than 13 units of utility for a one-time shot at this gamble.
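To see numerically how even a tiny ε collapses this valuation, the sketch below applies a plain BE-style truncation (not the paper's specific g, whose formula is not reproduced here) to St. Petersburg payoffs of 2^k units with probability 2^−k; both the payoff convention and the truncation rule are assumptions, so the printed values are indicative only and need not match the ≈13 figure quoted above.

```python
def st_petersburg_be(eps: float, max_k: int = 200) -> float:
    """BE-style valuation of the St. Petersburg gamble.

    Keeps the payoffs 2**k (probability 2**-k) whose individual
    probability is at least eps, drops the rest, and renormalizes --
    a crude truncation stand-in for the paper's change of measure.
    """
    kept = [(2.0 ** (-k), 2.0 ** k) for k in range(1, max_k + 1)
            if 2.0 ** (-k) >= eps]
    total = sum(p for p, _ in kept)
    return sum(p * x for p, x in kept) / total

if __name__ == "__main__":
    for eps in (0.1, 0.01, 0.001):
        print(eps, round(st_petersburg_be(eps), 2))
```

Whatever the exact distortion, the qualitative picture is the same: the nominally infinite expectation is driven by outcomes of vanishing probability, and once those are down-weighted the game is worth only a handful of units.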
Algorithm 1 (Calculation of v[X] = E_Q[X]). Input: 1) the distribution of X.
Next, let us look at the legal example provided in Section II. Remember that expected utility provided the following result: E[Y] > E[X], which suggests option Y (going to the court) is preferable. Now, by applying the transformation given in Theorem 4 and using the g function above, we notice that at ε* = 0.0165, which is obtained at α = 0.135, the preference flips. That is, for any ε > 0.0165, option X, i.e., accepting the settlement, is preferable. Since ε* = 0.0165 is very small, we conclude that accepting the settlement is most likely a preferable choice.
The above examples were for scenarios where we obtained a result other than what is proposed by expected utility. Nevertheless, expected utility provides reliable answers for many problems (even in non-LLN regimes). Indeed, for many such problems, we note that there is no ε for which the preferences change. This means that the above method produces the same result as the one obtained by expected utility theory.
In other scenarios, the method produces a large ε*, which again indicates agreement with expected utility theory. For example, suppose that an agent is choosing between winning X = 95 units of utility for sure and a gamble where she wins Y = 100 utility units with probability 80% and nothing (Y = 0) with probability 20%. In such a case, expected utility theory (ε = 0) prefers X. Applying the above method, we obtain ε* = 0.15, which is a very large value, indicating that the result obtained by expected utility theory is reliable.
C. AN EXAMPLE OF A REAL-WORLD APPLICATION OF THE METHOD
Here, we would like to apply the systematic approach to the problem of angel and venture capital (VC) investment and compare the results with those obtained by a few other approaches. Suppose that an angel or a venture capital investment fund is being created to invest in technology startups. A fundamental question is how many startups the total available funds should be divided among. We will apply the proposed systematic change-of-measure-based approach to answer the question and compare the result with those from a few other approaches.
To formulate the problem, let L be the total number of companies in which the fund invests. Let X^(j), j = 1, 2, · · · , L, be the total profit from the investment in the jth company, assuming one unit of money is invested. For example, if the jth company fails, we let X^(j) = −1. On the other hand, if the investor triples the invested amount, we let X^(j) = 2. For simplicity, we assume that the fund invests equal amounts in each company.
The first question that needs to be addressed is what the distribution of the X (j) s is. There are many works on the topic, for example, [71]- [74], and we adopt a model based on [71]. Specifically, we assume that the distribution of X (j) is as shown in Figure 9. Assume that the investor requires a minimum profit of 140% over the length of the investment, which is usually a few years for each startup. This seems to be consistent with the goals set by VCs and angel investors. Let us now try to address the question of how many companies the fund should invest in.
Expected Utility Approach: If we compute the expected utility, we get E[X^(j)] = 1.52, which implies that, on average, the investor earns a profit of about 1.52 dollars for each dollar she invests in a single company. This means that if we just want to use expected utility, even a single startup suffices, as E[X^(j)] − 1.4 > 0. Needless to say, VCs and angel investors are completely aware that expected utility alone does not suffice for such risky investments. It is well known that a diversified portfolio is much less risky.
Limiting (Bounding) Loss Probability: One common approach is to require that the probability of a loss at the end of the investment be lower than a given threshold, say 10%. Assuming the investments are independent, we can calculate the required L using the distribution of the sum. The smallest L for which the probability of loss is smaller than 10% is L = 12. Thus, the investor could choose L = 12. It is worth noting that, due to the discreteness of the distribution, the loss probability is not monotonic for small L. For example, the loss probability is larger than 10% for L = 13, but it becomes smaller than 10% for all L ≥ 14. Thus, the investor may choose L ≥ 14 to be on the safe side.
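Since the Figure 9 distribution is not reproduced here, the sketch below uses a made-up discrete startup-return distribution purely as a placeholder, and it interprets a "loss" as the equally weighted portfolio returning less than the required 1.4 per invested unit; both choices are assumptions. With the actual distribution from [71] plugged in, the same routine produces the L-by-L loss-probability comparison described above.

```python
import numpy as np

def loss_probability(L, values, probs, target=1.4,
                     n_sims=200_000, seed=0):
    """Monte Carlo estimate of P(average profit per unit invested < target)
    for an equally weighted portfolio of L independent startups."""
    rng = np.random.default_rng(seed)
    draws = rng.choice(values, size=(n_sims, L), p=probs)
    return float((draws.mean(axis=1) < target).mean())

if __name__ == "__main__":
    # Hypothetical placeholder distribution (NOT the paper's Figure 9):
    # total loss, capital returned, a 3x exit, and a rare 21x exit.
    values = np.array([-1.0, 0.0, 2.0, 20.0])
    probs = np.array([0.5, 0.2, 0.2, 0.1])
    for L in (1, 5, 10, 12, 14, 20):
        print(L, round(loss_probability(L, values, probs), 3))
```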
Proposed Systematic Approach Based on Consistent Functions: We can follow the systematic approach discussed in the previous section. Specifically, we look at the valuation of the portfolio return net of the required threshold and, for each L, obtain the range of values of ε where this quantity is positive. Remember that this range gives us stability, and we want it to be larger. If we require a 0.05 tolerance (i.e., ε* = 0.05), we obtain L ≥ 12.
Therefore, the proposed systematic approach provides similar results to the approach based on limiting loss probability in this case. However, the systematic approach based on the change-of-probability measures has some desirable properties:
- First, it is a general approach that can be applied to any situation, i.e., in LLN and non-LLN regimes.
- Second, it is a systematic approach that can be easily programmed into AI decision-making.
- Third, the proposed approach is based on the entire probability distribution, while approaches such as limiting the loss probability only look at part of the distribution (e.g., the part resulting in a loss).
- Finally, as discussed before, combining different approaches seems to be a reasonable approach to probabilistic decision-making, and the proposed change-of-measure approach can be a key component in that regard.
VI. DISCUSSION AND FURTHER RESEARCH
Here, we provide some discussions and comments on a few potential avenues for further research. Decision-making under non-LLN regimes is an important area to investigate, and its importance is growing with the rise of AI. Almost all high-impact decisions are in this category. Unlike LLN regimes where expected utility theory provides a relatively satisfying answer, the problem of decision-making under non-LLN regimes is multidimensional and complex, and each specific instance might require specific consideration. Therefore, it is very unlikely that a single approach can provide all the answers. With those considerations in mind, this article aimed to provide a framework based on the change-of-probability measures to shed light on some aspects of this important area. We observed that the proposed method provides satisfactory results for some problems and was able to provide some insights. We believe the presented material potentially provides several avenues for further research. First, the proposed approach should be applied to different problems within different contexts. Undoubtedly, such efforts will reveal shortcomings, and this could help bring us closer to more comprehensive approaches. Different specific non-LLN problems in engineering, computer science, philosophy, economics, and other social sciences can be considered to test and improve the proposed approach. This could help yield more insights into the types of problems for which different approaches are more effective.
It is important to note that the proposed method addresses a single issue: the non-LLN nature of some decision-making problems. For more comprehensive decision-making, a very promising lead could be to combine the proposed methodology with other techniques, such as those developed for risk management.
Next, there is much flexibility within the proposed framework. The approach used in Sections V-A, V-B, and V-C is only one of the possible approaches under the proposed framework. Different approaches using different forms of change-of-measure operations might be more effective for some problems.
There are many problems that can be considered from a mathematical perspective. Here, we mention a few: First, it is possible to add to the required properties to make the set of change-of-measures more restrictive. Second, the stated results can be extended and improved. More properties of ε-consistent change-of-measure policies could be proved.
Finally, in the algorithms that we have provided, we focused on the cases in which there is only one change of priorities for ε* < 0.05. However, it is possible that even the second option is not stable enough, such that the preferred option changes very quickly as ε increases slightly beyond ε*. For these cases, more sophisticated strategies for optimal decision-making must be developed, which is an interesting question to investigate in further research.
VII. CONCLUSION
In this article, we considered the fundamental problem of decision-making under non-LLN regimes. We first introduced BE as a principled way to address the accepted principle of ignoring negligible probabilities. BE provided satisfactory answers and insights regarding some aspects of decision-making under non-LLN regimes. Pointing out some shortcomings of BE, we then extended the approach to a much more general framework of change-of-probability measures. The proposed theory can be considered to be a generalization of expected utility theory in two directions. First, it was shown that as the number of repetitions increases, the results derived from the proposed theory converge to those from expected utility theory. Second, when the distortion parameter, ε, is zero, the proposed theory again becomes identical to expected utility. Finally, we suggested a systematic approach to applying the theory and showed that it produces satisfactory results for some examples. The proposed paradigm can be applied to different high-impact and non-LLN decisions made by humans or AI in business, economics, medicine, and computer science, to name a few. However, due to the complexity and multidimensionality of such problems, there may be limitations in the proposed method that need to be carefully investigated. Hence, we noted that this article could potentially lead to considerable further research. | 2020-09-03T09:05:07.712Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "9dea0824056f3171b7d171179f31ee274acc43b0",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09184049.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "0fc5f1f1a5a240f75f355fc8b2133b2aebbd956a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
20613275 | pes2o/s2orc | v3-fos-license | Evolutionary Features of the 4-Mb Xq21.3 XY Homology Region Revealed by a Map at 60-kb Resolution
Forty-three yeast artificial chromosomes (YACs) from the X chromosome have been overlapped across the 4-Mb Xq21.3 region, which is homologous to a segment in Yp11.1. The region is formatted to 60-kb resolution with 57 STSs and is merged at its edges with contigs specific for X. This allows a direct comparison of marker orders and distances on X and Y. In addition to some sequence variation and possible differences in marker order, two larger evolutionary divergencies between the X and Y homologs were revealed: (1) The X homolog is interrupted by a small X-specific region detected by a 3-kb plasmid probe for locus DXS214. An STS was developed from one end of the probe, but the sequence at the other end was highly homologous to an L1 repetitive element. This suggests that the interpolation of the X-specific segment may have involved an L1-mediated event. (2) A 250-kb portion containing DXYS1 is several megabases away from the rest of the homologous DNA on the Y but is contiguous with the remainder of the homologous region on X. Marker orders are consistent with the origin of the Y-specific 250-kb region in a paracentric inversion after the initial transfer of X DNA to the Y chromosome.
In addition to several smaller regions (Affara and Ferguson-Smith 1994), three major regions of homology are shared by the human X and Y chromosomes (Fig. 1).Two of them comprise the pseudoautosomal regions at the tips of the p and q arms, respectively 2.5 and 0.35 Mb in length.The other homologous region, in Xq21.3 and Yp11.1 (Foote et al. 1992;Vollrath et al. 1992;Affara and Ferguson-Smith 1994), differs from the pseudoautosomal segments in several respects.First, it is more extensive, on the order of 4 Mb.Second, unlike the pseudoautosomal portions, the homologous X and Y segments are not thought to pair during meiosis.In the chimpanzee and lower vertebrates, this region only exists on the X chromosome (Page et al. 1984;Geldwerth et al. 1985;Koenig et al. 1985;Affara and Ferguson-Smith 1994).Hence, the XY homology region is unique to humans and presumably arose by translocation of the region from the X to the Y chromosome relatively recently on the evolutionary time-scale.
A clone map of the XY homology region on both X and Y can help to determine features of the evolution of the sex chromosomes and to initiate a comparison of their genetic content.Most of the euchromatic region of the human Y chromosome has been cloned in yeast artificial chromosomes (YACs) and formatted with sequence-tagged sites (STSs) (Foote et al. 1992;Vollrath et al. 1992).A more recent map of the Y chromosome is provided by Jones et al. (1994), although marker order within the XY homology region is essentially the same as in Foote et al. (1992).On the Y chromosome, the corresponding DNA includes STSs sY20, sY21...sY52, in numerical order across a 4-Mb region, and a relatively small segment containing DXYS1, located ∼2 Mb more proximal.
The homologous region on the X chromosome has been less well characterized.Starting with the sY STSs from the Y chromosome map and supplementing them with 32 additional STSs developed from YAC insert ends, we have developed a YAC/STS contig that encompasses the entire homology region of >4 Mb, reaching X-specific DNA at both ends.Direct comparison of marker order and spacing suggests some features of the evolution of this uniquely human XY homology region.
Construction and Internal Consistency of the Contig
Some information about the placement of a group of XY homologous markers in Xq21 had been derived from fluorescent in situ hybridization studies, and other ordering information had been inferred using a panel of deletion breakpoint hybrids, derived primarily from DNA of choroideremia patients (Page et al. 1984; Geldwerth et al. 1985; Koenig et al. 1985; Philippe et al. 1993). From such data, successive bins were found to include sY73 (DXYS1) and sY20 (DXYS42); sY24 (DXYS5); DXS214; and sY48 (DXYS12). It was thus clear that sY73 had a different relative location on X compared to Y (where it was on the other side of sY48). We therefore determined to make an X-specific map without regard to the published order on the Y and then to compare the resulting maps.
Contig construction was initiated by screening the series of sY STSs against several YAC libraries to make small contigs.New STSs were then developed from YAC insert ends for recursive screening to achieve long-range contiguity across the entire region (Kere et al. 1992), with three-to eightfold clone coverage.Four additional STSs were placed in the contig, including DXS214 (discussed below); DXS1217 (AFM288ye9), a polymorphic marker in the X-specific region at the centromeric end of the contig (Fig. 2); XSTS0460, an X chromosome STS derived from an X DNA clone; and DXS3 (Stanier et al. 1991).
In the completed contig (Fig. 2), the STS content of all the YACs is shown, and all YACs are drawn to scale.This provides estimates of the size of the complete region and of spacing between markers.The XY homology region spans 4 Mb of the 4.5 Mb in Figure 2. At the centromeric end, a 200-kb X-specific region is shown that is merged with a published 3-Mb contig spanning the candidate region for cleft lip and palate disease (CPX) (Forbes et al. 1996).At the telomeric end of the contig, the 150-kb X-specific region shown is merged with overlapping YACs that span the remainder of Xq21.3 and Xq22 (Vetrie et al. 1994;A.K. Srivastava, M. Shomaker, C. Jermak, S. Mumm, and R. Nagaraja, in prep.).Hence, the entire XY homology region is clearly covered by this contig.
In general, only a few YACs recovered from this region were chimeric and were dropped from the map of Figure 2. Consistent with this index of quality, end clones from even the largest YACs mapped to the expected locations, and the STS content in the YACs was also totally consistent, with no indications of internal deletions.Thus, in Figure 2, the lengths of the 43 YACs and the inter-STS distance of 60 kb are relatively reliable.
Features of the X Chromosome That Vary from the Y Homolog
Figure 1 Major regions of homology between the human X and Y chromosomes. For the XY homology region, the asterisk (*) and the arrow represent marker order differences between the X and Y by comparison of the physical map of the Y chromosome (see text) to the map of the X chromosome presented here. The asterisk represents the region surrounding DXYS1, and the arrow represents the general order of 33 markers (sY20-sY52). (XpPAR) Xp pseudoautosomal region; (XqPAR) Xq pseudoautosomal region.
All of the STSs derived from X-specific YAC ends
that are contained within the boundaries of the XY homology region were XY homologous.This inference is based on the comparable amplification of the STSs from human/hamster hybrid cell lines containing either the human X or Y chromosome.During the mapping, however, it became increasingly clear that not all XY homologous STSs amplify equally well from X-and Y-specific templates.Among the sY primers that showed consistent differential amplification, the most striking example was sY25, which gives robust signals from Y DNA but produces only weak signals from a variety of X-specific DNA preparations.Consequently, this STS could not be used to screen libraries for X YACs and was placed on the map by testing clones and reamplifying signals.Differential amplification is consistent with previous Southern analyses, suggesting some sequence differences between the X and Y homologs (see Discussion).
One X-specific marker, DXS214, was shown previously to lie within the boundaries of the XY homology region, between the markers sY24 and sY48 (Philippe et al. 1993). To localize the marker on the map, an STS was developed. Both ends of the insert for the probe pPA20 (DXS214) were sequenced. CENSOR (Jurka et al. 1996) analysis showed that one end has high homology to L1-type repetitive elements. Sequence from the other end showed no matches to repetitive elements using CENSOR with default parameters, and an STS was developed. The STS showed high background at lower annealing temperatures but was specific enough to screen for
cognate YACs when tested at an annealing temperature of 60°C. DXS214 was thereby placed between sY42 and sY43.
DISCUSSION
Chromosomal Rearrangements Between X and Y
When marker order and distances are compared, divergence is revealed at several levels of organization. The largest-scale difference between the X and Y homologs is the location of DNA in and around the DXYS1 (sY73) locus. From the physical maps of the Y chromosome (Foote et al. 1992; Jones et al. 1994), most of the homologous DNA is physically contiguous, containing STS markers ordered by number from sY20 to sY52; but a smaller fragment containing DXYS1 is several megabases more proximal. In contrast, for the X chromosome (Fig. 2), DXYS1 and sY20 are closer together. Both are included in three YACs from different libraries. The distance between DXYS1 and sY20 must be <300 kb (the size of yWXD5264) and is more likely on the order of 100-200 kb. This is consistent with contiguity for the DNA containing DXYS1 and the rest of the XY homology region at the sY20 end.
The DXYS1-containing region on the Y chromosome had a previously estimated minimal size of (Page et al. 1984). More recently, the maximum size was established at 280 kb (Sargent et al. 1996). From our map, the likely size of this region is on the order of 250 kb. An additional four STSs proximal to sY73 on the X chromosome would presumably move with DXYS1 to the displaced segment on the Y, though this remains to be confirmed.
The other most obvious difference between the X and Y homologs is the presence of additional DNA in the middle of the X region, containing the X-specific probe DXS214.The STS developed for this marker places it between sY42 and sY43.One end of the probe for DXS214 has high sequence homology to L1 repetitive elements; the other end appears to have unique sequence.The entire probe pA20 is ∼3 kb, but the X-specific region surrounding DXS214 could be much larger.
Concerning the origin of the X-specific DXS214 region, it might have existed on the X chromosome when the initial transfer was made to Y and was then deleted from the Y chromosome.Alternatively, the region might have been inserted into the X chromosome after the initial transfer from X to Y.In either case, the presence of the L1 element in DXS214 suggests that such an element may have been involved.Sequencing of the region and identification of the X-specific/XY homologous boundaries may provide further indications of a possible underlying mechanism.
Apart from major rearrangements, the order of markers on the X chromosome is in agreement with the binning of five markers in hybrids containing DNA from patients with X-to-autosome translocations (Philippe et al. 1993) and generally follows the order sY20-sY52 reported for the Y chromosome.There are differences, however, one significant one and two minor ones.sY50 is placed between sY43 and sY44 on the X chromosome, at a considerable distance from its position between sY49 and sY51 on the Y chromosome maps.Because the locations are supported by several independent YACs, they suggest another potential rearrangement that has arisen since the transfer of the homologous region from X to Y.In addition, the order of two pairs of markers (sY46 and sY47, sY48 and sY49) are apparently reversed on X compared to Y.The resolution and clone coverage of the two maps is somewhat different, however, so that further work will be required to determine whether these apparent differences are intrinsic to uncloned DNA of the X and Y chromosomes.
During the preparation of this manuscript, a set of four contigs covering portions of the XY homology region was published (Sargent et al. 1996).The separate contigs (0.2, 0.5, 1.5, and 1.2 Mb) total 3.4 Mb of the XY homology region.The orientation of these contigs and the order of markers were based on the X chromosome deletion panel described by Phillipe et al. (1993).The map presented here extends the analysis by providing uninterrupted clone coverage of the entire region.Thus, gaps are filled, totaling >0.6 Mb, and both ends of the XY homology region are merged with neighboring chromosomal DNA.Furthermore, these results are based solely on STS content of YACs, so that the contig alignment determined by deletion panels is an independent verification rather than the sole source of order.The provision of complete DNA coverage, in turn, allows for an accurate estimate of the 4-Mb extent of the region and for comparison of marker order on the X and Y chromosomes.Consequently, it is clear that the marker sY73 (DXYS1) is contiguous with the remainder of the XY homology region on the X chromosome, with no other intervening X-specific material; and by comparison to the maps of the Y chromosome, that sY73 (DXYS1) is separate from the remainder of the XY homology region (Foote et al. 1992;Vollrath et al. 1992;Jones et al. 1994).
At a finer sequence level, comparisons between the X and Y chromosomes in the region have revealed 98% identity for the probe St25 (Koenig et al. 1985) and >99% for DXYS1 (Page et al. 1984).The ability of the STSs across the region to amplify from X or Y is consistent with high overall homology (98%-99%) across the region.From our results, however, the difference in efficiency of amplification from X and Y templates with some STSs is an indication of possibly greater local sequence variation.Preliminary direct sequence analysis for STSs in this region shows levels of identity for X and Y ranging from 96% (for sY25) to 100% (for sY36, sY51) (S.Mumm and D. Schlessinger, in prep.).It will be interesting to determine whether the most highly conserved regions are constrained by coding or other functions.
Evolution and Possible Gene Content of the XY Homology Region
High sequence homology is consistent with the notion of an evolutionarily recent transposition of this region from the X to the Y chromosome. The sex chromosomes would then have diverged somewhat, with the divergence maintained by the absence of chiasmata in this region during male meiosis. Based on the initial common probe content (Page et al. 1984; Geldwerth et al. 1985; Koenig et al. 1985), the physical map of the Y (Foote et al. 1992; Vollrath et al. 1992; Jones et al. 1994), and now the physical map of the X, Figure 3 shows a schematic of the possible course of a potential mechanism for the divergence of X and Y in the region. The presence of the XY homology region solely on the X chromosome of apes indicates its origin as X-specific on the hominid ancestor of apes and humans (Fig. 3a). At some point after divergence of apes and humans, the region was duplicated and transposed from Xq21.3 to Yp11.1 in the human lineage (Fig. 3b). A paracentric inversion on Y would have separated DXYS1 from the rest of the region, and changes in either X or Y could have led to the discordant positions of sY50, and so forth (Fig. 3c,d). The X-specific region at DXS214 could then be accounted for by an insertion into the X chromosome or a deletion from the Y, after the original transposition of the region from the X to the Y chromosome (Fig. 3e). The presence of DXS214 and the displacement of DXYS1 on Y chromosomes in all human populations suggest that those changes likely arose before a bottleneck in human evolution fixed them in the genome.
The availability of the complete physical maps of both X and Y should facilitate the systematic analysis of possible gene content and comparative structure.Although no genes from the XY homology region have been reported yet, we believe the area to be transcriptionally active.Several of the YAC-end STSs generate signals by RT-PCR and are positive on top-level pools from cDNA libraries.In particular, sWXD1358 is positive by RT-PCR and from pools for a teratocarcinoma cDNA library (S.Mumm and M. D'Esposito, unpubl.).Because human males, compared to other primate males, have this region on both X and Y, they could have twice the dosage of any constituent genes.If dosage compensation were important for such genes, they would become possible candidates for involvement in Turner syndrome-like pathology (Zinn et al. 1993).Genes in the XY homology region may also be candidates for premature ovarian failure with breakpoints in Xq21 (Forabosco et al. 1979).
STSs
The set of STSs developed from the Y chromosome were obtained from David Page (Vollrath et al. 1992).STSs from YAC ends were made as described in Kere et al. (1992) and use either TNK50 or TNK100 buffer as stated in Table 1 (Blanchard et al. 1993).The STS for DXS3 was described in Stanier et al. (1991).Marker DXS214 (pPA20) was kindly provided by Christophe Philippe and Frans Cremers (University Hospital Nijmegen, The Netherlands).An STS for DXS214 was developed by partially sequencing pPA20 using cycle sequencing (Srivastava et al. 1992).Both ends of the insert were sequenced using pBR322 HindIII forward and reverse primers.STS primers for DXS214 were designed using the OSP program (Hillier and Green 1991).All relevant STS information is available in the Genome Data Base and via the Genome Center World Wide Web page (http://genome.wustl.edu/cgm/cgm.html).
YACs and Screening
YACs were obtained from a variety of libraries, including the E library (Nagaraja et al. 1994), the F library (Lee et al. 1992), the Zeneca, Inc. library (Riley et al. 1990), and the Centre d'Etude du Polymorphisme Humain (CEPH) library (Albertsen et al. 1990).All libraries were screened with STSs, and the cognate YACs recovered were sized by pulsed-field gel electrophoresis.YACs are described with additional information in the Genome Data Base as well as the Genome Center web page; all are available from the American Type Culture Collection (ATCC).
Table 1. STS information (columns: sWXD no., alias, primer A, primer B, size, buffer). STSs are identified by an sWXD number (for the Washington University database) and an alias (used in Fig. 2). Most of the aliases are derived from YAC ends and are identified by a number (from the YAC) and an L (for left end) or R (for right end). Several STSs were spaced too closely to be discriminated from neighboring ones and are thus not illustrated in Fig. 2. These include sWXD1282, which is near sY38; sWXD1288, which is near sY32; and sWXD1358 and sWXD1359, which are near 1520R.
To ensure that the clones in the contig were from the X chromosome, the libraries initially screened were all derived from female DNA. Some screenings were also done with one collection from a male source, the CEPH mega-YAC collection; but care was taken to recover only X chromosome-specific YACs (e.g., by screening with the X-specific probe DXS214).
Mapping started by using the sY primers to isolate a contingent of clones and then determining the content of neighboring STSs in colony-purified YACs. Additional STSs from YAC end inserts were then used in further screenings to "walk" to overlapping clones until subcontigs merged across the entire region. Markers DXS1217, XSTS0460, and DXS3 were screened as STSs in the Genome Center and were incorporated into the XY homology region when they identified cognate YACs already in the contig.
Figure 2
Figure 2 YAC/STS map of the Xq21 XY homology region.(a) Markers, including X-specific markers (DXS1217, DXS214, and DXS3) and XY homologous markers corresponding to sY markers shown below.(b) Megabase scale.(c)Brackets show corresponding regions of homology on the Y chromosome, including those that are Yp proximal (represented by asterisks in Figs.1 and 3) and Yp distal (represented by arrows in Figs.1 and 3).(d) Differentially shaded bar represents X-specific regions (light shading) and XY homologous regions (dark shading).(e) Order of markers, identified by common names, from centromeric to telomeric.Markers shown as a number followed by an L or R identify YAC-end STSs; the number is the yWXD Washington University database accession number of the YAC; L or R indicates the left or right end of the YAC.XSTS0460 is a random X-specific marker.(f) YACs and STS content.YACs are drawn to scale and are identified by a yWXD number followed by either a letter identifying the library of origin (E or F libraries) or the complete library location [(I) ICI library; (M) CEPH mega-YAC library].(᭺) The origin of the YAC-end STSs; (᭹) STS content in the YACs.Several STSs were spaced too closely to be discriminated from neighboring STSs and have not been illustrated on the map.These include sY30, sY31, sY33, sY34, sY36, and sY37, which have been placed in general numeric order in relation to the other sY markers.The marker sY25 could not be placed accurately on the map because of weak signals from X-specific templates but is placed tentatively between sY24 and sY26 (see text).The size of the X-specific region at DXS214 is unknown and is not necessarily drawn to scale.
Figure 3
Figure 3 Possible events in the evolution of the XY homology region.(a) Structure of the X and Y chromosomes of the human precursor, where the region is unique to the X chromosome at Xq21.As in Figs. 1 and 2, the asterisk (*) represents the region surrounding DXYS1 and the arrow represents the general order of markers sY20-sY52.(b) This region was then transposed to the Y chromosome at Yp11.(c) A paracentric inversion occurred on the Y chromosome to generate the structure in d, where the DXYS1 region was separated from the remainder of the XY homology region.(e) The X-specific region at DXS214 originated as an insertion into X or a deletion from Y, potentially via an L1-mediated mechanism (see text). | 2017-09-07T05:03:26.822Z | 1997-04-01T00:00:00.000 | {
"year": 1997,
"sha1": "da00c26a2967d8c5a7a2ecde141be4e6b18de341",
"oa_license": "CCBYNC",
"oa_url": "https://genome.cshlp.org/content/7/4/307.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "da00c26a2967d8c5a7a2ecde141be4e6b18de341",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
230537681 | pes2o/s2orc | v3-fos-license | Discourse, Critique and Subject in Vocational Language Education in Germany: An Outline of the Concept of Critical Foreign Language Didactics
The paper offers an attempt at including the critical-discursive perspective in the reflection on vocational language education in Germany; its aim is to outline a base for critical foreign language didactics drawing on the critical and post-Foucauldian discourse analysis. The first part of the paper forms a reconstruction of the enmeshment of vocational language education in a number of contexts (political, migrational, and integrative). Constituting a transformative variant of language didactics and examining vocational language education, critical foreign language didactics will be perceived as a research program pertaining to the reflection on education and pedagogical activity (under the framework of critical pedagogy), teaching and learning (from the standpoint of the critical trend in general didactics), and language education and its specificity within teaching and learning particular languages (in the context of foreign language didactics). The final part of the paper will present methodological implications by indicating potential directions for – and levels of – vocational language education analysis. It will also offer an attempt at their further clarification aimed at a critical analysis of subject formation and forms of subjectification.
The Contexts of Vocational Language Education in Germany
Amadeus Hempel, a member of the board of IBH (Intercultural Education Hamburg), believes that politics is currently very much interested in making refugees quickly undertake employment, and not in supporting them on their way to university. It is indicated by the regulation on vocational language support based on the example of the German language ('Verordnung über die berufsbezogene Deutschsprachförderung'), abbreviated as DeuFöV, which has been in force for a solid year now. DeuFöV courses are to increase the refugees' chances on the labor and training markets.
Hempel sees it as follows: half a million of people ought to be promptly incorporated into the labor market. 1 The cited passage, concerned with German language courses in the vocational context, points to several issues that are crucial for vocational language education 2 in Germany. First and foremost, 1 "Cramming for the dream of university studies," Marianne Wellershoff, 2 Sept. 2017, Spiegel Panorama, https://www.spiegel. de/panorama/gesellschaft/fluechtlingsheim-am-grenzweg-safouh-hussain-macht-die-abschlusspruefung-im-sprachkurs-a-1165734.html, author's own translation). "IBH-Vorstand Amadeus Hempel meint, dass es in der Politik inzwischen ein großes Interesse daran gebe, Flüchtlinge eher schnell in Arbeit zu bringen, als sie beim Weg ins Hochschulstudium zu unterstützen. Indiz ist für ihn die "Verordnung über die berufsbezogene Deutschsprachförderung," kurz DeuFöV, die seit gut einem Jahr in Kraft ist. Diese DeuFöV-Kurse sollen die Chancen der Flüchtlinge auf dem Arbeits-und Ausbildungsmarkt verbessern." Hempel sieht es so: "Eine halbe Million Menschen soll schnell in den Arbeitsmarkt integriert werden." 2 As an element of both language education and education as such, vocational language education will denote on the operationalizational level of an offer of courses implemented under the above-mentioned resolution on the vocational language support based on the example of the German language ("Verordnung über die berufsbezogene Deutschsprachförderung"). On the conceptual level, its particular determinant will be preparation of migrants for entering the (specifically understood) labor market and, by the same token, it will denote the realization of a specific vocational discourse immersed in the development of language competencies in response to defined and empirically verified needs (cf. Vogt 2011;Huhta et al. 2013). it is about its enmeshment in the political context, which in the case of Germany is tinged with specifically understood multiculturalism. There is nothing revelatory about this observation. What is cognitively interesting, however, is the reconstruction of mechanisms of the said enmeshment. Education is, has been, and will always be an element of some logic of a political nature. In the framework of vocational language education directed at refugees -or, more broadly, migrants -this logic appears to play one of the key roles. It determines the preferred direction for the development of the vocational language offer through being firmly anchored in state institutions (e.g. the Federal Office for Migration and Refugees, BAMF) and, what is essential, it remains in line with the expectations of the greater part of the host society, which needs to be linked with the labor market. The strong feedback loopespecially in the context of research on integrationbetween the vocational language education and the labor market is not without significance for various stages of the implementation and execution of language education. Apparently, it is the labor market (specifically understood, i.e. in neoliberal categories) that will determine the concepts and contents of language education. There is nothing wrong about it as long as such a perspective does not become devoid of nuance, and the recipients of the variety of designed solutions -together with their accompanying migrational, cultural, humanitarian or, finally, political contexts -do not assume the character of a silent voice. 
Consequently, vocational language education takes the form of an instrument for specifically understood integration. This wording denotes a change in perspective with regard to the critique On the level of the field of study, vocational language education is embedded in the framework of reflection, concept and program solutions of foreign language didactics, which, in this context, will require to be included in discursive and critical perspectives. These considerations help to understand changes in the attitude to (broadly understood) language education in Germany as well as the specificity of vocational language education. The changes mostly concern moving away from language education predominantly 'serving' the cognitive-cultural in- illustration of a specific empirical area (vocative language education in Germany) and its broad contextualization is connected with the critical-discursive approach, which is preferred in this area as a research program due to its problem-centric dimension. In the following part of the paper, vocational language education will be placed within the framework of foreign language didactics, with all its indispensable clarifications and modifications. This is aimed at posing questions about the possibility and validity of its broad opening for critical-discursive reflection, and drawing the foundation for critical foreign language didactics understood as a research program. the "life subjected to the existing authorities, their rationality and their interests" (Szkudlarek 2010:18) that -importantly for the critical trend -is 'sold' in this viewpoint as something natural or obvious. The goal is to unmask the vision of the socio-economic reality constructed as obviousness. In the perspective delineated in such a manner, teachers will be the representatives of the system, while learners will be its passive recipients. It follows from the above description that in the context of education the system will determine the economic reproduction more strongly than the cultural reproduction. These models will serve as an invi- Moreover, it introduces critical thought to the reflection on education in the context of other disciplines, thereby heading even further toward establishing detailed contexts and elements of vocational language education. One such discipline will be general didactics, which -due to academic and conceptual ties with critical pedagogy -opened up to the critical approach.
Language Education Within Academic
General didactics prompts the reflection on teaching and, increasingly more often, also learning.
Thus, in relation to pedagogy, it is a discipline that narrows down the area of research, if pedagogy is to be defined broadly as the academic discipline of pedagogical activity. Klus-Stańska Similarly to critical pedagogy, critical didactics -being consistent in perceiving itself as a discipline immersed in the transformative paradigm -will also set change as its goal, stating this change as "rec- Moving on to track constructivism, which opens the critical perspective within foreign language didactics, one can find references to that thought. They also indicate, against that background, certain new or newly defined aspects of the didactic process.
Żylińska (2009) Analysis) and it is defined as a "communicative event" which comprises three elements: cognitive processes, the use of language, and interactions under specific socio-cultural conditions (Lewicka 2007:109-110). According to that concept, discourse is understood through the place where these phenomena of verbal processes are articulated. In glottodidactics with the discursive angle, it denotes the inclusion into its spectrum of the development of the discursively understood "communicative competence," defined as "the ability to participate in the discourse." Here it is necessary to pose the first ques-tion -why is not discursive competence taken into account in this context? Secondly, the limitation of the category of discourse to the process of language learning only (developing the "communicative competence") -an an unmistakably important, yet not the sole element of the whole glottodidactic process -appears not to exhaust its potential, especially when one takes a closer look at theoretical, methodological, and methodical grounds of the interdisciplinarily perceived studies on discourse, including the Critical Discourse Analysis itself. The following part of the paper will move along that trail in search for critically-and, therefore, discursively-oriented foreign language didactics.
A separate topic thematized within the preparation and evaluation of glottodidactic aids is the hidden curriculum (Stankiewicz and Żurek 2016).
What is interesting is that the concept of hidden
Critical Discourse Analysis and Vocational Language Education: Theoretical Assumptions
The author of this paper takes the view that the category merging the multidimensionality of different forms of language education through its integrating character is the category of discourse and its whole theoretical, methodological, and methodical background in the form of (inter)disciplinary studies on discourse, mostly those that draw on Critical Discourse Analysis (CDA) and Post-Foucauldian Discourse Analysis (PDA). Therefore, when considering foreign language didactics in the above-outlined context, one must begin with the category that is crucial for this perspective.
Undoubtedly, it will be the category of discourse 6 ; a category only seemingly ambiguous, as it will appropriately -i.e. being constructed in connection with specific theoretical traditions -attempt In modern reflections, the notion of "glottodidactics" is increasingly often treated as the synonym for the term "foreign language didactics" (cf. Gebal 2019). This symbiosis plays a discipline-enhancing role for the whole area of studies on language learning and teaching, reinforcing its academic autonomy.
Discourse, Critique and Subject in Vocational Language Education in Germany: An Outline of the Concept of Critical Foreign Language Didactics of the glottodidactic system, which is further on extended to cover social topics such as: objective reality, social environment, school environment, and state education policy, which form a nod to the constructing of the didactic process in a discursive manner, taking into account its social context. In his model, Gębal (2013) Returning to the question that is fundamental here, namely the possibility to include the critical-discursive tradition in reflections on vocational language education, it is necessary to evoke its main assumptions, which Duszak and Fairclough (2008:15-18) specify synthetically -in a bird's eye view on a number of variants of CDA 8 -in the following way: • "CDA deals with social processes and problems"; • "Discourse is the key factor in the construction of social life"; • "Discourse is an important element of power relations"; • "Critical discourse analysis contains an element of a detailed text analysis." 9 This means that CDA is a research program with the problem-centric slant, drawing on constructivism, stressing the power-forming aspect of discourse and the significance of an in-depth analysis of texts treated in the categories of discursive fragments and traces of broader social processes.
In the course of logical operations, this will denote treating vocational language education as: 1) a social process and problem, which will imply its broad contextualization; 2) discourse which (co)creates the social reality; 3) an element reproducing complex power relations; and 4) the phenomenon that is reconstructed through texts.
As a consequence of the adopted assumptions, one distinguishes three levels of vocational language education in terms of discursive categories: • the level connected with institutions responsible for vocational language education (the discourse of vocational language education); • the level connected with non-education institutions, speaking or writing about vocational language education (the discourse about vocational language education); • the level connected with those teaching and those learning [the discourse of subjects of vocational language education (cf. Kumięga and Nawracka 2020)].
When related to vocational language education together with the distinction of its three levels, CDA assumptions outlined above send one back to the question of the specificity of a particular empirical area that is to be subjected to analysis. This area is the didactic reality in Germany in connection with migrational processes.
The Empirical Context: Vocational-Language Education in Germany
It is now necessary to have a closer look at the offer of language education in Germany which is direct- • lower significance of the social origin and status; • more emphasis on individual behaviors (as opposed to behaviors shaped by social groups); 14 When examining the policy of German publishers, one can distinguish three types of didactic aids, i.e. coursebooks concurrently 'attending to' the specialist language and general language in everyday, colloquial use; communicative aids and activity-oriented tools directed at developing strategic competencies in the area of specialist communication without particular regard for specific variants of specialist languages; and, finally, specialist materials dedicated to particular occupations, e.g. medical. On the level of state institutions as well as publishing houses themselves, there is a visible preference for general language education oriented at broadly understood professional needs. At this point, one allows CDA, supplemented with the post-Foucauldian perspective, to don the robes of analyses interested in the category of the subject.
In a direct reference to the 'entrepreneur of the self,' the subject will be understood in Foucauldian terms, i.e., as noted by Nowicka (2016), not as "an agent of an autonomous, free action, but as a personage that is subjugated and, therefore, subjected to external and internal control." Beside knowledge and power, the subject will be one of the key aspects of the Foucauldian understanding of discourse. Returning to the above-mentioned ways of understanding the category of discourse in research practices, this perspective will fall into understanding discourse as "knowledge selection and process of negotiating collective meanings" (Czachur 2020:138).
To consider in more detail turning the subject into one of the key topics of the intended research, it is necessary to clarify the ways in which it is to be understood and methodologically represented. Two categories will become helpful to this end, namely subject formation and forms of subjectification.
Forms of subjectification concern the manner in
Critical Foreign Language Didactics: Perspectives
The above considerations were aimed at a wide contextualization of vocational language education in Germany in relation to the multi-perspective character of the discourse category, with which this education is being connected (indicating its political, social, economic, and integrational-inclusive character, as well as its knowledge-, power-. subjectivity-, and identity-forming character). Drawing on the critical-discursive and post-Foucauldian approaches, it led the author to profile the discourse analy- is included in the glottodidactic reflection; secondly, when all institutions and players engaged in vocational language education, as well as all elements of the didactic process, are treated as entities (re)producing social systems; and, thirdly and lastly, when critical foreign language didactics is treated as a research program of a transformative character. This means that critical foreign language didactics will form another critical voice in the discussion on education. It will pose questions about the boundaries of government and, in its vocational language context, it will reverse the trend of the critical pedagogy oriented at the critique from the liberal perspective, revealing its traps and limitations. This critique will not be concerned with total criticism, demanding absolute rejection of vocational language education based on the neoliberal discourse, but it will turn attention toward: extricating the nuances of that discourse, the necessity to notice its 'naturalization,' and the need to allow for other, probably less 'popular' perspectives in this extremely significant educational context. Objections regarding the creation of the pretence of emancipation, present in the critical commentary on the critical trends, will become more nuanced along with the references to the Foucauldian understanding of the critique whenever the discussion will be concerned with how not to be governed like that, by that, in the name of those principles, with such an such an objective in mind and by means of such producers, not like that, not for that, not by them (Foucault 1990:36-39, as cited in Nowicka 2016). As an important consequence, the 16 The multiperspectivity of the category of discourse and the multiplicity of its definitions will also open a possibility to construct it differently for the needs of a further broadening of this viewpoint within research on various forms of education.
Łukasz Kumięga aforesaid critique might be conducted on three levels: textual, social, and prognostic, i.e. firstly, when it is related to revealing inconsistencies or internal contradictions of the analysed texts while simultaneously pointing out their functions; secondly, when it reconstructs manipulative and persuasive strategies in discourses; and, thirdly, when it identifies "moments problematic from the point of view of government" (cf. Kopytowska and Kumięga 2017).
In this triad, one can discern a special function of To conclude, critical foreign language didactics will bridge an important gap in the humanistic reflection on vocational language education, significantly extending the insufficient presence of references to this tradition of thinking in foreign language didactics. The perspectives illustrated in this paper constitute the first attempt at drafting the foundation for foreign language didactics profiled in such a manner, and they form an invitation to further, indepth reflections that deal with this very complex academic and empirical area. | 2020-12-17T09:11:00.756Z | 2020-12-16T00:00:00.000 | {
"year": 2020,
"sha1": "2912fd198e095ac66b61c3a1b2c3dabaa5d9ac13",
"oa_license": "CCBYSA",
"oa_url": "https://czasopisma.uni.lodz.pl/socjak/article/download/8880/8685",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7279310741becee6e50d7c0cf9b9940a21711c0d",
"s2fieldsofstudy": [
"Education",
"Linguistics",
"Political Science"
],
"extfieldsofstudy": [
"Sociology"
]
} |
249967125 | pes2o/s2orc | v3-fos-license | Carbon–Nitrogen–Sulfur-Related Microbial Taxa and Genes Maintained the Stability of Microbial Communities in Coals
Coal microbes are the predominant form of life in the subsurface ecosystem, which play a vital role in biogeochemical cycles. However, the systematic information about carbon–nitrogen–sulfur (C–N–S)-related microbial communities in coal seams is limited. In this study, 16S rRNA gene data from a total of 93 microbial communities in coals were collected for meta-analysis. The results showed that 718 functional genera were related to the C–N–S cycle, wherein N2 fixation, denitrification, and C degradation groups dominated in relative abundance, Chao1 richness, Shannon diversity, and niche width. Genus Pseudomonas having the most C–N–S-related functions showed the highest relative abundance, and genus Herbaspirillum with a higher abundance participated in C degradation, CH4 oxidation, N2 fixation, ammoxidation, and denitrification. Such Herbaspirillum was a core genus in the co-occurrence network of microbial prokaryotes and showed higher levels in weight degree, betweenness centrality, and eigenvector centrality. In addition, most of the methanogens could fix N2 and dominated in the N2 fixation groups. Among them, genera Methanoculleus and Methanosaeta showed higher levels in the betweenness centrality index. In addition, the genus Clostridium was linked to the methanogenesis co-occurrence network module. In parallel, the S reduction gene was present in the highest total relative abundance of genes, followed by the C degradation and the denitrification genes, and S genes (especially cys genes) were the main genes linked to the co-occurrence network of the C–N–S-related genes. In summary, this study strengthened our knowledge regarding the C–N–S-related coal microbial communities, which is of great significance in understanding the microbial ecology and geochemical cycle of coals.
INTRODUCTION
Coal is the most vital fossil fuel on the Earth. 1,2 The formation of coals is driven by geological events, 3 geologic settings, 4 and microorganisms. 5 Among them, microbes are the predominant form of life in the subsurface ecosystem including coals and play a vital role in biogeochemical cycles, 1 which have accompanied the evolution of coals over tens to hundreds of millions of years. 6 Microbial activities run throughout the whole process from humus deposition to anthracite formation. 6 During the humus deposition period, anaerobic or facultative anaerobic bacteria participated in the decomposition of peat or low-rank coals in the anaerobic environment; and in the coalification metamorphic stage, microbes in surface water and groundwater infiltrated into coal seams by surface uplifting, which could act on n-alkanes and other organic matters in coals. Previous studies observed that some organic substances in coals could be degraded by a variety of microorganisms following a quasistep-by-step biodegradation process. 1,6 The macromolecular substances of coals and/or peats were degraded into single molecules and oligomers by hydrolysis and fermentation bacteria at first, and then some intermediate products were generated by different acidifying bacteria, acetic acid-producing bacteria, and hydrogen-producing bacteria. The resulting products could further generate methane under the action of methanogens. However, these studies mainly monitored the biomarkers in coals, but the important factors influencing the coal biodegradation 7 and the interaction between microorganisms have received extensive attention 8,9 merely in recent years.
Except for these above C metabolic processes, the biogeochemical processes of N and S also have an impact on the coal biodegradation. Guo et al. 10 detected the microbial taxa related to N metabolism in coal seams, including N 2 fixing taxa and denitrifying taxa. The participation of these microorganisms in N metabolism can increase the N availability of coal seam ecosystems because bioavailable N is a major limiting factor in the extreme oligotrophic environ-ments 11 including coal seams. Shi et al. 12 found that microbial N metabolism had an effect on organic matter decomposition in coals such as the decomposition of cellulose and carbohydrate.
In contrast, S in coals is the most notorious environmental pollutant, and its geochemical processes are closely related to the deposition and formation of coals. 13 For example, most of the S in coals was derived from the seawater submerged in the peat swamp during the peat accumulation process. A large amount of seawater sulfate diffused into the bottom peat and was reduced to H 2 S, S, and polysulfides by microorganisms. 14 In addition, the release of H 2 S due to sulfate reduction would be detrimental to the methanogenesis process during the coal biodegradation. 15 The process of anaerobic fermentation of coals might also be affected by degraded intermediates and final products (such as sulfides), whose high concentrations affected pH, disrupted cell membranes, prevented protein synthesis, altered hydrogen partial pressure, reduced bioavailability of trace elements, and hindered mass transfer, and thereby disrupted the anaerobic degradation chain. 16 Among these inhibitory compounds, sulfide is formed by the microbial reduction of sulfate and the degradation of S-containing organic matter under anaerobic conditions, and microorganisms involved in the sulfate reduction could compete with other anaerobic bacteria in an environment with low redox potential. 16,17 On the other hand, the microbial S metabolism process is not necessarily completely detrimental to the biogeochemical processes of coals. Among the bacterial groups, there are a large number of microbial groups closely related to Desulfomonas. These sulfate-reducing bacteria can use kerogen as an end-point autoacceptor or shuttle to oxidize acetic acid or other simple fatty acids, which is the key to the degradation of organic matter in coal seams. 18 Mesléet al. 19 pointed that the depletion of bitumen by solvent extraction resulted in an increase in methane volume in some shales, indicating the methanogenic potential of the shale matrix. These shaleassociated microbial communities were able to produce more acetate when grown on the fulvic acid fraction than on ether extracts of the same shale, wherein the microbes were grown under sulfate-reducing conditions rather than under methaneproducing conditions. 20 In summary, C−N−S-related microbial communities play an important role in the decomposition of coal organic matter and coal evolution. Therefore, it is of great significance to systematically describe the C−N−S-related microbial communities in coal seams. In this study, 16S rRNA data of microbial composition in coal samples from the NCBI database were extracted and reanalyzed. The study aims to (1) describe the levels of the C−N−S-related microbial communities and functional genes in coal seams, (2) explore the important role of C−N−S-related groups in the microbial community, and (3) clarify the correlation among C−N−Srelated groups in coal seams.
MATERIALS AND METHODS
2.1. Data Sets. Until February 2022, literature retrieval was conducted through the Web of Science database, and published papers 5,8,21−30 on "coal" and "microbial communities" were retrieved. The fastQ files corresponding to the accession numbers of the 16S rRNA gene data from coal samples were downloaded. In total, 16S rRNA gene data from 93 microbial communities in coals were collected for meta-analysis. The detailed sample information is shown in Table S1.
2.2. Bioinformatics Analysis. For microbial community (bacteria and archaea) analysis, the reads from 16S genes were merged and the raw sequences were quality filtered using the QIIME pipeline. The chimeric sequences were identified by the "identify_chimeric_seqs.py" command and removed with the "filter_fasta.py" command according to the UCHIME algorithm. The selection and taxonomic assignment of operational taxonomic units (OTUs) were performed based on the SILVA reference data (version 128) at 97% similarity.
Reads that did not align to the anticipated region of reference alignment were removed as chimeras by the UCHIME algorithm. Also, reads that were classified as "chloroplast", "mitochondria", or "unassigned" were removed.
Predictive functional abundances were inferred with PICRUSt2 (Phylogenetic Investigation of Communities by Reconstruction of Unobserved States) using "picrust2_pipeline.py" (https://github.com/picrust/picrust2), 31 which performed four key steps: sequence placement, hidden-state prediction of genomes, metagenome prediction, and pathway-level predictions. In addition, the output file of predicted Enzyme Commission (EC) number copy numbers was used to screen the C−N−S-related microbial genera.
2.3. Data Analysis. To avoid differences in amplified fragments among samples, microbial analysis was performed at the genus level according to the classification. Shannon diversity and Chao1 richness were determined from the relative abundances of genera. In addition, Bray−Curtis dissimilarity was calculated from the genus-level relative abundance matrix with the vegan package in R v4.1.2, and nonmetric multidimensional scaling (NMDS) was applied to the Bray−Curtis dissimilarity with vegan's metaMDS function. Spearman's correlations between the Shannon diversity, NMDS1, and the relative abundance of C−N−S-related groups were computed with the PerformanceAnalytics package in R. Random forest machine learning was performed with the caret and randomForest packages in R. The C−N−S-related groups with nonzero abundance values in at least 10% of the samples were preselected and z-score standardized prior to model training. Network analysis was used to explore co-occurrence patterns of microbial groups with the ggClusterNet package in R, based on Spearman's correlations between the relative abundances of the C−N−S-related genera and genes. Also, Gephi (v0.9.1) and Cytoscape (v3.9.1) were used to visualize the co-occurrence networks for the C−N−S-related microbial communities and the C−N−S-related genes, respectively.
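As a rough illustration of these genus-level computations, the following R sketch reproduces the kind of calls involved; the object names (a samples × genera relative-abundance matrix "genus_tab" and a per-sample functional-group abundance vector "nfix_abund") are assumptions for illustration, not objects from the study, and the authors' exact scripts may differ.

library(vegan)

# "genus_tab": assumed samples x genera relative-abundance matrix
shannon <- diversity(genus_tab, index = "shannon")        # Shannon diversity
counts  <- round(genus_tab * 1e4)                         # estimateR expects count-like integers
chao1   <- estimateR(counts)["S.chao1", ]                 # Chao1 richness
bray    <- vegdist(genus_tab, method = "bray")            # Bray-Curtis dissimilarity
nmds    <- metaMDS(genus_tab, distance = "bray", k = 2, trymax = 100)
nmds1   <- scores(nmds, display = "sites")[, 1]           # first NMDS axis

# Spearman correlation between overall diversity and one functional group
cor.test(shannon, nfix_abund, method = "spearman")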
RESULTS
3.1. Relative Abundance and Diversity of C−N−S-Related Microbial Communities. Based on the predicted EC number for each OTUs in the coal microbial communities, a total of 718 functional genera related to C (C degradation, methanogenesis, and CH 4 oxidation), N (N 2 fixation, ammoxidation, denitrification, and dissimilatory nitrate reduction to ammonium (DNRA)) and S (S reduction) cycles were detected (Table S1). Among the relative abundance of eight microbial groups (Figure 1a), the relative abundance of N 2 fixation taxa was the highest (43.85 ± 2.35%, ranging from 2.40 to 97.08%), followed by denitrification taxa (41.49 ± 2.36%, ranging from 0.68 to 96.04%), and C degradation taxa (32.58 ± 2.25%, ranging from 1.34 to 94.24%). The relative abundance of methanogenesis taxa was the lowest (5.77 ± 1.54%, ranging from 0.00 to 63.98%). In addition, the regularity of the α diversity indexes and niche width was slightly different from that of relative abundance (Figure 1). The Chao1 richness, Shannon diversity, and niche width of denitrification taxa were the highest (71.91 ± 6.74, 2.34 ± 0.12, and 3.84 ± 0.40, respectively).
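The Methods do not specify how the niche width reported here was calculated; one common choice is Levins' niche breadth, sketched below in R purely as an assumed illustration (the matrix "group_tab", groups × samples, is hypothetical).

# Levins' niche breadth B = 1 / sum(p_j^2), where p_j is the share of a
# group's total abundance found in sample j; larger B = wider niche.
levins_breadth <- function(x) {
  p <- x / sum(x)
  1 / sum(p^2)
}
niche_width <- apply(group_tab, 1, levins_breadth)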
3.2. Main C−N−S-Related Microbial Genera and Genes. Among 718 functional genera, many microbial communities participated in multiple element cycles (Table S2). For example, the vast majority of methanogens can fix N 2 . The top 20 C−N−S-related functional genera are shown in Figure 2. Among them, the genus Pseudomonas has the most C−N−S functions except methanogenesis and showed the highest relative abundance (12.02 ± 1.97%, ranging from 0.00 to 89.15%). In addition, the genus Herbaspirillum participated in C degradation, CH 4 oxidation, N 2 fixation, ammoxidation, and denitrification, which accounted for 8.84 ± 1.79% (ranging from 0.00 to 86.34%). The methanogen genera such as Methanobacterium, Methanosaeta, Methanolobus, Methanosarcina, Methanobrevibacter, and Candidatus Methanoperedens were also the main N 2 fixation groups in coals. In addition, some common anaerobic taxa such as Clostridium sensu stricto 1 were also widely involved in the C−N−S cycles (C degradation, N 2 fixation, and S reduction) of the coal seam environment.
3.3. Effect of the C−N−S-Related Group on the Total Microbial Communities. The diagnostic value of the microbiome (n = 93) in coals was further evaluated by applying random forest machine learning classification and regression analyses with the diversity indexes, relative abundances, and gene abundances of functional genera. The effectiveness of functional genera in reducing uncertainty and variance within the machine learning algorithm was measured by the mean decrease in accuracy for classification and the increase in mean-squared error (%IncMSE) for regression (Figure 4). The most important diversity indexes of functional taxa for microbial diversity (Shannon diversity and NMDS1) mainly included the diversity (abundance, Shannon diversity, and NMDS1) of denitrification, DNRA, ammoxidation, N 2 fixation, and C degradation. The most important genes and genera for microbial Shannon diversity mainly included DNRA genes (nrfA and nrfH), S reduction genes (dsrB, cysD, cysH, dsrA, and sir), N 2 fixation genes (nifH, nifD, and nifK), and Enhydrobacter. The most important genes and genera for microbial NMDS1 mainly included denitrification genes (norB, norC, nosZ, and nirS), S reduction genes (cysNC and nirS), and C degradation genes (MAN2C1, celF, and chitinase). In addition, the Shannon diversity and NMDS1 of microbial communities were significantly related to the Shannon diversity of denitrification communities and NMDS1 of DNRA communities, respectively (Figure 4b,d).
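A minimal R sketch of the regression step behind Figure 4 is given below; "feat_z" (the z-scored functional-group features) and the "shannon" response are assumed inputs, and this is only an approximation of the authors' caret-based workflow.

library(randomForest)

set.seed(123)
rf  <- randomForest(x = feat_z, y = shannon, ntree = 1000, importance = TRUE)
imp <- importance(rf, type = 1)                            # type = 1 returns %IncMSE
imp[order(imp[, 1], decreasing = TRUE), , drop = FALSE]    # rank features by importance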
A co-occurrence network has become an essential tool for understanding the symbiotic patterns of microbial communities in ecosystem studies. 32 The co-occurrence network of microbial prokaryotes constructed from correlations with |r| ≥ 0.60 and p < 0.05 had 161 nodes (including 96 C−N−S-related genera) and 679 edges (Figure 5a). The correlations identified were predominantly positive. The top 20 genera with the highest values of weight degree, betweenness centrality, and eigenvector centrality were listed. Among them, C−N−S-related taxa accounted for 11, 7, and 10 of the top 20 genera for these three node centrality indices, respectively (Figure 5b−d). Herbaspirillum ranked in the top 20 for all three node centrality indices, and this genus participated in C degradation, CH 4 oxidation, N 2 fixation, ammoxidation, and denitrification. The methanogenesis genera Methanoculleus and Methanosaeta were the main hub microbes with higher betweenness centrality indices. In addition, unclassified Comamonadaceae, unclassified Alphaproteobacteria, and unclassified Clostridiales had the highest values of weight degree, betweenness centrality, and eigenvector centrality, respectively.
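The network itself was built with ggClusterNet and visualized in Gephi; the igraph-based sketch below only illustrates, under assumed inputs, how such a correlation-thresholded network and the three centrality indices can be obtained.

library(Hmisc)
library(igraph)

# "genus_tab": assumed samples x genera abundance matrix
rc   <- rcorr(as.matrix(genus_tab), type = "spearman")
keep <- abs(rc$r) >= 0.60 & rc$P < 0.05          # |r| >= 0.60 and p < 0.05
adj  <- abs(rc$r) * keep                          # non-negative edge weights
adj[is.na(adj)] <- 0
diag(adj) <- 0

g   <- graph_from_adjacency_matrix(adj, mode = "undirected",
                                    weighted = TRUE, diag = FALSE)
wd  <- strength(g)                                # weight degree
btw <- betweenness(g, weights = NA)               # betweenness (unweighted paths)
eig <- eigen_centrality(g)$vector                 # eigenvector centrality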
3.4. Coupling Relationship between C−N−S-Related Groups. Pearson correlation showed that the Shannon diversities, NMDS1, and relative abundances of the majority of the C−N−S-related genera (except the methanogenesis ones) were significantly related to each other (Figure 6). The methanogenesis group merely had a significantly positive correlation with the N 2 fixation group in Shannon diversity and relative abundance, and methanogenesis genes were also significantly related to N 2 fixation genes at the level of relative abundance.
The network of correlations (|r| ≥ 0.60 and p < 0.05) between the relative abundances of the C−N−S-related genera included 88 nodes and 154 edges (Figure 7a). All of the correlations identified were positive. Multiple modules were shown in the co-occurrence network of the C−N−S-related genera. Two genera, Clostridium sensu stricto 3 and Clostridium sensu stricto 5, were linked to the module enriched with various methanogenesis genera. In addition, the genera with higher degree indexes mainly possessed the functions of N 2 fixation, denitrification, DNRA, C degradation, and S reduction, with the exception of methanogenesis (Figure 7b).
The network of correlations (|r| ≥ 0.60 and p < 0.05) between the relative abundances of the C−N−S-related genes included 63 nodes and 207 edges (Figure 7c). The correlations identified were predominantly positive. Multiple modules were shown in the co-occurrence network of the C−N−S-related genes, and the S-related genes (including sir, cysNC, cysH, cysD, cysI, cysN, and cysJ) and the celF gene were the main genes linking the co-occurrence network of the C−N−S-related genes.
DISCUSSION
This study comprehensively demonstrated the C−N−S-related microbial taxa and functional genes in coals. It was mainly based on the coal microbial data released in NCBI. To avoid OTU sequence differences caused by various amplified primers, the analyzed taxonomic unit was used at the genus level. In the field of coal seam microbial researches including these referenced researches, most attention has been paid to these groups related to the formation of biogenic coal bed methane, 33,34 and these studies are the key hubs for applying microbial knowledge to practical production. However, coal seams were important habitats for the coexistence of underground microbial communities, and the stable microecology in coal seams was inseparable from the synergy of multiple functional microorganisms.
Characterizing the functional properties and diversity is critical for understanding the community assembly and function relationships of biodiversity and ecosystem. 35 In particular, the methanogenesis taxa in coal seams, which have attracted wide attention, had the lowest abundance, biodiversity, and niche width among C−N−S-related microbial taxa (Figure 1). It showed that the number and relative abundance of phylotypes 36 and the range of environmental conditions that a species may tolerate 37 were lower than those of other functional groups. The C degradation and N 2 fixation groups that provided available C and N for coals and the denitrification groups with nitrate as electron acceptor had higher abundance, biodiversity, and niche width (Figure 1). Microbial growth is influenced by many factors, the most important of which is the availability of nutrients. Therefore, the availability of nutrients determined microbial community assembly. 38 Although the coal is mainly composed of the C element, the lack of available nutrients (especially available C and N) limits the microbial activity in coal seams. 39 Therefore, the addition of nutrients 40,41 was considered to improve the coal bed microbial activities and further stimulate the production potential of biogenic methane. In the coal seam environments where the available nutrients were extremely deficient, the transfer of nutrients and energy along the trophic level during the assimilation and dissimilatory biomass utilization was the basis of the ecosystem. 42 The C degradation, N 2 fixation, and denitrification could provide available C, N, and energy for the microecology in coal seams and ensure the exchange of metabolites in the microbial communities. 42 For functional genes in this study, the total abundance of genes related to the S reduction process was the highest among the different functional genes investigated. The S reduction genes here were mainly dominated by the cys genes but not the dsr genes encoding sulfate respiration (Figure 3). These cys genes were also the core linking the co-occurrence network of the C−N−S-related genes (Figure 7). There are a large number of microorganisms related to S reduction in coal seams. For example, Midgley et al. 43 found that some fermentative desulfurizing bacteria could produce H 2 S, which might be related to the symphoretic relationship of other bacteria and might also favor coal degradation. Beckmann et al. 44 considered that a high sulfate concentration and sulfatereducing bacteria did not prevent the growth of methanogenic archaea, but sulfate-reducing bacteria had limited energy and competed with methanogenic archaea for acetate. This study found that sulfate in coal seams was mainly utilized by bacteria through the assimilatory sulfate reduction pathway. Several cys enzymes were used to synthesize sulfites and convert sulfates into sulfides, and the existence of sulfate utilization enhanced the bacterial ability to produce amino acids, such as cysteine and methionine. 45 These processes provided biosulfur for the microbial communities in coals. Gene celF was another core gene linking the co-occurrence network of the C−N−S-related genes (Figure 7), which was dominant in C degradation genes (Figure 3). Such a gene was the key gene that encodes glycosidases hydrolyzing O-and S-glycosyl compounds 46 and played a vital role in the degradation of oligo-and polysaccharides 47 in coal degradation.
The study found that there were generally diverse functional microbial groups in many coal seams, among which some microorganisms with a variety of special functions had a high abundance, such as genera Pseudomonas and Herbaspirillum ( Figure 2). The C−N−S functional communities dominated the diversity and composition of microbial communities ( Figure 4) and the co-occurrence network of microbial prokaryotes ( Figure 5), and the genus Herbaspirillum also ranked in the top 20 in all three node centrality indexes. Pseudomonas is a bacterial genus that has been reported to be ubiquitous in coal seams. 48 This is precisely because that such genus has different metabolic potentials, allowing it to persist and grow in a wide range of coal seam environments and to utilize a variety of C compounds under special environmental conditions. Their lifestyle may be opportunotrophic, which was described by Singer et al. 49 Vick et al. 48 observed two Pseudomonas species with markedly different metabolic and ecological lifestyles, reflecting the broad metabolic and lifestyle diversity within such taxa, from parasitic to mutually beneficial 50 and free-living lifestyles. Genus Herbaspirillum has raised wide attention due to its ability to fix N 2 under microaerobic or anaerobic conditions; 51 in addition, it is widely involved in the C−N metabolic process including aromatic compounds metabolization 52 and nitrate reduction. 53 In addition, methanogenesis genera Methanoculleus and Methanosaeta were the main hub microbes with a higher betweenness centrality index. Methanoculleus has been reported in a coal seam in Hokkaido as the dominant methanogen. 54 Zhang et al. 55 found that the existence of Methanosaeta with Pseudomonas could enhance direct interspecies electron transfer and further promote the anaerobic degradation. These methanogens were the terminal carriers for the transformation of coal organic matter into methane and were an important driving force for the geochemical cycle of C, N, S, and other elements between the lithosphere and the atmosphere.
In contrast, the methanogenesis group was merely related to the N 2 fixation group at the genus and gene levels ( Figure 6). It fully showed the deficiency of nitrogen source in the coal seams. The N 2 fixation taxa were widely found in many coal seams, such as Jharia coal bed 56 and Alberta coal beds. 57 Here, genera Clostridium linked the module that was enriched in various methanogenesis-related genera (Figure 7). These Clostridium taxa might exist in a wide pH and temperature range and metabolize a wide range of substrates including cellobiose, glucose, xylose, vanillate, ferulate, lactate, propanol, and formate, 58 and were considered important substrate suppliers for methanogens. 59,60 In addition, such taxa contained a variety of regulatory genes responsible for regulating and absorbing N and urea. 61 In conclusion, this study comprehensively demonstrated C− N−S-related microbial taxa and functional genes in coals. There are a large number of C−N−S-related groups in coal seams. The inter-relationship of these taxa ultimately affects the microhabitat and has important implications for the decomposition of organic matter and the geochemical cycles in coal seams. Together, this study strengthens our knowledge regarding the microbial diversity and community composition of coals. Y.L., J.C., and X.Y. conducted the bulk of the data analysis for the study and co-wrote the manuscript. Y.L. and B.L. provided the funding for the study and were involved in the conceptualization of the study, as well as assisting in writing the manuscript. All authors read and approved the final manuscript.
Notes
The authors declare no competing financial interest.
The data sets generated and/or analyzed during the current study are available from the corresponding author on reasonable request. | 2022-06-24T15:05:38.868Z | 2022-06-22T00:00:00.000 | {
"year": 2022,
"sha1": "8835020f5117085af6f19eabd3ed53f61544cad7",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "42e3e619a9bcc6450040757e33bb81226bc5b58c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
119151666 | pes2o/s2orc | v3-fos-license | The similarity degree of an operator algebra
Let $A$ be a unital operator algebra. Let us assume that every {\it bounded\/} unital homomorphism $u\colon \ A\to B(H)$ is similar to a {\it contractive\/} one. Let $\text{\rm Sim}(u) = \inf\{\|S\|\, \|S^{-1}\|\}$ where the infimum runs over all invertible operators $S\colon \ H\to H$ such that the ``conjugate'' homomorphism $a\to S^{-1}u(a)S$ is contractive. Now for all $c>1$, let $\Phi(c) = \sup\text{\rm Sim}(u)$ where the supremum runs over all unital homomorphism $u\colon\ A\to B(H)$ with $\|u\|\le c$. Then, there is $\alpha\ge 0$ such that for some constant $K$ we have: $$\Phi(c) \le Kc^\alpha.\leqno (*)\qquad \forall c>1$$ Moreover, the smallest $\alpha$ for which this holds is an integer, denoted by $d(A)$ (called the similarity degree of $A$) and $(*)$ still holds for some $K$ when $\alpha=d(A)$. Among the applications of these results, we give new characterizations of proper uniform algebras on one hand, and of nuclear $C^*$-algebras on the other. Moreover, we obtain a characterization of amenable groups which answers (at least partially) a question on group representations going back to a 1950 paper of Dixmier.
Moreover, the smallest α for which this holds is an integer, denoted by d(A) (called the similarity degree of A) and ( * ) still holds for some K when α = d (A). Among the applications of these results, we give new characterizations of proper uniform algebras on one hand, and of nuclear C * -algebras on the other. Moreover, we obtain a characterization of amenable groups which answers (at least partially) a question on group representations going back to a 1950 paper of Dixmier. Consider a unital operator algebra A (i.e. a subalgebra of B(H), containing I, not assumed self-adjoint). We are interested in the following "similarity property" of A:
For any bounded unital homomorphism u: A → B(H), there is an invertible operator S : H → H (= a similarity) such that x → S −1 u(x)S is contractive.
In other words, every bounded unital homomorphism on A is similar to a contractive one. Let Sim(u) = inf{‖S‖ ‖S −1 ‖} where the infimum runs over all invertible operators S: H → H such that the "conjugate" homomorphism a → S −1 u(a)S is contractive. Now for all c > 1, let Φ(c) = sup Sim(u) where the supremum runs over all unital homomorphisms u: A → B(H) with ‖u‖ ≤ c. Assume that the above similarity property holds. Then it is easy to show that Φ(c) is finite for all c > 1. Our first observation, simple but crucial, will be that necessarily Φ(c) has polynomial growth, i.e. there is a number α ≥ 0 and a constant K such that (0.1) ∀c > 1 Φ(c) ≤ Kc^α, equivalently: any bounded unital homomorphism u: A → B(H) satisfies Sim(u) ≤ K‖u‖^α. Let d(A) be the infimum of the numbers α ≥ 0 for which (0.1) holds for some constant K.
Our second observation (which lies a bit deeper) is that d (A) is an integer, i.e. we have d(A) ∈ {0, 1, 2, 3, . . . }, and moreover there is a constant K such that (0.1) holds for α = d (A). We call d (A) the similarity degree of the operator algebra A. If the similarity property fails, then we set d(A) = ∞.
By a result due to Paulsen ([Pa4]), the similarity property is closely related to the notion of complete boundedness, for which we refer to [Pa1]. To describe this connection, we will consider the following property (C) of an operator algebra A: (C) Every contractive unital homomorphism u: A → B(H) is completely bounded. It is easy to see that this holds for all C*-algebras and for several examples of uniform algebras (such as the disc and the bidisc algebras).
Under this assumption, (see [Pa4]) a unital homomorphism u: A → B(H) is similar to a contractive one iff it is completely bounded (c.b. in short).
Let K be the C*-algebra of compact operators on ℓ 2 , let C 0 ⊂ K be the subspace of diagonal operators and let K ⊗ min A be the minimal (= spatial) tensor product. Under the above assumption (C) on A, we will show (see Theorem 4.2) that d(A) is the smallest integer d with the following property: there is a constant K such that any x in the unit ball of K ⊗ min A can be written as a product of the form

x = α 0 D 1 α 1 D 2 α 2 · · · D d α d

with "scalar" matrices α 0 , α 1 , . . . , α d in K and "diagonal" matrices D 1 , . . . , D d with entries in A, such that ‖α 0 ‖ ‖D 1 ‖ ‖α 1 ‖ · · · ‖D d ‖ ‖α d ‖ ≤ K. Thus, d(A) appears as the minimal "length" necessary to express any element of the unit ball of K ⊗ min A as an alternated product as above with 2d + 1 factors (with a good control of the norms of the factors).
More generally, if A is merely a Banach algebra with unit, we may consider it as embedded as a dense unital subalgebra into its enveloping unital operator algebra Ã. The morphism A ⊂ Ã is characterized by the property that a unital homomorphism v: A → B(H) is contractive (i.e. has norm equal to 1) iff it extends to a completely contractive homomorphism ṽ: Ã → B(H). In particular, Ã satisfies (C). In this situation, let us assume that every bounded unital homomorphism u: A → B(H) extends to a completely bounded unital homomorphism ũ: Ã → B(H). We define

(0.2) d(A) = inf{α}

where the infimum runs over all α ≥ 1 such that for some K we have ‖ũ‖cb ≤ K ‖u‖^α for all bounded unital homomorphisms u: A → B(H). If there is no such α, then we set by convention d(A) = ∞. Then again the same observations are valid: d(A) is an integer and the infimum is attained in (0.2).
An interesting example of this situation is given by group algebras (or semi-group algebras). Let G be a discrete group (resp. semi-group with unit). Let A be the group (resp. semi-group) algebra of G, i.e. A = ℓ 1 (G) equipped with convolution. In the group case, Ã coincides with the (full) C*-algebra of G, denoted by C * (G). Let g → δ g be the natural mapping from G into ℓ 1 (G) (i.e. δ g (s) = 1 iff s = g). Let u: ℓ 1 (G) → B(H) be a linear map and let π(g) = u(δ g ). Clearly u is a bounded unital homomorphism iff π is a uniformly bounded group (resp. semigroup) representation. Moreover if we define |π| = sup g∈G ‖π(g)‖ we have obviously |π| = ‖u‖.
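For completeness, the equality |π| = ‖u‖ follows from a standard two-line estimate (recalled here for convenience, not part of the original text):

$$\Big\|u\Big(\sum_{g} a_g \delta_g\Big)\Big\| \;\le\; \sum_g |a_g|\,\|\pi(g)\| \;\le\; |\pi|\,\Big\|\sum_g a_g\delta_g\Big\|_{\ell_1(G)},$$

so ‖u‖ ≤ |π|, while u(δ g ) = π(g) gives ‖u‖ ≥ sup g ‖π(g)‖ = |π|.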
In this setting, we will write d(G) instead of d (A). We can show (see Theorem 3.2 and Corollary 3.4 below) that d(G) = 1 iff G is finite and d(G) = 2 iff G is amenable and infinite.
This result gives some information on the "similarity problem" for uniformly bounded group representations. Namely, we can prove Theorem 0.1. Let G be a discrete group. The following are equivalent: (i) G is amenable.
(ii) There is a constant K and α < 3 such that for any H and for any uniformly bounded group representation π: G → B(H) there is an invertible operator S: H → H (called "a similarity") with ‖S −1 ‖ ‖S‖ ≤ K|π|^α and such that g → S −1 π(g)S is a unitary representation of G. (iii) Same as (ii) with K = 1 and α = 2.
Note: A uniformly bounded representation π: G → B(H) is called unitarizable if there is an invertible S: H → H such that S −1 π(·)S is a unitary representation. The implication (i) ⇒ (iii) is a classical fact proved in 1950 by Dixmier [Di], following earlier work by Sz.-Nagy [SN] for G = Z. At that time, there was no known example of a uniformly bounded non-unitarizable representation. The first example of this phenomenon was given in 1955 by Ehrenpreis and Mautner [EM] on the group SL 2 (R) (cf. also [KS]). See Cowling's notes [Co] for more information on the Lie group case. Later on, many constructions were given on non-commutative free groups (or on any discrete group containing a non-commutative free group as a subgroup). See for example the references in [MPSZ] and [BF2]. See also [P1, Chapter 2]. In the same paper, Dixmier asks whether amenable groups are the only groups G on which every uniformly bounded representation π is unitarizable. This remains an open question. Our result shows that if one incorporates in Dixmier's question the fact that the similarity S can be found with ‖S‖ ‖S −1 ‖ ≤ |π|^2, then the answer is affirmative.
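For the reader's convenience, here is a sketch of the classical Sz.-Nagy–Dixmier averaging argument behind (i) ⇒ (iii); this is standard background, not a computation taken from the present paper. Let m be an invariant mean on ℓ∞(G) and let π: G → B(H) be uniformly bounded. Define a new inner product by

$$\langle x, y\rangle_{\pi} \;=\; m\bigl(g \mapsto \langle \pi(g)x,\ \pi(g)y\rangle\bigr).$$

Then |π|⁻²‖x‖² ≤ ⟨x, x⟩π ≤ |π|²‖x‖², and the invariance of m gives ⟨π(h)x, π(h)y⟩π = ⟨x, y⟩π for every h in G. Writing ⟨x, y⟩π = ⟨T x, y⟩ with T positive and invertible, and setting S = T^{1/2}, one checks that g → Sπ(g)S⁻¹ is a unitary representation with ‖S‖ ‖S⁻¹‖ ≤ |π|², which is the bound asserted in (iii) (up to renaming S).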
It seems conceivable that d(G) < ∞ ⇒ d(G) ≤ 2 automatically, but at the time of this writing we have not been able to prove this, and we are now more inclined to believe (in analogy with Corollary 6.2) that there are examples of discrete groups G with 2 < d(G) < ∞. Note that these would be non-amenable groups not containing F 2 , the free group on two generators. While such examples are known to exist [O1-2], they still appear difficult to understand (see for example the exposition in [Pat]).
Recently, we proved ( [P5]) that when A is the disc algebra we have d(A) = ∞, thus solving the "Halmos problem" on polynomially bounded operators. Of course, this also holds for the polydisc algebra, the ball algebra or for any uniform algebra admitting a quotient algebra (unitally) isometric to (or completely isomorphic to) the disc algebra. It is conceivable that d(A) = ∞ for any proper uniform algebra, however at this point we are only able to show the following (see Theorem 5.1).
Theorem 0.2. Let K be a compact set. Let A ⊂ C(K) be a uniform algebra (i.e. a closed unital subalgebra which separates the points of K). Then A = C(K) iff d(A) ≤ 2 and A satisfies (C).
We now turn to C * -algebras. Unfortunately, at this time we are unable to produce examples of C * -algebras A for which d(A) takes arbitrarily large finite values (or one for which the degree is infinite). This would solve (negatively) a well known open problem, due to Kadison [Ka] (see [P1]). We conjecture that there is a C * -algebra A (probably the reduced C * -algebra of the free group on infinitely many generators) with d(A) = ∞. Unfortunately we only are able to produce examples of C * -algebras A with d(A) equal to either 1 (finite dimensional case), 2 (nuclear case), and 3 (B(ℓ 2 )).
We give a result (Theorem 6.1) which is very close to proving that, for a C*-algebra, d(A) ≤ 2 implies that A is nuclear. Indeed, it is known (see [CE]) that A is nuclear iff, for any *-representation π: A → B(H), the von Neumann algebra generated by π is injective. What we can prove is the following (see Theorem 6.1).
Theorem 0.3. Let A be a unital C * -algebra such that d(A) ≤ 2. Then, whenever a * -representation π: A → B(H) generates a semi-finite von Neumann algebra, that von Neumann algebra is injective.
We are convinced that the similarity degree d (A) can take arbitrary integer values when A runs over all possible (non self-adjoint) operator algebras, but again we have not been able to verify this yet. However, in the more general framework of "similarity settings" considered below, it is easy to exhibit examples realizing any possible integral value of the degree, see Remark 3.6.
The present investigation was considerably influenced by several sources [Pel,B1,BRS,BP2] which I would like to acknowledge here: 1) Peller's paper [Pel] contains a discussion (partly based on some ideas of A. Davie [Da]) of the space of coefficients of representations of a Q-algebra (with some consequences for operator algebras). In view of the recent characterizations in [BRS] and [B1] of operator algebras which use the Haagerup tensor product, it was natural to try to transpose these ideas from [Pel] to the "new" category of operator algebras with c.b. maps as its morphisms. This is the content of section 1 below. 2) Blecher and Paulsen's paper [BP2] contains several striking factorization theorems for elements in the maximal tensor products of various operator algebras, analogous to the factorization of polynomials into products of polynomials of degree 1. Their factorization is into infinitely (or at least unboundedly) many matricial factors (see §7). It was natural to wonder in which case the number of factors could be bounded by a fixed integer. This is what lead to the central notion of this paper: the "similarity degree" (see Theorem 4.2).
Recall that the "maximal" operator space structure max(E) of a Banach space E may be described as follows: for any x in M n (max(E)) we have ‖x‖ < 1 iff, for some integer N, there is a diagonal matrix D in M N (E) and scalar matrices β ∈ M n,N and γ ∈ M N,n such that (0.4) x = βDγ and ‖β‖ ‖D‖ ‖γ‖ < 1.
We refer the reader to [Pa6] for more information on this.
We now review the contents of this paper, section by section. In section 1, we introduce the notion of "similarity setting" which allows us to unify the various similarity problems that we wish to consider. A similarity setting is a triple (i, E, A) where E is an operator space, A ⊂ B(H) a unital subalgebra and i: E → A is an injective linear map with i cb ≤ 1, such that A is generated by i (E).
Given such a setting, for any c ≥ 1 we construct the enveloping unital operator algebra Ãc which contains A as a dense unital subalgebra and has the property that any unital homomorphism u: A → B(H) with ‖ui‖cb ≤ c extends to a completely contractive unital homomorphism on Ãc. In particular, when c = 1, u is completely contractive on Ã1 iff it is completely contractive when restricted to E. We also introduce in §1 the universal unital operator algebra (denoted by OA(E)) of an arbitrary operator space E. The inclusion E → OA(E) can be viewed as the "maximal" setting involving E. The main result of §1 is Theorem 1.7 which gives an alternate description of Ãc as a canonical quotient of OA(E). This is the crucial tool used in §2, where we present our theory of the similarity degree d of a setting (i, E, A). This degree d is defined as the smallest number α ≥ 0 with the following property: there is a constant K such that for any c ≥ 1 and any unital homomorphism u: A → B(H) with ‖ui‖cb ≤ c there is an invertible S: H → H such that e → S −1 ui(e)S is completely contractive and ‖S −1 ‖ ‖S‖ ≤ Kc^α.
We prove (see Corollary 2.7) that d is an integer and that the preceding property still holds for α = d.
In §3, we apply this to uniformly bounded group representations on a discrete group G, we denote the degree in this case by d(G), and we prove the above Theorem 0.1, which implies that d(G) ≤ 2 iff G is amenable. We actually prove a stronger version involving the space of "coefficients" of uniformly bounded (u.b. in short) representations. This is proved by applying §2 to the following setting: E = ℓ 1 (G) with its (usual) maximal operator space structure (which also can be defined by duality with c 0 , cf. [ER,BP1]), and A ⊂ C * (G) is the image of ℓ 1 (G) under the canonical map from ℓ 1 (G) into C * (G).
In §4, we come to the most natural "setting": we consider a unital operator algebra A ⊂ B(H) and we let A = A and E = max (A) in the sense of [BP1]. Then we denote by d(A) the corresponding degree. We give a number of characterizations of this number.
In §5, we investigate the class of uniform algebras, i.e. A ⊂ C(K) (K compact), A is unital and separates the points of K. In analogy with the group case, we prove that there is a constant C such that any unital homomorphism u: C(K) → B(H) satisfies ‖u‖cb ≤ C ‖u‖^2. In §6, we turn to C*-algebras and prove an analogous result (with the assumption that A has sufficiently many semi-finite representations): d(A) ≤ 2 iff A is nuclear or equivalently (by results of Connes and Haagerup, see [H4]) iff A is amenable.
Finally, in §7, we give a slightly expanded version of some of Blecher and Paulsen's results in [BP2]. We give, as an illustration, an apparently new characterization of the elements of the space B(G) formed of the coefficients of unitary representations of a discrete group G, to be compared with the case of uniformly bounded representations treated in Theorem 1.12.

§1. Enveloping operator algebras. Preliminary results

It will be convenient to work in the following very general setting: we give ourselves a unital algebra A together with a linear subspace E ⊂ A. We assume that E is given with an operator space structure. We will denote by i: E → A the inclusion mapping. Moreover, we assume that the unital algebra generated by i(E) is the whole of A.
In addition, we assume that A can be faithfully represented in B(H) for some Hilbert space H by a unital representation u 0 : A → B(H) such that u 0 i cb ≤ 1. We will then say that the triple (i, E, A) is a "similarity setting".
Given such a setting, we can define for any c ≥ 1 the enveloping unital operator algebra Ãc as follows: Consider the family C c of all unital homomorphisms u: A → B(H u ), with H u a Hilbert space, such that ‖ui‖cb ≤ c.
Then we equip A with the norm ‖a‖ c = sup{‖u(a)‖ : u ∈ C c }. Note that ‖a‖ c < ∞ since u is a homomorphism and i(E) generates A. Moreover, since u 0 ∈ C c , we indeed have a norm. We denote by Ãc the completion of A for this norm. Clearly the direct sum of the homomorphisms u in C c defines an isometric unital homomorphism from Ãc into B(⊕ u∈C c H u ), which allows us to consider from now on Ãc as a unital operator algebra (and a fortiori as an operator space).
Note that whenever 1 ≤ c ≤ d we have C c ⊂ C d hence we have a completely contractive unital homomorphism i c,d : Ãd → Ãc extending the identity of A. Note that Ãc is characterized by the following property: (1.1) any unital homomorphism u: A → B(H) with ‖ui‖cb ≤ c extends to a completely contractive unital homomorphism from Ãc into B(H). In this general setting, we wish to study the following Similarity Property: For each u in ∪ c>1 C c , there is an invertible operator S: H → H (= a similarity) such that the homomorphism u S : a → S −1 u(a)S satisfies ‖u S i‖cb ≤ 1 (or equivalently is in C 1 ). As we will see in the examples below, our setting contains a number of fundamental similarity problems: when A is a group algebra (i.e. A = ℓ 1 (G)) or when A is a C*-algebra, or when A is the disc algebra.
Example 1.1. Let G be a discrete group. Let A be the group algebra of G, i.e. A = ℓ 1 (G) equipped with convolution. Let Γ ⊂ G be a set of generators for G and let E = ℓ 1 (Γ).
In this situation, it is easy to check that Ã1 = C * (G), the "full" C*-algebra of G (= the enveloping C*-algebra of ℓ 1 (G)). Then the similarity property in this context means that for any uniformly bounded group representation u: G → B(H) there is a similarity S: H → H such that sup t∈G ‖S −1 u(t)S‖ ≤ 1. We study this problem in section 3.
Example 1.2. Take for G the additive semigroup N = {0, 1, 2, . . .}, so that A = ℓ 1 (N) and a bounded unital homomorphism u: A → B(H) corresponds to a power bounded operator T = u(δ 1 ). Since there are power bounded operators which are not similar to contractions ([Fo, Le]), the similarity property does not hold in this case.
Example 1.3. Let A = A(D) be the disc algebra and let E = A(D) equipped with its "maximal" operator space structure, i being the identity on A(D). Then consider u: A → B(H) such that ‖ui‖cb ≤ c and let T = u(z). Here ui is c.b. iff T is "polynomially bounded". Moreover ‖ui‖cb ≤ c holds iff we have ‖P (T )‖ ≤ c sup |z|≤1 |P (z)| for any polynomial P.
The similarity problem in this case is a well known problem usually attributed to Halmos. The problem was solved by a counterexample in [P5]. Analogous questions can be formulated for any uniform algebra. We will return to this topic in §5.
Example 1.4. Let A be a C*-algebra and let E = max(A), with i again equal to the identity. Then the similarity problem reduces again to a well known open problem raised by Kadison [Ka]: is every bounded unital homomorphism u: A → B(H) similar to a *-representation?
We discuss the C * -algebra setting in §6.
Let E be an arbitrary operator space. We wish to define the "free unital operator algebra" associated to E. One way to define it is as follows. We consider the free unital (noncommutative) algebra P(E) associated to E (equivalently, this is the tensor algebra over E). The elements of P(E) may be described as the vector space of formal sums with λ 0 , λ i 1 , . . . , λ N i 1 ...i N ∈ | C and with e 1 i , e 2 i 1 , . . . all in E, equipped with the "free" product operation.
Grouping terms, we may rewrite (1.2) as (1.3) P = P 0 + P 1 + · · · + P N with P 0 , P 1 , . . . , P N "homogeneous", i.e. (1.3') We will denote by E (N) the linear subspace of P(E) spanned by all elements of the form (1.3)'. When N = 0 we define by convention The space E (1) is just E viewed as a subset of P (E). Then consider the family J of all the mappings v: and P = sup v∈J v(P ) .
We will denote by OA(E) the completion of P(E) for this norm. (The fact that it is a norm easily follows from (1.10) and (1.14) below.) Clearly we have P Q ≤ P Q for all P, Q in P(E), hence we have a unital Banach algebra structure on OA (E). We denote by OA N (E) the closure in OA (E) of all the elements of the form (1.3). Moreover, we denote by E N the closure in OA(E) of the linear subspace E (N) . By construction, we have a natural embedding which allows us to consider from now on OA(E) as a unital operator algebra (and a fortiori as an operator space) containing E completely isometrically. This operator space structure can be described as follows: consider an element G in K ⊗ P (E). Clearly G can be written (for some N ) as a finite sum of the following form For short we will also write this as Then the following formula encodes the operator space structure of OA(E): This algebra OA(E) is characterized by the following (easily verified) property: (1.7) Let B be any unital operator algebra. For any v: E → B with v cb ≤ 1 there is a unique unital homomorphismv: OA(E) → B extending v such that v cb ≤ 1.
(Note that actually the extension v̂ is the restriction of a C*-algebra representation.) For instance, we may consider B = OA(E) and v z : E → OA(E) defined by v z (e) = ze, where z ∈ ℂ with |z| ≤ 1. Then by construction of OA(E), ‖v z ‖cb ≤ 1, hence there is a unique unital homomorphism v̂ z : OA(E) → OA(E) extending v z and such that ‖v̂ z ‖cb ≤ 1. We will use the notation ω(z) = v̂ z . Then it is easy to check (since v̂ z is a homomorphism extending v z ) that if P is as in (1.3) we have ω(z)P = P 0 + zP 1 + · · · + z N P N .
Similarly, if G is as in (1.5) we have (1.8) ω(z)G = G 0 + zG 1 + · · · + z N G N . It will be useful to record here the following fact.
Lemma 1.5. Each P in P(E) can be written in a unique way as (1.9) P = P 0 + P 1 + · · · + P N , for some integer N with P j ∈ E (j) for all j ≥ 0. If we define Q j (P ) = P j (i.e. Q j (P ) = 0 ∀j > N ) then Q j extends to a complete contraction from OA(E) onto E (j) .
Proof. By the preceding formula (1.8) we have G j = ∫ z −j ω(z)G m(dz), where m denotes normalized Haar measure on {z | |z| = 1}. By convexity this yields (1.10) ‖G j ‖ min ≤ ‖G‖ min .
Since G j = (I K ⊗ Q j )(G), this shows that Q j cb ≤ 1, and it also shows the unicity of the expression (1.9).
Remark. Another description of OA(E) is as follows: we consider the C * -algebra C * < E > constructed in [Pe]. This is characterized by the property that for any v: E → B (B any C * -algebra) with v cb ≤ 1 there is a representation π: C * < E >→ B extending v. Then we can define OA(E) as the unital (non-selfadjoint) operator algebra generated by the elements of E in the unitization of C * < E >. We now introduce the "product map" π 1 and a whole family of deformations π z . Consider z ∈ | C with |z| ≤ 1 and let c = 1/|z|. We can define a unital homomorphism π z : OA(E) →à c as follows: Let V z = zi: E →à c . Then V z cb ≤ 1. Therefore, by (1.7) (sinceà c is an operator algebra) there is a unique unital homomorphism π z : OA(E) →à c extending V z and such that π z cb ≤ 1. We will need the following simple observation.
Lemma 1.6. Consider our usual similarity setting (i, E, A). Assume that E contains the unit element 1 A of A. Let 0 ≤ j < N . Then for any g in Proof. We introduce the map V : OA(E) → OA (E) which is simply the left multiplication by 1 A , i.e. V (x) = x ⊗ 1 A . Clearly we have Moreover for any P in P(E) we have clearly π 1 (V (P )) = π 1 (P ) hence Similarly for any g in K ⊗ E (j) , let Then we clearly have (1.11) and (1.12). (Indeed, since OA(E) is an operator algebra We now come to our first result.
Theorem 1.7. Let c ≥ 1 and z = 1/c. The mapping π z is a completely contractive surjection from OA(E) ontoà c . Moreover, it induces canonically a completely isometric isomorphism Proof. Note that π z is characterized as the unital homomorphism such that if we view E ⊂ OA(E) and A ⊂à c . By construction we have π z cb ≤ 1. On the other hand, note that π z (OA(E)) contains i (E) and is a subalgebra, hence it contains A ⊂à c since we assume that i(E) is generating. Therefore we can define a mapping simply by setting This is clearly a unital homomorphism. Moreover we have Hence by the defining property ofà c , (since OA(E)/ ker(π z ) is an operator algebra [BRS]) there is a unique unital homomorphismũ z :à c → OA(E)/ ker(π z ) such that ũ z cb ≤ 1. Moreoverũ z|A = u z hence we have (σ z ) −1 (a) = u z (a) ≤ a à c for any a in A. By the density of A inà c , it follows that σ z is a surjective isometry, andũ z = (σ z ) −1 , so that finally Thus σ z is a complete isometry. For the last assertion in Theorem 1.7, we need an obvious extension of the main result in [BRS] to non-closed unital subalgebras of B(H), as follows: let P be a unital subalgebra of B(H) and let I ⊂ P be a 2-sided ideal which is closed in P.
Then the quotient space Z = P/I can be equipped with a (noncomplete) operator space structure by Ruan's theorem [R]. Consequently, its completion Z can be equipped with a (complete this time) operator space structure.
On the other hand, Z is clearly a unital Banach algebra, and it is easy to check that the product mapping Z ⊗ h Z → Z is a complete contraction. Hence by [BRS] there is a completely isometric unital homomorphism from Z into B(H) for some Hilbert space H.
Returning to the situation in Theorem 1.7, let P = P(E) (equipped with the operator space structure induced by OA(E)) and I = P(E) ∩ ker(π z ). Let us denote Z c = P/I in this case. Clearly, the restriction of π z to P(E) induces a completely contractive unital homomorphismσ z : Z c →à c , which is injective on Z c . Since we assume that i(E) generates A, we haveσ z (Z c ) = π z (P(E)) = A. Thus the inverse ofσ z|Z c defines a homomorphism u z : A → Z c such that u z i cb ≤ c, and repeating the preceding argument we obtain thatσ z must be a complete isometry from Z c ontoà c .
The next result is a simple reformulation of Paulsen's results in [Pa4].
Proposition 1.8. Let K ≥ 0 be a constant. Let E be an operator space, A a unital algebra and i: E → A an injection with i(E) generating A. The following properties of a unital homomorphism u: (ii) There is an invertible operator S: H → H with S S −1 ≤ K such that the map u S : A → B(H) defined by u S (a) = S −1 u(a)S extends completely contractively toà c . (iii) There is an invertible operator S with S S −1 ≤ K such that Proof. The equivalence (i) ⇔ (ii) is exactly Paulsen's result [Pa4]. Clearly (ii) ⇒ (iii) holds since by construction i: E →à c cb ≤ c. Now assume (iii). By the defining property ofà c , u S : A → B(H) admits an extensionũ S :à c → B(H) with ũ S cb ≤ 1. Hence we obtain (ii).
Remark 1.9. Let E be any operator space. Consider the iterated Haagerup tensor product Then x can be written as a finite sum It is proved in [CES] that we have where the supremum runs over all possible choices of H and of complete contractions We claim that actually this supremum is attained when σ 1 , . . . , σ N are all the same, more precisely we have where the supremum runs over all possible H and all complete contractions σ: E → B(H).
Indeed, this follows from a trick already used by Blecher in [B1] and which seems to originate in Varopoulos's paper [V]. The trick consists in replacing σ_1, . . . , σ_N by a single map σ. (More precisely, σ(e) is the (N + 1) × (N + 1) matrix having (σ_1(e), . . . , σ_N(e)) above the main diagonal and zero elsewhere.) Then it is easy to check that ‖σ‖_cb = sup_j ‖σ_j‖_cb and that the corresponding identity holds for all x_1, . . . , x_N. From this our claim immediately follows. Note that our claim shows for instance that in the case E = max(ℓ^n_1), the space E ⊗_h · · · ⊗_h E (N times) can be identified completely isometrically with a subspace of C*(F_n) (here F_n is the free group with n generators), namely the subspace spanned by all products U_{i_1} U_{i_2} · · · U_{i_N}, where U_1, . . . , U_n denote the free unitary generators of C*(F_n). Although this useful fact might have been observed by others, it does not seem to have been recorded into print.

Proposition 1.10. Let E be any operator space. Consider E as embedded into OA(E). Fix N ≥ 1, and recall that we denote by E_N the closed subspace of OA(E) generated by all products of the form x_1 · x_2 . . . x_N with x_i ∈ E. Then the natural "product" mapping from E ⊗_h · · · ⊗_h E (N times) onto E_N is a complete isometry.

Proof. For simplicity let us denote X = E ⊗_h · · · ⊗_h E (N times). Since the algebraic tensor product E ⊗ · · · ⊗ E is dense in X and similarly for E_N, it suffices to prove the required norm identity for any element of the algebraic tensor product. But this is immediate by (1.13) and (1.6).
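As an illustration of the trick from Remark 1.9 above, the single map σ can be sketched as follows; the block-matrix form is an assumption consistent with the description given there.
\[
\sigma(e) \;=\;
\begin{pmatrix}
0 & \sigma_1(e) & & \\
 & 0 & \ddots & \\
 & & \ddots & \sigma_N(e)\\
 & & & 0
\end{pmatrix}
\in M_{N+1}\bigl(B(H)\bigr),
\]
so that $\|\sigma\|_{cb}=\sup_j\|\sigma_j\|_{cb}$, while for $x_1,\dots,x_N$ in $E$ the product $\sigma(x_1)\sigma(x_2)\cdots\sigma(x_N)$ has $\sigma_1(x_1)\sigma_2(x_2)\cdots\sigma_N(x_N)$ as its upper right entry, whence $\|\sigma_1(x_1)\cdots\sigma_N(x_N)\|\le\|\sigma(x_1)\cdots\sigma(x_N)\|$.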
We now record here several consequences of Theorem 1.7 (inspired by Peller's results for the category of Q-algebras in [Pel,Prop. 4.2 and 4.3]).
Moreover, for any linear map w: A → B(H), the following assertions are equivalent: (i) For some c ≥ 1 and some K ≥ 0, w extends to a c.b. map w:Ã c → B(H) with w cb ≤ K.
(ii) There are constants c ′ ≥ 1 and K ′ ≥ 0 such that, for any N ≥ 1, the mapping w N : (ii)' There are constants c ′ ≥ 1 and K ′ ≥ 0 such that, for any N ≥ 1, there are bounded linear mappings u i : Proof. The first part is an obvious consequence of the first assertion in Theorem 1.7. We now prove the second part. The equivalence between (ii) and (ii)', with the same constants K ′ , c ′ , is a particular case of the well known factorization of completely bounded multilinear forms (cf. PaS]). We now turn to the remaining equivalence. Assume (i). Then wπ z cb ≤ K, hence wπ z|E N cb ≤ K, but for Hence by Proposition 1.10, we have This proves (i) ⇒ (ii). Conversely, assume (ii). Let c > c ′ and z = 1/c as before. Then we have by (1.16) By Lemma 1.5, this implies using (1.15), We now illustrate the meaning of Corollary 1.11 in the group case.
Theorem 1.12. Let G be a group (or merely a semi-group). Consider a function f : G → B(H). The following assertions are equivalent.
(i) There is a uniformly bounded representation π: G → B(H π ) and bounded operators ξ: H π → H and η: H → H π such that (ii) There are constants K ′ ≥ 0 and c ′ ≥ 1 such that, for each N ≥ 1, the function f N : defines (with the obvious identification) an element of cb(ℓ Proof. We merely apply Corollary 1.11 and Proposition 1.8 with A = E = ℓ 1 (G). Note that, using the factorization of cb maps, it is easy to verify that (i) holds iff the mapping t → f (t) extends linearly to a mapping w:Ã c → B(H) with w cb ≤ K.
We leave the details to the reader.
Remark. Note that if (i) holds in the preceding statement with |π| ≤ c then we obtain (ii) with K = ‖ξ‖ ‖η‖ and the same number c. However, if (ii) holds we only obtain (i) with a representation π such that |π| ≤ (1 + ε)c′ (with ε > 0) and with ‖ξ‖ ‖η‖ ≤ K_ε = 1 + K Σ_{N≥1} (1 + ε)^{−N}. Indeed, these are the constants appearing in the proof of Corollary 1.11. Nevertheless, we will see below (see Corollary 7.8) that, in the particular case c′ = 1, we can get rid of this extra factor (1 + ε).

§2. Main results

Let E, A and i: E → A be our general setting as described in the beginning of §1. We will assume that the following holds: Every unital homomorphism u: A → B(H) such that ‖ui‖_cb < ∞ is similar to a homomorphism such that ‖ui‖_cb ≤ 1, i.e. there is an invertible operator S: H → H such that the map e → S^{−1} ui(e)S is completely contractive.
When this holds we will say that in this setting the similarity property holds. We will need to carefully keep track of the constants involved in this phenomenon.
Lemma 2.1. If the similarity property holds then there is, for each c ≥ 1, a number Φ(c) such that, for any unital homomorphism u: Proof. This is elementary. Just consider the unital homomorphism U = u∈C c u and a similarity S such that S −1 U S is contractive then restrict to the invariant subspaces associated to each u in C c . We get the announced bound with Φ(c) = S S −1 .
The preceding lemma allows us to define the following parameter associated to the similarity property where the infimum runs over all S: H u → H u invertible such that u S i cb ≤ 1 where u S (a) = S −1 u(a)S. When the supremum is infinite, we write Φ(i, c) = ∞ by convention. Equivalently by Proposition 1.8 we have where by u cb(à 1 ,B(H u )) we mean that we compute the cb norm of u: By the definition ofà c and by (2.2), when the similarity property holds, then the natural mapà c →à 1 (which is always a complete contraction by (1.0)) is a complete isomorphism and (2.2) can be rewritten as It will be convenient to introduce the following notation Clearly we can assume that S is hermitian. We then invoke the three lines lemma. Consider z ∈ | C with 0 ≤ Re(z) ≤ 1 and e ∈ E. If Re z = 1 we have S −z ui(e)S z ≤ e , and if Re z = 0 we have S −z ui(e)S z ≤ c e . Hence by the subharmonicity of More generally, the same reasoning exactly yields that the map v: Since this holds for all u in C c , we have Proof. Assume c > 1 and ϕ(c) = c β with β < α.
The rest is obvious by monotonicity.
Hence we have
Corollary 2.4. If the similarity property holds (i.e. if Φ(i, c) < ∞ for all c ≥ 1) then there is a constant K and an exponent α ≥ 0 such that We come now to our main result.
Theorem 2.5. Consider our usual setting E, A, i and assume that the similarity property holds. More precisely we assume that for some constants K > 0 and α > 0 we have Φ(i, c) ≤ Kc α for all c large enough. Let N be an integer with α < N + 1.
this last property implies that for some constant K 1 we have Proof of Theorem 2.5. Let us denote for simplicity We first fix c > 1 chosen large enough so that Note that since N + 1 > α, this choice is possible. By the standard iteration argument used in the proof of the open mapping theorem, it suffices to prove the following.
Claim. There is a constant K′′ such that for any f in the open unit ball of K ⊗_min Ã_1, with f ∈ K ⊗ A, there is an element f̃ in K ⊗_min X_N with ‖f̃‖_min < K′′ and such that (2.6) ‖(I_K ⊗ π_1)(f̃) − f‖_min < 1/2.
By our assumption, we have a natural isomorphism ϕ z :Ã 1 →Ã c which is the identity on A, with ϕ z cb ≤ Kc α , z = 1/c. Let f be as in our present claim. Then we have Hence by Theorem 1.7 there is g in K ⊗ min OA(E) such that (2.7) g min < Kc α and (I K ⊗ π z )(g) = (I K ⊗ ϕ z )(f ).
Note that since ϕ z is the identity on A and f ∈ K ⊗A, we may write (I K ⊗ϕ z )(f ) = f . We can assume that, for some m, g is of the form g = g 0 +· · ·+g m with g j ∈ K⊗E (j) . By Lemma 1.5 we have
Let f̃ = Σ_{j=0}^{N} z^j g_j; then by (2.8) we have ‖f̃‖_min ≤ (N + 1)Kc^α = K′′ and (2.6) holds. This proves our claim. Thus we have proved that the "product map" π_{1|X_N}: X_N → Ã_1 is a complete surjection. Now let us show that this implies that Φ(i, c) ≤ K_1 c^N for all c ≥ 1. To do that, consider a unital homomorphism u: A → B(H) with ‖ui‖_cb ≤ c, and let ũ: Ã_c → B(H) be the canonical extension of u. Then we have (2.9) ‖ũ‖_cb(Ã_1, B(H)) ≤ K′′ ‖ũ π_{1|X_N}‖_cb(X_N, B(H)), but, for any j, we have, if z = 1/c, by (1.1) and Theorem 1.7, a bound ≤ c^j.
By Lemma 1.5 this implies Hence (2.9) and (2.10) yield By (2.2) this gives the announced estimate (2.4). Finally, replacing g 0 , . . . , g N−1 by g ′ 0 , . . . , g ′ N−1 according to Lemma 1.6 we can obtain, in the case 1 A ∈ E, an elementf = N−1 0 Moreover by (1.12) and (2.8) we have This justifies the last assertion. By a simple modification of the preceding proof we obtain: Theorem 2.6. Fix a number α > 0 and let N be an integer with N ≤ α < N + 1. Let X ⊂ K be a closed subspace for which there is a projection P : K → X with P cb = 1. Assume that there is a constant K such that, for any f in X ⊗ A we have Then the restriction of there is a constant K ′ such that, for any f in X ⊗ minÃ1 with f min < 1, there isf in X ⊗ min OA N (E) with f min < K ′ such that (I X ⊗ π 1 )(f ) = f . Furthermore, this last property implies that (2.11) actually holds with α = N for some (possibly different) constant K. Finally, if 1 A belongs to E, then the restriction of I X ⊗π 1 defines a surjection from X ⊗ min E N onto X ⊗ minÃ1 .
Corollary 2.7. Consider our usual setting (E, A, i) and assume that the similarity property holds. Let d be the infimum of the numbers α > 0 for which there is a constant K such that Φ(i, c) ≤ Kc α for all c large enough. Then d is an integer. Moreover, there is a constant K ′ such that for all c ≥ 1 we have We will call d the "similarity degree" of our setting (E, A, i).
Proof. Let N be the integer such that N ≤ d < N + 1. Then Theorem 2.5 implies d ≤ N , hence d = N . Thus d is an integer. Fix α < d. Then by Lemma 2.3, we have necessarily Φ(i, c) ≥ c α for all c ≥ 1. By continuity, this must hold also for α = d, whence the left side of (2.12). Finally, the right side of (2.12) follows from the last part of Theorem 2.5.
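In view of the proof just given (the left-hand inequality coming from Lemma 2.3 and continuity, the right-hand one from the last part of Theorem 2.5), the estimate (2.12) can be read as the following two-sided bound; this is an inference from the surrounding text rather than a quotation.
\[
c^{\,d} \;\le\; \Phi(i,c) \;\le\; K'\,c^{\,d} \qquad (c \ge 1).
\]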
Remark 2.8. The case d = 0 is of course trivial; it happens iff A is one dimensional. The case d = 1 is also trivial, although a bit more interesting. By Theorem 2.5, d = 1 happens only if the operator space Ã_1 is completely isomorphic to a quotient space of the direct sum of ℂ with E. For instance, in the situation of the basic Example 1.1, we have d = 1 only if C*(G) is completely isomorphic to a quotient space of ℓ_1(Γ), or equivalently only if C*(G) is a max-space in the sense of [BP1]. By [BP1, Pa5], we know that this can happen only if C*(G) is finite dimensional, whence only if (and a posteriori iff) G is finite.
Remark 2.9. For simplicity, we will identify E with i (E) in this remark, so we view E simply as a subset of A. We also view A as a subset ofà 1 . We will moreover assume that E contains the unit. Then, by Theorem 2.5, the degree d (as defined in Corollary 2.7) is equal to the smallest integer d with the property that the natural product map from E ⊗ h · · · ⊗ h E (d times) toà 1 is a complete surjection. By the very definition of the Haagerup tensor product, this last property can be restated as follows: there is a constant K such that for any n, any ε > 0 and any a = (a ij ) in M n (A) we can find matrices x 1 , . . . , x d with (say) x 1 ∈ M q 1 q 2 (E), (E) and with q 1 = n = q d+1 so that the matricial product and finally we have In most of the "concrete" examples considered below the space E is a "maximal" operator space in the sense of [BP1]. In that case, we may apply the decomposition decribed in (0.4) to any rectangular matrix x in M pq (E) (by just adding enough zeros to make it a square matrix). Using this fact, we obtain the following.
Proposition 2.10. Consider a setting (i, E, A). Assume that the operator space E is a maximal operator space (in the sense of [BP1]) and that i(E) contains the unit of A. Then the similarity degree d of (i, E, A) is equal to the smallest integer d with the following property: there is a constant K such that for all n any element x in M_n(A) with ‖x‖_{M_n(Ã_1)} < 1 can be written as a limit (in the norm of M_n(Ã_1)) of matricial products of the form α_1 D_1 α_2 D_2 · · · D_d α_{d+1} (again we view E as a subset of A), where α_1, . . . , α_{d+1} are rectangular scalar matrices, with say α_i ∈ M_{p_i q_i}, p_1 = n, q_{d+1} = n, and D_1, . . . , D_d are diagonal matrices with entries in E, with D_i ∈ M_{q_i q_i}(E) (with q_i = p_{i+1}), and finally we have ‖α_1‖ · · · ‖α_{d+1}‖ ‖D_1‖ · · · ‖D_d‖ ≤ K. (Note that we can assume if we wish, by adding zero entries, that q_2 = p_3 = q_3 = · · · = p_d = q_d = N for some N large enough.)

§3. Groups

Let G be a discrete group. In this section, we apply our results in the case A = ℓ_1(G), with i: E → A equal to the identity. We equip E = ℓ_1(G) with its "maximal" operator space structure, so that for a map u: E → B(H) boundedness and complete boundedness are equivalent and ‖u‖_cb = ‖u‖.
Observe that A = ℓ_1(G) is a unital (Banach) algebra for the convolution product. The unit element of A is δ_e defined by δ_e(t) = 1 if t = e and 0 otherwise. We have ‖δ_e‖_E = 1. It is classical in this case that Ã_1 = C*(G), the full C*-algebra of G. Indeed, any contractive unital homomorphism u: ℓ_1(G) → B(H) induces a norm one representation π: G → B(H) which is automatically a unitary representation. (Indeed ‖π(g)‖ ≤ 1 and ‖π(g)^{−1}‖ ≤ 1 implies π(g) unitary for any g in G.) It also is a classical fact that the dual of Ã_1 = C*(G) can be identified with the space B(G) of all coefficients of the unitary representations of G (cf. [Ey, FTP]). The space B(G) is defined as the space of all functions ϕ: G → ℂ for which there is a unitary representation π: G → B(H_π) and vectors ξ, η ∈ H_π such that (3.1) holds, with ‖ϕ‖_{B(G)} defined as the infimum of ‖ξ‖ ‖η‖ over all possible representations of ϕ as in (3.1). One can imitate this definition for the algebra Ã_c: Let us denote by B_c(G) the space of all functions ϕ: G → ℂ for which there is a uniformly bounded representation π: G → B(H_π) with |π| ≤ c and vectors ξ, η in H_π such that (3.1) holds. We then define ‖ϕ‖_{B_c(G)} again as the infimum of ‖ξ‖ ‖η‖ over all possible such decompositions of ϕ. If c = 1, we recover the unitary case so that B_1(G) is identical to B(G). Now consider a function f in A = ℓ_1(G). Clearly we have (3.3). More precisely, we have the following well known fact.

Proof. By (3.3) the unit ball of B_c(G) (which is convex) is weak-* dense in the unit ball of (Ã_c)*. Hence for any ϕ in the unit ball of (Ã_c)* there is a net ϕ_i in the unit ball of B_c(G) which tends pointwise to ϕ. Then, by a standard ultraproduct argument, one can check that ϕ itself is in the unit ball of B_c(G).
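For orientation, the displays (3.1) and (3.3) referred to above presumably take the following form; they are written out here as a sketch inferred from the definitions, not quoted.
\[
(3.1)\quad \varphi(t)=\langle \pi(t)\xi,\eta\rangle \ \ (t\in G),
\qquad
\|\varphi\|_{B_c(G)}=\inf\{\|\xi\|\,\|\eta\|\},
\]
\[
(3.3)\quad \|f\|_{\tilde A_c}\;=\;\sup\Bigl\{\Bigl|\sum_{t\in G}f(t)\varphi(t)\Bigr| \;:\; \varphi\in B_c(G),\ \|\varphi\|_{B_c(G)}\le 1\Bigr\}
\qquad (f\in\ell_1(G)),
\]
and the "well known fact" that follows is presumably the isometric identification of $(\tilde A_c)^*$ with $B_c(G)$.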
We will also need the space of Herz-Schur multipliers on G which we denote by M 0 (G), we refer to [DCH, BF1-2, Bo1, H3, P2] for more information. We recall that a function ϕ: G → | C is in the space M 0 (G) iff there are bounded Hilbert space valued functions x: G → H and y: G → H such that ∀ s, t ∈ G ϕ(s −1 t) = x(t), y(s) .
Moreover, we denote where the infimum runs over all possible factorizations of ϕ. For the reader's convenience, we will now reformulate explicitly the meaning of the constants introduced in the previous section. Let c ≥ 1. Consider a bounded representation π: G → B(H) with |π| ≤ c. Assume that π is unitarizable then we denote Sim(π) = inf{ S −1 S } where the infimum runs over all invertible operators S: H → H such that t → S −1 π(t)S is a unitary representation. Then we set It will be convenient for our discussion to introduce also Note that by (2.2) ′ the inclusionà 1 = C * (G) →à c has norm ≤ Φ G (c), hence we have by Proposition 3.1 Moreover, again by Proposition 3.1 Theorem 3.2. The following properties of a discrete group G are equivalent: (iii) There is α < 3 and a constant K such that for all c ≥ 1 Φ G (c) ≤ Kc α .
Hence it remains only to prove (v) ⇒ (i). Assume (v). By Theorem 2.6 with X = | C, the restriction of π 1 to E 2 is a surjection from E 2 ontoà 1 = C * (G). Equivalently, this means that the adjoint map w 2 : B(G) → (E 2 ) * is an isomorphic embedding, so that for some δ > 0 we have Now assume ϕ finitely supported. We have where the supremum runs over all α = s,t∈G α(s, t)δ s · δ t in the unit ball of E 2 . By Proposition 1.10, the space E 2 can be naturally identified with ℓ 1 (G) ⊗ h ℓ 1 (G), so that for α as above .
Hence we find (3.8), where e_t ∈ ℓ_1(G)* = ℓ_∞(G) is biorthogonal to δ_t, i.e. e_t(δ_s) = 1 if t = s and 0 otherwise. But now it is well known (cf. [DCH] or [P1]) that the right side of (3.8) is equal to ‖ϕ‖_{M_0(G)}. Hence we deduce from (3.7) that δ ‖ϕ‖_{B(G)} ≤ ‖ϕ‖_{M_0(G)} for all finitely supported ϕ. By a result due to Bożejko [Bo2], this implies that G is amenable, whence (i).
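For reference, the Herz–Schur multiplier norm invoked in the last step (its defining infimum was described in words when M_0(G) was introduced above) can be sketched as
\[
\|\varphi\|_{M_0(G)} \;=\; \inf\Bigl\{\, \sup_{t\in G}\|x(t)\|\ \sup_{s\in G}\|y(s)\| \,\Bigr\},
\]
the infimum running over all bounded Hilbert-space valued functions $x, y$ on $G$ with $\varphi(s^{-1}t)=\langle x(t),y(s)\rangle$ for all $s,t$.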
Corollary 3.3. If G is not amenable, then for any c > 1 there is a representation π_c: G → B(H_c) with |π_c| ≤ c such that c^3 ≤ inf{‖S‖ ‖S^{−1}‖}, where the infimum runs over all the similarities S such that S^{−1}π_c(·)S is a unitary representation.
Proof. By the preceding statement, we know that Φ G (c) ≥ c 3 for all c > 1. If Φ G (c) = ∞, we clearly have the conclusion. Otherwiseà 1 andà c are isomorphic. Then we representà c as a subalgebra of some B(H), sayà c ⊂ B(H c ) and we define π c to be the representation on G associated to the restriction to E = ℓ 1 (G) of the canonical morphismà 1 →à c . By (2.2)' we have à 1 →à c cb ≥ c 3 , hence by Proposition 1.8, this representation has the desired property.
The next result recapitulates what we know from §2.
Theorem 3.4. Assume that every uniformly bounded group representation on G is unitarizable.
(i) Then the function Φ G (c) defined in (3.5) is finite for all c ≥ 1. Moreover, let d(G) be the smallest α > 0 such that Φ G (c) ∈ O(c α ) when α → ∞. Then d(G) is an integer. We call it the similarity degree of G. all c ≥ 1 and d(G) is the largest integer with this property. (iii) The degree d(G) is the smallest integer N such that the natural "product" mapping which takes δ t 1 ⊗ · · · ⊗ δ t N to δ t 1 t 2 ...t N is a complete surjection onto C * (G).
Proof. The first part follows from Theorem 2.5 and especially from (2.4). By Lemma 2.3, if Φ G (c) < c α for some α < d(G) then Φ(c) ∈ O(c α ). Therefore we must have Φ G (c) ≥ c α for all c ≥ 1 and α < d(G), whence the second part. Finally, the third part follows from Theorem 2.5 and Proposition 1.10, which tell Remark 3.5. With the preceding notation, Theorem 3.2 says that d(G) ≤ 2 iff G is amenable.
Remark 3.6. We now return to Example 1.1. Let G be a discrete group. Let A be the group algebra of G, i.e. A = ℓ_1(G) equipped with convolution. Let Γ ⊂ G be a set of generators for G and let E = ℓ_1(Γ), equipped again with its natural (= maximal) operator space structure. Here again, we have Ã_1 = C*(G), but the similarity degree now depends very much on the choice of the generators. Let us denote by d = d(Γ, G) the similarity degree for this setting, according to Corollary 2.7. Then, by Theorem 2.5, the product map from the d-fold Haagerup tensor product of ℓ_1(Γ) is a surjection onto C*(G). Then, the elements supported by [Γ]^d must be dense in C*(G), and a fortiori, say, in ℓ_2(G). This clearly implies (denoting by e the unit element of G) {e} ∪ ⋃_{j≤d} [Γ]^j = G. Therefore, every element of G can be written as a product of at most d elements of Γ. If we introduce the usual distance on G relative to (the Cayley graph of) Γ, this means that the diameter of G is at most d. This remark allows us to produce examples of similarity settings with arbitrarily large finite similarity degrees. Indeed, just consider G = ℤ^N, for some integer N, and take for Γ the subset formed of all elements with only one nonzero coordinate. Clearly, by the preceding remarks we have N ≤ d in this case.
In the converse direction, we claim that d ≤ 2N . Indeed, let π : G → B(H) be a representation such that sup t∈Γ π(t) ≤ c. Then clearly sup t∈G π(t) ≤ c N . Now, since G is amenable, this implies by Dixmier's theorem that Sim(π) ≤ c 2N . Hence, we have shown that N ≤ d ≤ 2N . More precisely, consider in C * (ZZ N ) ≃ C * (Z Z) ⊗ min · · · ⊗ min C * (ZZ) the subspace E = C 1 + · · · + C N with C i = 1 ⊗ · · · ⊗ 1 ⊗ C * (Z Z) ⊗ 1 ⊗ · · · ⊗ 1 where C * (Z Z) appears at the i-th place. We equip E again with its maximal operator space structure and we let A be the algebra generated by E in C * (Z Z N ). We will show that the degree of the "similarity setting" constituted of the inclusion E ⊂ A ⊂ C * (Z Z N ) is equal to 2N . Let π: C * (Z Z N ) → B(H) be a unital homomorphism such that π |E ≤ c. Then clearly sup t∈Z Z N π(t) ≤ c N , hence by Dixmier's Theorem π cb ≤ c 2N . On the other hand, since d(C * (Z Z)) = 2, there exists a unital homomorphism u: C * (ZZ) → B(H) with u ≤ c and u cb ≥ c 2 . Let π = u ⊗ u ⊗ · · · ⊗ u: C * (Z Z N ) → B(H) ⊗ min · · · ⊗ min B(H). Then π is a unital homomorphism and it is not hard to check that π |E ≤ 1 + 2N c.
On the other hand, we clearly have π cb ≥ (c 2 ) N = c 2N , hence this proves that the degree d of this similarity setting is exactly equal to 2N .
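A sketch, in LaTeX, of the two estimates used in the last remark for $G=\mathbb{Z}^N$ with $\Gamma$ the set of elements having a single nonzero coordinate (Dixmier's theorem enters exactly as in the text, since $G$ is amenable):
\[
\sup_{t\in\Gamma}\|\pi(t)\|\le c
\;\Longrightarrow\;
\sup_{t\in G}\|\pi(t)\|\le c^{N}
\;\Longrightarrow\;
\mathrm{Sim}(\pi)\le c^{2N},
\]
which gives $N \le d(\Gamma,\mathbb{Z}^N)\le 2N$, while the tensor-product construction with $u$ shows that the degree of the setting $E=C_1+\cdots+C_N\subset C^*(\mathbb{Z}^N)$ is exactly $2N$.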
Remark. In the setting described in Example 1.1, let G be a discrete amenable group, so thatà 1 = C * (G) = C * λ (G). We claim that the smallest constant K appearing in Proposition 2.10 (with d = 2) is actually equal to 1. Indeed, consider x in M n (C * λ (G)), with x < 1. Since G is amenable, its Fourier algebra A(G) has an approximate unit in its unit ball. Hence, we may assume (by density) that x is of the following form x = t∈G y(t) ⊗ λ(t)ϕ(t) with y(t) ∈ M n and y = t∈G y(t) ⊗ λ(t) such that y M n (C * λ (G)) < 1 and ϕ of the form ϕ(t) = λ(t)ξ, η with ξ η < 1 where ξ(·), η(·), y(·) and ϕ(·) are all finitely supported. Equivalently we have Then a simple computation shows that we can write (3.10) where A 1 , A 2 , A 3 are rectangular scalar matrices and where D 1 , D 2 are diagonal with entries in A of the form λ(t) for some t in G. Moreover, we have Explicitly, we can take A 3 ((θ, ℓ), j) = ξ(θ)δ ℓj and the diagonal matrices defined by Note that we can restrict the sums in (3.9) to be over the finite subsets of G where ξ and η are supported, so that we indeed obtain finite matrices in (3.10), and (3.11) is easy to check. Thus the decomposition (3.10) clearly implies our claim that Proposition 2.10 holds with d = 2 and K = 1. §4. Operator algebras We now come to the main application of our results. Let A be a unital operator algebra. Let E = max (A) in the sense of [BP1] (see (0.4) above). The operator space E is equal to A as a Banach space, but its operator space structure is characterized by the property that, for any linear map u: E → B(H), we have Here, of course we take A = A, and we let i: E → A be the identity of A. Of course, we haveà 1 = A isometrically (but perhaps not completely so). Then we denote by d(A) the similarity degree of this setting (i, E, A). Note that, by definition, d(A) ≤ α iff there is a constant K such that, for any bounded unital homomorphism u: A → B(H), there is an invertible S for which a → S −1 u(a)S is contractive and such that S S −1 ≤ K u α . It is easy to check that for any closed two sided ideal I ⊂ A, the quotient space A/I (which, by [BRS], is an operator algebra) satisfies
d(A/I) ≤ d(A).
Moreover, if B is another unital operator algebra and if A ⊕ B denotes the direct sum (equipped with the norm (x, y) = max{ x , y } and the obvious "block diagonal" operator algebra structure) then we have Now assume that every unital contractive homomorphism u: A → B(H) is completely bounded. Then clearly A ≃Ã 1 completely isomorphically, and there is a constant K such that u cb ≤ K for all unital contractive homomorphisms u: A → B(H). This implies that, if we define Φ A (c) = sup{ u cb } where the supremum runs over all unital homomorphisms u: A → B(H) with u ≤ c, then in the present setting we have Thus, to recapitulate, we obtain the following two statements (note that the equivalence between (a) and (b) below is due to Paulsen [Pa4]). Proof. For the equivalence between (a) and (b) (due to Paulsen [Pa4]), see the above Proposition 1.8. If (a) or (b) holds, then in the present setting we have A ≃Ã 1 completely isomorphically and (4.2) holds. Thus (b) ⇒ (c) follows from Corollary 2.4, and the converse is obvious.
Theorem 4.2. For any fixed integer d ≥ 0, the following properties of a unital operator algebra A are equivalent: (i) There is a constant K such that any unital homomorphism u: A → B(H) satisfies u cb ≤ K u d . (ii) There is a number α with d ≤ α < d + 1 for which there exists a constant K such that any unital homomorphism u: onto A is a complete quotient map, i.e. it induces a complete isomorphism from the quotient space max(A) ⊗ h · · · ⊗ h max(A)/ker(T d ) onto A. (iv) There is a constant K such that any bounded linear map u: There is a constant K such that the following holds: assume that a linear map u: then we have (vi) There is a constant K such that the following holds: for all n, any element x in M n (A) with x M n (A) < 1 can be written, for some integer N , as a matricial product of the form where α 0 ∈ M nN , α 1 ∈ M N ,..., α d−1 ∈ M N , α d ∈ M Nn are scalar matrices ( i.e. α 0 and α d are rectangular of size n × N and N × n, and the others are square matrices of size N × N ), and D 1 , . . . , D d are N × N diagonal matrices with entries in A, and finally we have (vii) There is a constant K such that any x in the unit ball of K ⊗ min A can be written as a product of the form (recall that C 0 ⊂ K denotes the subspace of diagonal operators) This implies, by (0.4), that any a with a M n (à 1 ) < n −2 can be written as a = αDβ as in (0.4). Thus, by Proposition 2.10, any x with x M n (A) < 1 can be written as a sum x = a + y with y = α 1 D 1 α 2 D 2 . . . D d α d+1 having factors α 1 , D 1 , α 2 , D 2 , . . . , D d , α d+1 as in (vi), and with a = αDβ as above. Now, by adding redundant factors equal to the unit, we can assume that a is of the same form as y, say a = α ′ 1 D ′ 1 α ′ 2 D 2 . . . D ′ d α ′ d+1 , and then changing N to 2N (and K to K + 1), it is easy to rewrite the sum x = a + y as a single product as in (vi). This shows that (v) implies (vi) and concludes the proof. Proof. The first assertion is clear. The second one follows from (2.2)', (2.12) and the obvious fact that, for any unital operator algebra A, in the present setting the natural inclusion ofà 1 into A is completely contractive. Remark 4.5. In the present setting (i, E, A) as defined in the beginning of §4, the similarity property holds iff we have: (SP) Every bounded unital homomorphism u: A → B(H) is similar to a contractive one.
Clearly, by §2, this holds iff in this setting the degree is finite. Up to now, in this section, we have concentrated on algebras A which satisfy (4.2). Nevertheless, the above property (SP) could be of interest even if the right side of (4.2) fails. Note however that, if we replace A byà 1 , then we return to the situation discussed in Theorem 4.2. More precisely, the setting being still the same as throughout this section, we have and A satisfies (SP) iff every bounded unital homomorphism u: Note that here A andà 1 are isometric, but perhaps not completely isomorphic.
Remark 4.6. Fix n ≥ 1. Let U be the unitary group in M n with normalized Haar measure m. It is not hard to show that the mapping is completely contractive from M n to max(M n ) ⊗ h max(M n ). Thus, in the case A = M n , the surjection appearing in Theorem 4.2 (iii) (with d = 2 here) actually admits a completely contractive lifting. Consequently, when A = K and d = 2, the constant K appearing in (iii),(iv) (v) or (vi) in Theorem 4.2 is actually equal to 1. Probably a more general result holds in the context of "normal virtual diagonals" in the sense of [E].
Remark. It is probably possible to develop the theory of the "similarity degree" in the category of dual operator algebras, replacing the Haagerup tensor product by the dual variant considered in [BS] and restricting attention to weak- * continuous homomorphisms, but we have not pursued this yet. (Note added may 97: this program has now been successfully carried out by C. Le Merdy.) Remark 4.7. Recently, Kirchberg [Ki] showed that a unital C * -algebra A has the similarity property (in other words d(A) < ∞) iff every derivation δ: A → B(H), relative to an arbitrary * -representation π: A → B(H) (we will call such a derivation a π-derivation) is inner. Equivalently, we have d(A) < ∞ iff there is a constant K such that any such derivation δ satisfies More precisely, a simple adaptation of Kirchberg's argument shows that (4.3) implies Here is a brief sketch: we follow the presentation of Kirchberg's argument in [P1, p. 129]. Let π: A → B(H) be a unital * -representation and let S: H → H be self-adjoint and invertible. Let π S (x) = S −1 π(x)S and let δ(x) = Log(S)π(x) − π(x)Log(S) (x ∈ A). We assume π S = c and π S cb = S −1 S . Fix a unitary in A. Consider the entire function f (z) = S z π(a)S −z . We have f (z) ≤ c if Re(z) = 1 and f (z) = 1 if Re(z) = 0. Hence by log-subharmonicity, we have f (z) ≤ c θ if Re(z) = θ, 0 < θ < 1. Since f (0) = π(a) is unitary, we have f (θ) = 1 + θf ′ (0)π(a) −1 + o(θ) ≤ 1 + θLog(c) + o(θ) when θ > 0 tends to zero. Therefore the Hermitian operator T = f ′ (0)π(a) −1 = Log(S) − π(a)Log(S)π(a) −1 satisfies, for any h in the unit sphere of H T h, h ≤ Log c.
Applying this last estimate with a replaced by its inverse, we obtain for any h in the unit sphere of H − T h, h ≤ Log c.
Consequently ‖T‖ ≤ Log(c). Hence, we find ‖δ(a)‖ = ‖f′(0)‖ = ‖T‖ ≤ Log c, so that ‖δ‖ ≤ Log c, whence by (4.3) ‖δ‖_cb ≤ K‖δ‖ ≤ K Log c. By following Kirchberg's argument as presented in [P1, p. 130], we then conclude that Log ‖π_S‖_cb ≤ ‖δ‖_cb ≤ K Log c, hence finally ‖π_S‖_cb ≤ ‖π_S‖^K. Let A be a unital operator algebra and let K(A) be the smallest constant K such that ‖δ‖_cb ≤ K‖δ‖ for any completely contractive unital homomorphism π: A → B(H) and any π-derivation δ: A → B(H). Curiously, in almost all of E. Christensen's works in the C*-case, the upper estimates which appear for K(A) are all natural integers (cf. [C1-4]). On the other hand, note that if A satisfies (iv)-(vi) in Theorem 4.2, then we have K(A) ≤ Kd. This suggests various questions which we could not answer (we ask this for C*-algebras only, but the questions make sense in general):

Problem 4.8. Is any of the best constants K appearing in the conditions (i), or (iv)-(vi) (from Theorem 4.2) automatically equal to 1?

Problem 4.9. Is K(A) always an integer when it is finite?

§5. Uniform algebras

A uniform algebra is a closed unital subalgebra A of a commutative unital C*-algebra C, such that A generates C as a C*-algebra. Equivalently, we can view A as a unital subalgebra of the algebra C(T) of all continuous complex functions on some compact set T, which separates the points of T. We say that A is proper if A ≠ C(T). A typical example is the disc algebra A(D) formed of all continuous complex valued functions on ∂D which extend continuously and analytically inside D. Equivalently, A(D) ⊂ C(∂D) can be viewed as the closure in C(∂D) of the space of all polynomials.
Recently, we produced the first example of a bounded unital homomorphism on A(D) which is not c.b. (cf. [P5]). It is possible that every proper uniform algebra admits such homomorphisms and has infinite degree (note that the extension to other domains of | C n such as the polydisc or the ball is trivial). However, at the time of this writing, the only general result we have in this direction is the following one.
Theorem 5.1. Let A be a uniform algebra such that any contractive unital homomorphism …

Remark 5.2. Equivalently (by [Sh]), d(A) ≤ 2 iff A is an amenable Banach algebra in the sense of e.g. [Pi2]. Compare with Remark 3.5. Note that there seems to be no known example of an amenable operator algebra which is not a C*-algebra (see [CL]).
The proof is based on the following two results. To state the first one, it will be convenient to work with a slightly unconventional version of the space H ∞ , which we now introduce. Let T = ∂D. Consider Ω = T I with I = {1, 2, 3, . . . }. Let (z i ) i≥1 denote the coordinates on Ω and let A n be the σ-algebra generated by (z 1 , . . . , z n ) with A 0 the trivial σ-algebra. Let m be the usual probability measure on T I (= normalized Haar measure). Every m-integrable function f : Ω → | C defines a martingale (f n ) n by setting f n = IE(f | A n ). A martingale (f n ) n , relative to the filtration (A n ), is called "Hardy" if for each n ≥ 1 the function f n depends analytically on z n (but arbitrarily on z 1 , . . . , z n−1 ). We denote by H ∞ m the subspace of L ∞ (Ω, m) formed by all f which generate a Hardy martingale. In Harmonic Analysis terms, the space H ∞ m is indeed the version of H ∞ associated to the ordered group Z Z (I) (formed of all the finitely supported families n = (n i ) i∈I with n i ∈ ZZ), ordered by the lexicographic order, i.e. the order defined by setting n ′ < n ′′ iff the last differing coordinate (="letter" with reversed alphabetical order) satisfies n ′ i < n ′′ i . As explained e.g. in [Ru,Chapter 8], this group has a "linear " behaviour and the associated H p spaces on it behave like the classical (unidimensional) ones. More generally, for any Banach space X, we will denote by H p m (X) (1 ≤ p ≤ ∞) the usual H p -space of X-valued functions on the group Ω (with ordered dual ZZ (I) ), in Bochner's sense.
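A sketch of the Hardy-martingale condition just described; the expansion below spells out "depends analytically on $z_n$" and is an assumption consistent with the text.
\[
f_n=\mathbb{E}(f\mid\mathcal{A}_n),
\qquad
f_n(z_1,\dots,z_n)=\sum_{k\ge 0}a_k(z_1,\dots,z_{n-1})\,z_n^{\,k},
\]
so that $f_{n-1}$ is the term with $k=0$ and each increment $df_n=f_n-f_{n-1}$ has only positive frequencies in the last variable.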
Lemma 5.3. Let I be any set, and let X = (ℓ 1 (I) ⊗ h ℓ 1 (I)) * . Then there is a constant C such that for any Hardy martingale (f n ) in H ∞ m (X) we have Proof. We follow [P2, §4]. First observe that it suffices to prove this when I is a finite set. Indeed, if we know (5.1) for all finite sets then we can obtain it for an arbitrary set I by taking the supremum of each side over all finite subsets I ⊂ I. Let us assume that I is finite, so that X = ℓ ∞ (I) ⊗ h ℓ ∞ (I).
It clearly suffices to prove that for all functions f in the open unit ball of H ∞ m (X) we have N 0 z n df n dm ⊗ e n X⊗ min max (ℓ 2 ) ≤ C for some absolute constant C independent of N and f . Let f ∈ H ∞ m (X) with f H ∞ m (X) < 1. By invoking [P2,Theorem 4.2 and Remark 4.4], it follows that we can write for n = 0, . . . , N for some absolute constant C. Here we denoted by ℓ ∞ (I, ℓ 2 ) the Banach space of all bounded ℓ 2 -valued functions on I equipped with its natural norm. In addition, we denoted by ⊗ the projective tensor product and we made the obvious identifications of ℓ ∞ (I) ⊗ ℓ ∞ (I) ⊗ ℓ 2 with a subset respectively of ℓ ∞ (I, ℓ 2 ) ⊗ ℓ ∞ (I) and ℓ ∞ (I) ⊗ ℓ ∞ (I, ℓ 2 ). By a simple argument, one can check that . Therefore, we conclude from (5.2) and (5.3) that N 0 z n df n dm ⊗ e n X⊗ min max (ℓ 2 ) ≤ C.
Remark. It is possible to complete the proof without appealing to the projective tensor product, remaining in the framework of the Haagerup tensor product, but this option would unnecessarily lengthen the argument.
In [Kis], S. Kisliakov proved the remarkable fact that, if A ⊂ C(T ) is any proper uniform algebra, there is no bounded linear projection from C(T ) onto A. In [Ga], Garling extended Kisliakov's result. In particular, the following result is implicit in Garling's paper, but is proved there (see the proof of Theorem 2 in [Ga]).
Lemma 5.4. Let A be any proper uniform algebra. Then for some β > 0 there is, for each integer n, a Hardy martingale f 1 , . . . , f n with values in the unit ball of A * but such that z k df k A * ≥ β for k = 1, 2, . . . , n.
Proof of Theorem 5.1. Let A ⊂ C(T) be a subspace with the induced operator space structure. By a joint result due independently to Junge and to Paulsen and the author (cf. [Pa6, Theorem 4.1]) there is a constant C′ such that for any sequence (ξ_n) in A* the estimate (5.4) holds. By Theorem 4.1, if d(A) ≤ 2, then A is a quotient (as an operator space) of max(A) ⊗_h max(A), or a fortiori of ℓ_1(I) ⊗_h ℓ_1(I) for some index set I. Therefore, there is a subspace Y ⊂ X and a complete isomorphism w: A* → Y with ‖w‖_cb ≤ 1. By Lemma 5.3 and by (5.4), this implies the estimate (5.5) for all Hardy martingales, with C′′ = CC′‖w^{−1}‖_cb. Finally, by Lemma 5.4, A cannot be proper (since (5.5) would imply β√n ≤ C′′ for all n).
Remark 5.5. The preceding proof establishes more than claimed in Theorem 5.1. Indeed, we conclude that if A is proper then the operator space A * is not completely isomorphic to any subspace of (ℓ 1 (I)⊗ h ℓ 1 (I)) * for any set I. Stated in that form the result cannot be improved much. Indeed, it can be shown that if A is an arbitrary operator space, then for a suitable set I, A * embeds completely isometrically into (ℓ 1 (I) ⊗ h ℓ 1 (I) ⊗ h ℓ 1 (I) ⊗ h ℓ 1 (I)) * . Indeed, let Note that for any operator space A, the space max (A) is completely isometric to a quotient space of ℓ 1 (I) for some set I (cf. [BP1]). Then since K is nuclear, we have d(K) = 2, so that K is completely isometric to a quotient space of X 2 (I) for some suitable countable set I (see Remark 4.6). Therefore, K ⊗ h K is completely isometric to a quotient space of X 2 (I) ⊗ h X 2 (I) = X 4 (I). Since R and C are completely isometric to quotients of K, it follows that S 1 = R ⊗ h C is completely isometric to a quotient of K ⊗ h K. Finally, since every separable operator space is completely isometric to a quotient of S 1 (cf. [B2, p. 24]), we conclude that every (resp. separable) operator space is completely isometric to a quotient of X 4 (I) for some set I (resp. countable). The modification for the non-separable case is immediate.
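The chain of reductions in Remark 5.5 can be summarized as follows, writing $X_k(I)=\ell_1(I)\otimes_h\cdots\otimes_h\ell_1(I)$ ($k$ factors), which is how the notation $X_2(I)$, $X_4(I)$ used there is read here (an assumption); all arrows denote complete quotient maps.
\[
\ell_1(I)\twoheadrightarrow \max(A),\qquad
X_2(I)\twoheadrightarrow K,\qquad
X_4(I)=X_2(I)\otimes_h X_2(I)\twoheadrightarrow K\otimes_h K \twoheadrightarrow R\otimes_h C=S_1,
\]
and since every separable operator space is a complete quotient of $S_1$, it is a complete quotient of $X_4(I)$ for a suitable countable $I$.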
§6. C*-algebras

Let A be a unital C*-algebra. The "setting" used in this section is the same as in §4, i.e. E = max(A) and A = A. Note that in the C*-case, we have Ã_1 = A completely isometrically, and (4.2) becomes Φ_A(c) = Φ(i, c) for all c > 1. It is known that any nuclear C*-algebra satisfies d(A) ≤ 2 (cf. Bunce [Bu] and Christensen [C3]). In this section we study the converse. Essentially, we show that if A admits sufficiently many type II representations, then indeed the converse holds. More precisely we will prove the following.
Theorem 6.1. Let A be a C * -algebra such that d(A) ≤ 2. Then, for any representation π: A → B(H) such that π(A) generates a semi-finite von Neumann subalgebra of B(H), the bi-commutant π(A) ′′ is injective.
Proof. Indeed, on one hand we know by Haagerup's result ([H1,Prop. 1.8]) that d(B(H)) ≤ 3. On the other hand, by Joel Anderson's results in [A], there is a type II ∞ representation π: B(H) → B(H) such that π(B(H)) ′′ ∼ = M ⊗ B(H) where M is a II 1 factor containing a non trivial ultraproduct of matrix spaces. By Wassermann's result [W], we know that the latter is not injective, so π(B(H)) ′′ is not injective. Thus, Theorem 6.1 implies in particular that d(B(H)) ≥ 3. (I am grateful to Simon Wassermann for kindly directing me to Anderson's result and explaining to me its consequences.) Remark. By [C4], for any II 1 -factor M with property Γ we have d(M ) ≤ 44. Since these cannot be nuclear ( [W]), the preceding result ensures that 3 ≤ d(M ) ≤ 44. It would of course be interesting to reduce the interval of possible values of d(M ).
The proof uses the following results. The first lemma is a simple variant of a result from [JP].
Lemma 6.3. Let A be any C * -algebra. Then for any n and any ξ 1 , . . . , ξ n ∈ A * we have .
Proof. (The proof combines observations made independently by M. Junge [J] and the author.) Let u: A → max(ℓ n 2 ) be the map defined by u(a) = n 1 ξ i (a)e i . Let E be a finite dimensional operator space. We use the same notation as in [JP], i.e. we denote d SK (E) = inf{ v cb v −1 cb } where the infimum runs over all possible isomorphisms v: E → E between E and a subspace of the C * −algebra of all compact operators on ℓ 2 , which we have denoted above by K. Let a 1 , . . . , a n be a finite subset of A and let E ⊂ A be their linear span. Then the mapping u |E : E → max(ℓ n 2 ) factors through A completely boundedly with a corresponding constant ≤ u cb . Fix ǫ > 0. By Lemma 6.2.11 in [P3] this implies that u |E can be written as a composition u |E = u 2 u 1 with u 1 : E → E and u 2 : E → max ℓ n 2 such that u 1 cb = 1, d SK ( E) = 1 and u 2 cb ≤ u cb (1 + ǫ). By the main result in [JP], this implies Hence, since ǫ > 0 is arbitrary, and since , taking the supremum over all possible n-tuples (a i ) i≤n in A, we obtain (6.1).
Lemma 6.4. Let (e i ) be the canonical basis of the operator space max(ℓ 2 ). Let H be any Hilbert space and let X be either B( | C, H) or B(H * , | C), or equivalently let X be either the column Hilbert space or the row Hilbert space. Then for all x 1 , . . . , x n in X we have Proof. Assume X = B( | C, H) or B(H * , | C). We identify X with H as a vector space. Let (δ m ) be an orthonormal basis in H. Observe that for any finite sequence a m in B(ℓ 2 ) we have in both cases whence we have, for any x 1 , . . . , x n in X, Proof of Theorem 6.1. Recall that, since π(A) is a quotient C * -algebra of A, we have obviously d(π(A)) ≤ d (A). Hence it suffices to prove the statement with π(A) in the place of A. More precisely, we assume given A ⊂ B(H) such that M = A ′′ admits a faithful semi-finite normal trace denoted by τ and we must show that d(A) ≤ 2 implies that M is injective. First we can reduce to the finite case: indeed it suffices to show that, for any projection p in M and 0 < τ (p) < ∞, the algebra pM p is injective. Then, by a result due to Connes for factors and to Haagerup [H2] in the general case, pM p is injective iff there is a constant C such that for any central projection q = 0 in pM p, for any n and any n-tuple u 1 , . . . , u n of unitaries in pM p, we have Fix p, q and u 1 , . . . , u n unitary in pM p as above. We will show that (6.2) holds.
Let ξ i ∈ A * be the functional defined by Consider the mapping u: A → max(ℓ n 2 ) defined by ∀a ∈ A u(a) = and Taking this for granted for the moment, let us now complete the argument. By Theorem 4.2 our assumption d(A) ≤ 2 implies that there is a constant K such that the product mapping P : max(A) ⊗ h max(A) → A satisfies for all maps u: A → B(H) u cb ≤ K uP cb .
Note that ifû: max(A) × max(A) → B(H) is the bilinear form associated to uP , then by PaS] we know that uP cb = û cb and sinceû = ϕ 1 + ϕ 2 we obtain u cb ≤ K[ û cb ] ≤ K( ϕ 1 cb + ϕ 2 cb ) but by the specific factorization of ϕ 1 and ϕ 2 given above we have ϕ i cb ≤ α i β i whence Equivalently we have ≤ 2τ (q)tK.
By (6.1) this implies But on the other hand and finally n ≤ 64K 2 t 2 . Thus we obtain (6.2).
This completes the proof modulo the claim. We now turn to the latter claim. Let L = L 2 (M, τ ). We denote by x → r(x) ∈ B(L, | C) the canonical identification. Note that r(x)r(y) * ∈ B( | C, | C) can be identified with x, y . With this identification, we have for all a, b in A ϕ 1 (a, b) = n 1 r(a i j(a))r(j(b * )) * e i = n 1 r(a i j(a)) ⊗ e i • (r(j(b * )) * ⊗ I).
Corollary 6.5. Let G be a discrete group and let A be either C * (G) or the reduced C * -algebra C * λ (G). Then d(A) ≤ 2 iff G is amenable. Remark. Note however, that the equivalence with (v) in Theorem 3.2 concerning the spaces of coefficients does not follow from this new approach.
Corollary 6.6. Let A be a C * -algebra which generates a non-injective semi-finite von Neumann algebra. Then for any c > 1, there is a unital homomorphism u c : A → B(H) with u c ≤ c and u c cb ≥ c 3 . Remark 6.7. A unital C * -algebra A satisfies the similarity property ( i.e. d(A) < ∞) as soon as Φ A (c) < ∞ for some c > 1. Indeed, this follows from Lemma 2.3 and the remark preceding it.
Remark. The following result proved in [H1] and [C3] plays an important rôle in these papers: Let u: A 1 → A 2 be a bounded homomorphism between C * -algebras. Then for any finite subset (x i ) in A 1 we have The next result shows that the exponent 2 cannot be improved in this result.
Proposition 6.8. Suppose that a number α ≥ 1 has the following property: there is a constant K such that for any bounded homomorphism and for any finite subset x 1 , . . . , x n in B(H) we have Then necessarily α ≥ 2.
Proof. Our assumption can be written as follows: for any c ≥ 1 and any unital homomorphism u with ‖u‖ ≤ c, we have a corresponding estimate for any n and any x_1, . . . , x_n in B(H). In other words, the subspace X spanned in K by the sequence (e_{i1}) (i = 1, 2, ...) satisfies the assumption (2.11) in Theorem 2.6. Assume α < 2. Then, by Theorem 2.6, (2.11) actually holds for α = 1 (for some K). Thus, if α < 2 we may as well assume α = 1. But then Haagerup's argument in [H1] (or the proof presented in [P1, chapter 7]) will lead to d(B(H)) ≤ 2, which contradicts Corollary 6.2. Thus we must have α ≥ 2.

§7. The Blecher-Paulsen factorization

In this section, we connect our description of the enveloping algebra Ã_1 with some ideas of Blecher and Paulsen in [BP2]. We take a slightly more general viewpoint than them in order to cover the situation of a group (or an algebra) generated by a subset, but the main idea is in [BP2]. We consider our usual "setting" (i, E, A), where E is an operator space, A a unital operator algebra (not assumed complete), and i: E → A is a completely contractive linear injection with range generating A. But in addition we will assume throughout this section that E is "unital", by which we mean that E contains a norm one element e such that i(e) = 1_A.
Consider again the algebra Ã_1 as defined above, with unital embeddings E ⊂ A ⊂ Ã_1. It will be convenient to consider E as "included" in A and to view i as an inclusion map. The reader should be warned however that i will generally not be assumed completely isometric: in general the operator space structure on A only plays an auxiliary rôle. What really matters here is the given operator space structure on E and the resulting operator algebra structure on Ã_1, which appears as "generated" by E.
Theorem 7.1. With the above notation, let n be a positive integer. Then the following properties of an element x in M n (A) are equivalent: (i) x M n (Ã 1 ) < 1. (ii) The matrix x can be written, for some integer N and some integer d, as a matricial product of the form where α 0 ∈ M nN , α 1 ∈ M N ,..., α d−1 ∈ M N , α d ∈ M Nn are scalar matrices ( i.e. α 0 and α d are rectangular of size n × N and N × n, and the others are square matrices of size N × N ), and D 1 , . . . , D d are N × N matrices with entries in E, and finally we have Proof. The proof follows from an immediate adaptation of an argument in [BP2]. We merely sketch it. It is clear that (ii) implies (i). Conversely assume (i). This means that there is a number θ < 1, such that for any contractive unital homomorphism u : A → B(H), we have (E) and any N . Hence, returning to the particular n and x appearing in (i), by (7.1) we must have x (n) ≤ θ. Equivalently, we obtain (ii).
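A sketch of the factorization asserted in (ii); the closing norm condition is written here in the standard Blecher–Paulsen form, which is an assumption consistent with the proof sketched above.
\[
x \;=\; \alpha_0\,D_1\,\alpha_1\,D_2\cdots D_d\,\alpha_d,
\qquad
\|\alpha_0\|\,\|\alpha_1\|\cdots\|\alpha_d\|\;\|D_1\|_{M_N(E)}\cdots\|D_d\|_{M_N(E)}\;<\;1 .
\]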
Remark 7.2. Recall that when A is a C * -algebra (resp. A = C * (G)) and E = max(A) (resp. E = ℓ 1 (G)) as in §5 (resp. §3), thenà 1 = A completely isometrically. When the latter holds, Theorem 7.1 gives a characterization of the elements of the unit ball of M n (A). Note also that when E = max (E), the elements of M N (E) admit a specific factorization (a kind of diagonalization) described above in (0.4).
As application, we have the following apparently new characterization of the coefficients of unitary representations of a group G, i.e. of the elements of the space B(G), as follows. (Take H unidimensional in the next statement, then (i) below is the same as saying that the norm of f in B(G) is ≤ K.) Corollary 7.3. Let G be any discrete group, and let Γ ⊂ G be a subset containing the unit element and generating G in the sense that every element of G can be written as a product of elements of Γ. Let K ≥ 0 be a fixed constant. The following properties of a function f : G → B(H) are equivalent: (i) There are a unitary representation π: G → B(H π ) and operators ξ: H π → H and η: H → H π such that f (t) = ξπ(t)η for any t in G and ξ η ≤ K. (ii) For each N ≥ 1, the function f N : Γ N → B(H) defined by f N (t 1 , . . . , t N ) = f (t 1 t 2 . . . t N ) extends (with the obvious identification) to an element of cb(ℓ 1 (Γ) ⊗ h · · · ⊗ h ℓ 1 (Γ), B(H)) (where the tensor product is N -fold) with norm ≤ K.
But now, (7.2) is but a reformulation of (ii), so that (i) is equivalent to (ii). Moreover, since (i) ⇒ (ii) is valid for any Γ, it holds when Γ = G, whence (ii) ⇒ (iii), and the converse is obvious. Finally, the equivalence between (ii) and (iv) follows from the well known factorization theorem of c.b. multilinear maps (cf. PaS]). §8. Banach algebras The general method of this paper can be applied in other situations when studying a Banach algebra A given together with a generating system, or a family of generating subalgebras. The rôle of the "degree" is then played by the minimal length of the products necessary to generate (in a suitable Banach algebraic sense) the unit ball (or some ball centered at the origin). One can also develop our approach for a general "variety of Banach algebras" (in the sense of [Dix]) instead of that of operator algebras. To illustrate briefly what we have in mind, take the variety of all Banach algebras, then our basic idea leads to: Theorem 8.1. Let A be a Banach algebra with unit ball B A . Consider a subset β ⊂ B A and assume that the algebra it generates, denoted by A, is dense in A. For n = 1, 2, ..., let us denote by β n the set of all products of n elements taken in β. Let d be a positive integer and let α be any number such that d ≤ α < d + 1. Consider the following properties: (i) α There is a constant K such that, for any Banach algebra B and for any homomorphism u : A → B, if u is bounded on β, u is continuous and we have u ≤ K sup x∈β u(x) α .
(ii) d There is a constant K ′ such that (here aconv stands for the absolutely convex hull)
43
(iii) d There is a constant K ′′ such that, for any Banach algebra B and for any continuous homomorphism u : A → B, we have Then (i) α ⇒ (ii) d . Moreover, if β contains a unit element for A, then conversely (ii) d ⇒ (iii) d .
Proof. (Sketch) In this proof, we will say "morphism" for homomorphism with values in a Banach algebra. We will follow the same strategy as in §1 and §2. Let u β = sup{ u(x) | x ∈ β}. Let c ≥ 1 and let C c be the set of all morphisms u : A → B u with u β ≤ c. We define the Banach algebraà c as the completion of A for the embedding J : A → ⊕ u∈C c B u defined by J(x) = ⊕ u∈C c u(x). Let F β be the free semi-group with free generators indexed by β. We will consider β as a subset of F β . Consider the space ℓ 1 (F β ), viewed as a Banach algebra for convolution. Let δ t (t ∈ F β ) denote the canonical basis and let B be the dense subalgebra linearly generated by δ t (t ∈ F β ), equipped with the induced norm. Let c ≥ 1 and z = 1/c. We have a unique morphism π z : B → A such that π z (δ x ) = zx for all x in β. It is easy to check that if u : A → B u is any morphism, then u β ≤ c iff uπ z ≤ 1. Moreover,à c can be identified with the completion of B/ ker(π z ), and (i) α means that The proof can then be completed by arguing as in Theorem 2.5. We leave the remaining details to the reader. | 2019-04-12T09:23:36.111Z | 1997-06-06T00:00:00.000 | {
"year": 1997,
"sha1": "0dcd67679148a6a0b1ed4588a7f88c51f12d9d5d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e315d5c93fa2c37ca09964cb8d8b1acadc5386b4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
37373814 | pes2o/s2orc | v3-fos-license | Xanthones from the green branch of Garcinia dulcis
Abstract Two new prenylated xanthones, namely dulcisxanthone H and dulcisxanthone I along with garciniaxanthone C, were isolated from the dichloromethane extract of the green branch of Garcinia dulcis. Their structures were elucidated by the analysis of 1-D and 2-D NMR spectral data. Their antibacterial activities were also examined. Two new prenylated xanthones named dulcisxanthone H and dulcisxanthone I were isolated from the green branch of Garcinia dulcis.
Introduction
The genus Garcinia (Clusiaceae) is a rich source of phenolic compounds that exhibit biological activities such as antioxidative, antibacterial, antifungal and cytotoxic activities (Chin et al. 2008; Ngoupayo et al. 2009; Chen et al. 2010; Feng et al. 2014; Fouotsa et al. 2014). Garcinia dulcis is an Asian plant used in traditional medicine against lymphatitis, parotitis, struma and other disease conditions (Kasahara & Henmi 1986). Some of the xanthones, flavonoids and biflavonoids of this plant species have shown antimalarial, antibacterial, antioxidative and antiandrogenic activities (Likhitwitayawuid et al. 1998; Deachathai et al. 2005; Shakui et al. 2014). We have previously reported the isolation of 12 new compounds and 64 known compounds from the fruits, flowers, seeds and leaves of the species collected in Thailand and tested them for antibacterial and antioxidative activities (Deachathai et al. 2005, 2006, 2008; Saelee et al. 2015). As a continuation of our search for new natural products in relation to biological activity, we now report the isolation and structural elucidation of three prenylated xanthones from the green branch of G. dulcis and the examination of their antibacterial activity.
Results and discussion
The dichloromethane extract of an air-dried green branch of G. dulcis was purified by chromatography along with partitioning with 5% NaOH and an organic solvent to give two new prenylated xanthones named dulcisxanthone H (1) and dulcisxanthone I (2), along with the known garciniaxanthone C (3) (Minami et al. 1994; Iinuma et al. 1996) (Figure 1). Their structures were elucidated by the analysis of spectroscopic data and comparison of the NMR spectral data with previous reports in the literature.
All the isolated xanthones and the crude dichloromethane extract were tested for their antibacterial activity against Staphylococcus aureus ATCC25923, methicillin-resistant S. aureus (MRSA) SK1, Escherichia coli ATCC25922 and Pseudomonas aeruginosa ATCC27853, with vancomycin and gentamicin as the standard drugs. They showed no antibacterial activity against these bacterial strains at the minimum inhibitory concentration (MIC) of 200 μg/mL, except for the crude extract, which displayed antibacterial activity with an MIC of 128 μg/mL.
General experimental procedures
Melting points were obtained on a Fisher-Johns Melting Point Apparatus. Infrared (IR) spectra were determined on a Perkin-Elmer 783 FTS 165 FT-IR spectrometer and were recorded as wave numbers (cm−1). Ultraviolet (UV) absorption spectra were determined in MeOH on a Shimadzu UV-160A spectrophotometer, and principal bands (λmax) were recorded as wavelengths (nm) and log ε in a methanol solution. Electrospray ionisation mass spectrometric (ESI-MS) data were recorded on Waters Alliance 2690 and Micromass LCT (United Kingdom) mass spectrometers at the Scientific Equipment Center, Prince of Songkla University. Nuclear magnetic resonance (NMR) spectra were measured on a Bruker FT-NMR Ultra Shield 300 spectrometer at the Department of Chemistry, Faculty of Science, Prince of Songkla University. 1H and 13C NMR spectra were measured at 300 and 75 MHz, respectively. Chemical shifts (δ) were recorded in parts per million (ppm) in CDCl3 and acetone-d6 containing TMS as an internal standard (δ 0.00), and the coupling constants (J) were expressed in hertz (Hz). Column chromatography (CC) was performed with silica gel 100 (70-230 Mesh ASTM, Merck), silica gel RP-18 (40-63 μm, Merck) and Sephadex LH-20 (Amersham Biosciences, Sweden). Preparative thin-layer chromatography (pTLC) and thin-layer chromatography (TLC) were performed on silica gel GF254 (20 × 20 cm with a layer thickness of 0.2 mm, Merck), and compounds were detected under UV (254 nm) fluorescence. All solvents were of spectroscopic grade or distilled from glass prior to use.
Plant material
A green branch of G. dulcis was collected from Nakhon Si Thammarat in the southern part of Thailand, in April 2013. The voucher specimen (Coll. No. 02, Herbarium No. 0012652) has been deposited at the Herbarium of the Biology Department, Faculty of Science, Prince of Songkla University, Thailand.
Antibacterial assay
This was carried out according to the previously reported procedure of Saelee et al. (2015).
Conclusion
Investigation of the chemical constituents of a dichloromethane extract of a G. dulcis green branch led to the isolation of three prenylated xanthones. Dulcisxanthone H and dulcisxanthone I are new naturally occurring compounds. Further investigation of antibacterially active compounds from this plant species is ongoing.
Supplementary material
Supplementary material relating to this article is available online, alongside Tables S1-S3 and Figures S1-S14. | 2018-04-03T04:01:45.364Z | 2016-04-07T00:00:00.000 | {
"year": 2016,
"sha1": "b7e007e5b5a9eaa2da8fd9e47ac8a519b9694c4b",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/Xanthones_from_the_green_branch_of_i_Garcinia_dulcis_i_/3160411/1/files/4922365.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "56f9b8cde27d0fbcfa7579b198add651b8e6da5b",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
223518628 | pes2o/s2orc | v3-fos-license | Spanish influenza score: Predictive power without giving up the classic
The present number of Medicina Intensiva publishes a study on a Spanish severe influenza registry that develops a predictive score of mortality in the Intensive Care Unit (ICU). 1 In the 1980s, intensive care became immersed in the understanding of reality and in the adoption of aids for decision making based on severity scores. Despite its particularities, the APACHE II remains valid for the assessment of severity in the critically ill. These scores were based on the accumulation of a large body of representative data and made use of logistic regression (LR) and multivariate analytical techniques to generate predictive models, with the use of beta-estimators to produce the individual scores. In relation to investigators and clinicians, the level of familiarity with the mathematical details needed to obtain the beta-estimators is sufficient, and there is a reasonable correlation between understanding and the odds ratios (ORs) and their corresponding confidence intervals----forming a methodological construct that is both comprehensible and interpretable. On the other hand, there has been an exponential growth in the use of big data analytical techniques through machine learning (ML), as can be seen from the number of literature references found in Medline. 3 However, one of the problems of ML is the difficulty of transferring the analyses to the clinical practice setting. 4 In contrast to the conventional statistical analytical techniques, the results of the published studies possess good mathematical indicators, but clinicians see only limited practical applicability in them. 5 This is due in part to the difficulty of understanding the mechanisms through which the results or outcomes are generated, and of using a large number of variables simultaneously. Such analyses are probably more concordant to the complex biological reality, but reduce the possibilities for adequate handling on the part of the healthcare professionals within the clinical practice setting.
In this regard, the article presented in this number of Medicina Intensiva pursues a double aim: to incorporate ML techniques to a large database on severe influenza in the ICU, and to generate a mortality risk score combining this approach with other classical techniques more amenable to incorporation to clinical practice.
Each year, during the winter months, severe influenza poses a challenge for ICUs all over the world, generating care problems in the units, affecting also young individuals, and causing severe respiratory distress, prolonged admissions, and a high mortality rate. 6 Comparison of the results obtained with conventional techniques and those obtained through advanced random forest analysis (ML) reinforces the findings, and appears to indicate that the new techniques will be able to add information to the classical analytical methods ----though much of the substantial information can be gained from the latter. 7 Nevertheless, in order for the LR techniques to offer consistency, we need quality registries of sufficient size, as has been guaranteed in this study----in contrast to other recent publications in which an insufficient sample size strengthened the predictive capacity of ML over LR. 8 The development of a mortality predictive score in critical patients with severe influenza may help in decision making regarding patient admission, treatment (prone decubitus, extracorporeal oxygenation, nitric oxide) or even patient transfer for the application of advanced techniques in other centers. Another utility of this score is the possibility of stratifying risk groups for guiding or orientating therapeutic trials, as well as for the benchmarking of units. The use of variables present at the time of admission in this study also must be viewed as an advantage, since it would facilitate early counseling in decision making. Some models that use clinical outcome variables may be valid for comparing the results or outcomes of different units, but not for establishing early prognoses in the first hours of patient admission or for defining groups amenable to therapeutic trials.
The study does have some limitations, however. The database is large and multicentric, but covers a broad period of time (10 years) in which the therapeutic strategies and outcomes have experienced changes. Although internal validation was performed by segmenting the database, it is essential to assess the usefulness of the score on a prospective basis in order to corroborate the accuracy of the predictions. On the other hand, the score analyses mortality in the ICU, whereas the APACHE II score is designed for application to in-hospital mortality, and the SOFA score was not even designed with this purpose in mind. Likewise, we cannot rule out the possibility that the use of ML with a larger number of registered variables could have had greater predictive power.
The future of the analytical techniques based on ML will almost surely lie in the real-time counseling of clinical activity, with immediate feedback and enrichment of the analytical processes. 9 Although we will witness this scenario, it will be necessary to assess the power which such information will have in decision making, from an ethical, legal and deontological perspective. 10 In addition, it will be necessary to clarify the role of the clinician in the application and withdrawal of treatments when the ML system becomes fed by the decisions it induces. These will be problems for the new generations, and the near impossibility of understanding how the mathematics work will generate complex sensations among the professionals. In the meantime, we will have to continue relying on the development of accessible and valid techniques such as that presented in this number of the journal.
Intensive care medicine works locally with few patients, and when attention must focus on concrete disease conditions, the limitations are even greater. Hence the importance of having potent multicentric registries to facilitate complex analyses and allow us to add knowledge in areas characterized by difficult management and with an impact upon the health of the population. Given the current importance of the COVID-19 pandemic, this represents a call for the development of collaborative data registries.
Financial support
The author declares that this study has received no financial support. | 2020-10-18T13:05:43.598Z | 2020-10-17T00:00:00.000 | {
"year": 2020,
"sha1": "ba32dcff14e1046646f9c27fe8ff64a134cdb43b",
"oa_license": "unspecified-oa",
"oa_url": "https://europepmc.org/articles/pmc7568483?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "cba0e2f960d24115e64743087e55bbc06180067d",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
230592098 | pes2o/s2orc | v3-fos-license | Forecasting Warping Deformation Using Multivariate Thermal Time Series and K-Nearest Neighbors in Fused Deposition Modeling
Over the past decades, additive manufacturing has rapidly advanced due to its advantages in enabling diverse material usage and complex design production. Nevertheless, the technology has limitations in terms of quality, as printed products are sometimes different from their desired designs or are inconsistent due to defects. Warping deformation, a defect involving layer shrinkage induced by the thermal residual stress generated during manufacturing processes, is a major factor in lowering the quality and raising the cost of printed products. This study utilized a variety of thermal time series data and the K-nearest neighbors (KNN) algorithm with dynamic time warping (DTW) to detect and predict the warping deformation in the printed parts using fused deposition modeling (FDM) printers. Multivariate thermal time series data extracted from thermocouples were trained using DTW-based KNN to classify warping deformation. The results showed that the proposed approach can predict warping deformation with an accuracy of over 80% by only using thermal time series data corresponding to 20% of the whole printing process. Additionally, the classification accuracy exhibited the promising potential of the proposed approach in warping prediction and in actual manufacturing processes, so the additional time and cost resulting from defective processes can be reduced.
Introduction
Additive Manufacturing (AM), also known as 3D printing, refers to a form of fabricating three-dimensional objects, where materials are deposited layer-by-layer to ensure even complex shapes. Due to this characteristic, AM has become a major source of paradigm shifts, as seen in various industries. From small and simple tools to industries that require huge sizes and high reliability, such as the aerospace, energy, engine, and biomedical industries, 3D printing technology is gradually expanding [1][2][3][4]. Despite the various potentials of 3D printing, some defects, such as thermal deformations and geometrical errors, have been major hindrances in AM processes. Such defects cause unnecessary time and cost increases in many manufacturing processes, crucially reducing the efficiency of the manufacturing industry. Therefore, defect minimization and detection, including early detection, are vital regarding increasing the efficiency and reliability of AM processes.
K-Nearest Neighbors
In KNN, the prediction Ŷ(x) for a query point x is obtained from the outputs of its K nearest neighbors. Denoting by X_i(x) the i-th nearest neighbor of x and by Y_i(x) its output, the prediction can be written as Ŷ(x) = (1/K) Σ_{i=1}^{K} Y_i(x) (1). Using the Euclidean distance, the neighbors are ordered so that ‖x − X_1(x)‖ ≤ ‖x − X_2(x)‖ ≤ ··· ≤ ‖x − X_n(x)‖, where n is the total number of inputs. The choice of the parameter K plays a fundamental role in the algorithm performance. Often, a number of K values are assessed before selecting the best one for a specific task. Note that, in general, a very small K tends to model the noise, while a very large K leads to high bias. Some implementations also use a distance-based voting scheme in which closer neighbors carry a larger (dynamic) weight and therefore have more influence on the prediction, as shown in Equation (2).
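As an illustration of the neighbor averaging and the distance-based voting described above, the following minimal Python sketch (not taken from the paper; the array layout, the `weighted` flag and the tie handling are illustrative assumptions) predicts a class label for a single query point:

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=5, weighted=False):
    """Minimal KNN classifier for a single query point.

    X_train : (n, d) array of training inputs
    y_train : (n,) array of non-negative integer class labels
              (e.g. 0 = unwarped, 1 = warped)
    weighted: if True, closer neighbours get larger (1/distance) votes.
    """
    # Euclidean distances from the query to every training sample
    dists = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(dists)[:k]              # indices of the K nearest neighbours

    if not weighted:
        # plain majority vote, i.e. the 1/K average of neighbour outputs
        votes = np.bincount(y_train[idx])
    else:
        # distance-based voting: weight each neighbour by 1/(distance + eps)
        w = 1.0 / (dists[idx] + 1e-12)
        votes = np.zeros(y_train.max() + 1)
        np.add.at(votes, y_train[idx], w)

    return int(np.argmax(votes))
```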
Dynamic Time Warping
DTW is an algorithm used to obtain an effective alignment between two discrete signals. Let X = X_1, X_2, ..., X_N and Y = Y_1, Y_2, ..., Y_M be two time-dependent sequences. DTW seeks a non-linear warping of X and Y such that points with high similarity (X_i, Y_j) are aligned together; that is, the best alignment maximizes the sum of the similarities of the alignment pairs. DTW can thus be thought of as using a mapping function f : X × Y → R that assigns to each pair (x_n, y_m) an associated cost, also known as a local distance measure. The goal is to find the best alignment, which is equivalent to finding the warping path p that minimizes the accumulated cost, p* = argmin_p Σ_{(i,j)∈p} f(x_i, y_j) (3), where f(x_i, y_j) is inversely proportional to the similarity between x_i and y_j. Note that the warping path must satisfy the following three conditions: the boundary condition, the monotonicity condition, and the step-size condition. The boundary condition requires that the first elements of X and Y and the last elements of X and Y are aligned together. The monotonicity condition implies that for any ordered pair X_i, X_j aligned to elements Y_i, Y_j respectively, the order in the X sequence is maintained in the corresponding alignment in Y. Finally, the step-size condition ensures that the alignment has a one-to-one correspondence, i.e., all the index pairs in a warping path p are pairwise distinct.
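The accumulated-cost recursion behind DTW can be sketched as follows (an illustrative implementation rather than the authors' code; it uses the absolute difference as the local cost f and the usual step sizes (1,0), (0,1), (1,1)):

```python
import numpy as np

def dtw_distance(x, y):
    """DTW distance between two 1-D sequences x and y.

    Local cost f(x_i, y_j) = |x_i - y_j|; the boundary, monotonicity and
    step-size conditions are enforced by the recursion below.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # accumulated cost: best of insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

For the multivariate thermal data used here (four thermocouple channels per specimen), one common choice is to sum the channel-wise DTW distances; the result can then replace the Euclidean distance in the KNN sketch above.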
Experimental Setup
All the experiments in this study were performed using a commercial desktop FDM machine Zortrax M200 Plus. The machine was equipped with four thermocouples operating in continuous mode to measure the specimen temperature during the printing process. The experimental test setup is shown in Figure 1. Type K thermocouples with 0.1 °C resolution and ±0.5 °C accuracy were mounted on special plastic holders to minimize the noise resulting from the vibration of the machine, and the holders were attached to the build platform at a distance of 10 mm from each specimen corner. The thermocouples were numbered according to their measuring corner. The surrounding temperatures of the specimen were recorded during the entire printing process using an eight-channel thermocouple temperature data logger OM-CP-OCTTEMP-A with a reading rate of 1 Hz, which was connected to a local computer. As a specimen, a cuboid model with dimensions of 100 mm × 30 mm × 5 mm (l × w × h) as shown in Figure 2a was designed based on the works of Alsoufi and Elsayed [19]. Experiments were conducted with the extrusion temperature varying from 240 °C to 275 °C and platform temperature varying from 50 °C to 90 °C with 5 °C increments. All other manufacturing process parameters were kept constant and they are listed in Table 1.
Dataset and Preprocessing
The dataset had a total of 50 samples collected with various process parameter combinations, with each sample consisting of four thermal time series data. The samples were divided into two categories according to the existence or non-existence of warping deformation. After printing a specimen, the warping deformation of the specimen was specified by measuring the angle θ at each corner. The corner angle θ is obtained using the following equation: θ = tan⁻¹(y/x) (4), where y is the corner height and x is the horizontal distance of the corner from the nearest center of the specimen, both measured using a digital vernier caliper. In this work, the center point in between corners 1 and 4 and corners 2 and 3 were considered as the centers of the specimen. The parts peeled away with an angle greater than 1° with the platform were considered to have warping deformation. If a specimen had more than one warped corner, it was classified as a warped specimen. Images of the produced specimens with and without warping deformation are shown in Figure 3.
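A small sketch of this labelling rule is given below (illustrative only; the arctangent relation and the 1° threshold follow from the definitions above, and the "more than one warped corner" criterion is taken literally from the text):

```python
import math

def corner_angle_deg(y_mm, x_mm):
    """Corner angle theta = arctan(y / x), with y the corner height and
    x the horizontal distance from the nearest specimen centre (Eq. 4)."""
    return math.degrees(math.atan2(y_mm, x_mm))

def is_warped_specimen(corner_angles_deg, threshold_deg=1.0):
    """A corner counts as warped above the 1 degree threshold; per the text,
    a specimen is labelled warped when more than one corner is warped."""
    warped_corners = sum(a > threshold_deg for a in corner_angles_deg)
    return warped_corners > 1
```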
The thermal time series data extracted from the thermocouple sensors installed at the corners of the specimen with and without warping deformation are shown in Figure 4. As seen in the figure, TC1 and TC2, as well as TC3 and TC4, moved relatively similarly as their corner distance was closer.
The noise data at both ends of the data were removed to extract the patterns of the time series data in a unified interval. The length of each time series data was set as 2640 steps to conduct a model learning and performance evaluation. Min-max normalization was also conducted on the data for more stable learning. In general, min-max normalization is used to prevent certain features from overwhelming others and to make models more stable to learn. The min-max normalization is calculated as in Equation (5).
x' = (x − x_min) / (x_max − x_min) (5), where x is the original value, x' is the normalized value, and x_max and x_min are the maximum and minimum values of each feature, respectively.
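A minimal preprocessing sketch along these lines is shown below (illustrative; only the 2640-step length and the per-feature min-max scaling of Equation (5) come from the text, while the exact trimming of the noisy ends is an assumption):

```python
import numpy as np

def preprocess(series, target_len=2640):
    """Trim a (time, 4) thermocouple array to a fixed length and min-max scale it.

    Each channel is scaled independently to [0, 1] following Equation (5):
    x' = (x - x_min) / (x_max - x_min).  How exactly the noisy ends were cut is
    not specified in the text; here we simply keep the first target_len samples.
    """
    series = np.asarray(series, dtype=float)[:target_len]
    x_min = series.min(axis=0)
    x_max = series.max(axis=0)
    return (series - x_min) / (x_max - x_min + 1e-12)   # per-channel scaling
```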
Model Training
In this paper, the KNN model was trained and evaluated in two different ways using thermal time series data to predict whether warping deformation occurs or not.
First, warping deformation was predicted based on the thermal time series data extracted after the entire printing process was completed. The data collected from the 50 samples were split into training and testing sets of 40 and 10 samples, respectively. 5-fold and 10-fold cross-validations were conducted to generalize the performance.
Another experiment was conducted using partial thermal time series data corresponding to a portion of the whole printing process to predict whether warping deformation occurs in end-products. The entire time series was divided into five equal segments, and the portion fed to the model was increased in 20% increments. The generalized performance was also compared and analyzed through cross-validation.
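The early-detection setting can be sketched roughly as follows (an assumed workflow, not the released code; leave-one-out evaluation is used for brevity instead of the paper's 5-fold and 10-fold cross-validation, and the distance function is expected to be something like the channel-wise DTW sum sketched earlier):

```python
import numpy as np

def truncate(series_list, fraction):
    """Keep only the first `fraction` of every thermal time series (early detection)."""
    return [s[: int(len(s) * fraction)] for s in series_list]

def early_detection_accuracy(series_list, labels, distance,
                             fractions=(0.2, 0.4, 0.6, 0.8, 1.0), k=3):
    """Accuracy of a distance-based KNN for growing portions of the printing process.

    distance(a, b) is a dissimilarity between two (time, 4) arrays, e.g. a sum of
    channel-wise DTW distances; labels are 0 (unwarped) / 1 (warped).
    """
    results = {}
    for frac in fractions:
        data = truncate(series_list, frac)
        correct = 0
        for i in range(len(data)):
            # leave-one-out: compare the held-out sample against all others
            dists = [distance(data[i], data[j]) if j != i else np.inf
                     for j in range(len(data))]
            nearest = np.argsort(dists)[:k]
            pred = int(round(np.mean([labels[j] for j in nearest])))
            correct += int(pred == labels[i])
        results[frac] = correct / len(data)
    return results
```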
Performance Evaluation Methods
In the field of ML, the confusion matrix, also called the error matrix, is generally used to represent the performance of models. A corresponding matrix comprises class labels and subsets as in Table 2.
In this study, the labels were classified into two classes: positive for warped samples and negative for unwarped samples.
Four subsets were considered based on the classified labels. True positive is the correct prediction of a warped sample, and true negative is the correct prediction of an unwarped sample.
False positive is the incorrect prediction of an unwarped sample into a warped sample, and false negative is the incorrect prediction of a warped sample into an unwarped sample.
In this experiment, the evaluation metrics defined in Equations (6)-(10) were used based on the above-classified subsets to determine the classification performances of predicting and categorizing the occurrence of warping deformation.
Both the prediction error (ERR) and accuracy (ACC) provided general information on how many samples were misclassified, and they are expressed in Equations (6) and (7), respectively. Error is the sum of incorrect predictions divided by the number of predicted samples, and accuracy is the sum of correct predictions divided by the number of predicted samples. Accuracy has an inverse relationship with prediction error and can be expressed in terms of error as in Equation (7). Precision is the proportion of the warped samples among the ones predicted as warped, while recall is the proportion of the predicted samples as warped among the warped ones. F1-score is the harmonic mean of precision and recall. Higher values of the following three metrics indicate better predictive performances. These metrics are expressed in Equations (8)- (10).
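The verbal definitions of Equations (6)-(10) given above translate directly into the following sketch (standard confusion-matrix formulas; the variable names are illustrative):

```python
def classification_metrics(tp, tn, fp, fn):
    """ERR, ACC, precision, recall and F1 from the four confusion-matrix counts."""
    total = tp + tn + fp + fn
    err = (fp + fn) / total                              # Eq. (6): wrong / all predictions
    acc = (tp + tn) / total                              # Eq. (7): acc = 1 - err
    precision = tp / (tp + fp) if tp + fp else 0.0       # Eq. (8)
    recall = tp / (tp + fn) if tp + fn else 0.0          # Eq. (9)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)                # Eq. (10): harmonic mean
    return {"ERR": err, "ACC": acc, "precision": precision,
            "recall": recall, "F1": f1}
```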
Results and Discussion
As mentioned in Section 3.3, the performances of the warping deformation prediction in the two cases were compared and analyzed through various classification evaluation metrics. The performance results of the DTW-based KNN model are shown in Figure 5. The model was trained with 40 training data (25 warped data and 15 unwarped data) and tested with 10 testing data (5 warped data and 5 unwarped data) for warping prediction according to the K value. K is one of the major hyperparameters of the KNN algorithm. The results showed the lowest error rate of 10% and the highest accuracy of 90% when the K value was 26. Based on Figure 5 and Equations (6) and (7), the error rate and accuracy have an inverse relationship. The performance of classifying the time series patterns rapidly dropped with a K value greater than the highest performing K value. Once the K value exceeded 30, the DTW-based KNN model completely lost its classification capabilities.
The classification performance was as follows when evaluated based on the confusion matrix in the case of the warped data. The precision was 0.83, recall was 1.0, and f1-score was 0.91. However, in the case of the unwarped data, the precision was 1.0, recall was 0.8, and f-1 score was 0.89. The graph in Figure 6b shows the performance of the classification model at all the thresholds, where the area below the curve is called the area under the curve (AUC). A high AUC value signifies a high model performance. The AUC in this experiment was found to be 0.9, which is consistent with the accuracy.
Appl. Sci. 2020, 10, x FOR PEER REVIEW 8 of 11 The confusion matrix and receiver operating characteristic (ROC) curve evaluating the testing data using the highest performing K value are shown in Figure 6. As depicted in Figure 6a, all five unwarped samples were correctly classified, while one warped sample was classified as an unwarped sample.
The classification performance was as follows when evaluated based on the confusion matrix in the case of the warped data. The precision was 0.83, recall was 1.0, and f1-score was 0.91. However, in the case of the unwarped data, the precision was 1.0, recall was 0.8, and f-1 score was 0.89. The graph in Figure 6b shows the performance of the classification model at all the thresholds, where the area below the curve is called the area under the curve (AUC). A high AUC value signifies a high model performance. The AUC in this experiment was found to be 0.9, which is consistent with the accuracy. K-fold cross-validation (CV) was performed to avoid any bias in the prediction performance for specific test samples due to the limited data and to generalize the performance. The k values of 5 and 10 were used along with 20% and 10% of the testing data, respectively. Figure 7 shows the obtained mean accuracy results according to the K values in each CV. At a K value of 3, 82% accuracy for the 5-fold CV and 84% accuracy for the 10-fold CV were obtained. Like the results obtained using a single testing set, the classification performance of the prediction model was drastically reduced as the K value exceeded a specific value (31 for the 5-fold CV and 36 for the 10-fold CV). Although the highest performance of the two CVs did not differ significantly, the k value with the lowest performance was smaller for the 5-fold CV. K-fold cross-validation (CV) was performed to avoid any bias in the prediction performance for specific test samples due to the limited data and to generalize the performance. The k values of 5 and 10 were used along with 20% and 10% of the testing data, respectively. Figure 7 shows the obtained mean accuracy results according to the K values in each CV. At a K value of 3, 82% accuracy for the 5-fold CV and 84% accuracy for the 10-fold CV were obtained. Like the results obtained using a single testing set, the classification performance of the prediction model was drastically reduced as the K value exceeded a specific value (31 for the 5-fold CV and 36 for the 10-fold CV). Although the highest performance of the two CVs did not differ significantly, the k value with the lowest performance was smaller for the 5-fold CV.
Appl. Sci. 2020, 10, x FOR PEER REVIEW 8 of 11 The confusion matrix and receiver operating characteristic (ROC) curve evaluating the testing data using the highest performing K value are shown in Figure 6. As depicted in Figure 6a, all five unwarped samples were correctly classified, while one warped sample was classified as an unwarped sample.
The classification performance was as follows when evaluated based on the confusion matrix in the case of the warped data. The precision was 0.83, recall was 1.0, and f1-score was 0.91. However, in the case of the unwarped data, the precision was 1.0, recall was 0.8, and f-1 score was 0.89. The graph in Figure 6b shows the performance of the classification model at all the thresholds, where the area below the curve is called the area under the curve (AUC). A high AUC value signifies a high model performance. The AUC in this experiment was found to be 0.9, which is consistent with the accuracy. K-fold cross-validation (CV) was performed to avoid any bias in the prediction performance for specific test samples due to the limited data and to generalize the performance. The k values of 5 and 10 were used along with 20% and 10% of the testing data, respectively. Figure 7 shows the obtained mean accuracy results according to the K values in each CV. At a K value of 3, 82% accuracy for the 5-fold CV and 84% accuracy for the 10-fold CV were obtained. Like the results obtained using a single testing set, the classification performance of the prediction model was drastically reduced as the K value exceeded a specific value (31 for the 5-fold CV and 36 for the 10-fold CV). Although the highest performance of the two CVs did not differ significantly, the k value with the lowest performance was smaller for the 5-fold CV. The KNN model was also trained using the partial time series data rather than the entire data with the purpose of early detection of the warping deformation. The length of the partial time series data was adjusted by splitting the entire data into five equal sections. The early detection results using the partial time series according to the highest performing K values are listed in Table 3. The predictive model did not differ much in performance between the CVs, but it showed a difference depending on the length of the time-series data. For both the 5-fold and 10-fold CV, the overall performance of the model was lowest when only 20% of the data were fed, and it was highest when 40% of the data were fed. The experimental results have shown that warping deformation can be early detected with at least 80% accuracy even when using a small length of thermal time series data. The possibility of detecting the warping deformation using as little as 20 percent of the printing process implies two possible conclusions. First possibility is that the warping deformation occurs early in the printing process. The second possibility is that early patterns in heat data possess all the predictive power of warping deformation occurring in later stages of the printing.
Conclusions
This paper presented two types of warping deformation prediction methods in FDM based on a DTW-integrated KNN algorithm and multivariate thermal time series data. First, a KNN model was trained using the thermal time series data of a whole printing process to determine whether warping deformation had occurred. Second, only partial thermal time series data were fed into the predictive model to carry an early detection of the warping deformation in the specimens.
To ensure stable performance results, the value of the main parameter K of the KNN algorithm was searched using the K-elbow method. The key classification performance indicators were also compared and analyzed using 5-fold and 10-fold CVs to generalize the performance of the limited number of data.
The results showed 84% accuracy in the 10-fold cross-validation and 82% accuracy in the 5-fold cross-validation when the warping deformation was predicted based on the pattern of the entire thermal time series data. Moreover, the trained model showed a relatively good performance by predicting the warping deformation occurrence with an accuracy of 80% using thermal time series data corresponding to only 20% (one-fifth) of the entire printing time.
The proposed methodologies not only detected the defects in a printed product but also predicted warping deformation with high accuracy. According to the obtained results, this approach can be introduced to actual manufacturing processes as a quality monitoring system to support the inspection process of defective products. This study can also be used as a stepping stone to build a feedback control system for warping prediction that can significantly reduce the unnecessary time and additional cost resulting from printing defective products.
The presented study is limited due to a major element, which is data. In general, machine-learning-based prediction models show stronger performance when fed with more data. However, a limited amount of data was provided to the current model due to physical restraints. Another limitation is unbalanced data, which can lead to biased model prediction due to the data being skewed to particular target values. Although the unbalanced data problem was tackled in this study by printing a similar number of warped and unwarped samples, such a method cannot be applied in actual production processes. Hence, the authors aim to improve the prediction performance by increasing the number of samples and by trying to come up with a solution to the unbalanced data problem. The transfer of the presented results to other AM environments will also be investigated in the near future.
Author Contributions: D.S. are responsible for conceptualization, the model development and analysis of the results. A.M.C.B. contributed to conceptualization, experimental setup, data collecting, and curation. J.K. and M.B. investigate related works and methodology used in this paper. All authors contributed to writing of the manuscript. N.K. verified the model framework and results and supervised the project. All authors have read and agreed to the published version of the manuscript. | 2020-12-17T09:13:26.619Z | 2020-12-15T00:00:00.000 | {
"year": 2020,
"sha1": "ebe31782db9091231ed821417603bdd4781ee9fe",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/10/24/8951/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5f8d3c8286f279f4c8a19c70709cba81f90f27b3",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
58942204 | pes2o/s2orc | v3-fos-license | First experiences with the ATLAS Pixel Detector Control System at the Combined Test Beam 2004
Detector control systems (DCS) include the read out, control and supervision of hardware devices as well as the monitoring of external systems like cooling system and the processing of control data. The implementation of such a system in the final experiment has also to provide the communication with the trigger and data acquisition system (TDAQ). In addition, conditions data which describe the status of the pixel detector modules and their environment must be logged and stored in a common LHC wide database system. At the combined test beam all ATLAS subdetectors were operated together for the first time over a longer period. To ensure the functionality of the pixel detector a control system was set up. We describe the architecture chosen for the pixel detector control system, the interfaces to hardware devices, the interfaces to the users and the performance of our system. The embedding of the DCS in the common infrastructure of the combined test beam and also its communication with surrounding systems will be discussed in some detail.
Introduction
For the first time, segments of all ATLAS subdetectors were integrated and operated together with a common Trigger and Data Acquisition (TDAQ), close to final electronics and the Detector Control System (DCS) at a CERN test beam. During this test and certainly in the future experiment the overall aim of the DCS was and is to guarantee a reliable physics data taking and a safe operation of the detector. This is done by monitoring and controlling the DCS hardware, reacting to error conditions, providing several user interfaces and maintaining the communication to common infrastructure of the ATLAS experiment like TDAQ or database systems. Especially the communication between TDAQ and DCS is of major importance for the operation of the pixel detector, as tuning of the read out chain requires access to both systems in parallel. The pixel detector module, shown in figure 1, is the smallest unit pixel DCS can act on. It consists of a silicon sensor and 16 front end chips as well as a Module Controller Chip (MCC) gathering hit data and servicing trigger requests. Every detector module is connected to an optoboard which converts the electrical data signals transmitted from the modules to an optical signal for transmission to the off-detector electronics via optical fibres. In parallel it receives optical signals from the off-detector electronics and converts these to electrical signals for distribution to the modules. The off-detector component Back Of Crate card (BOC), which serves as the optical interface between the Read Out Driver (ROD) and the optoboard [1], is controlled by TDAQ, while DCS takes care of the on-detector component, the optoboard. To operate six pixel detector modules as a part of the whole pixel detector, its DCS provided various equipment at the combined test beam (shown in figure 2). For more details about the design of the pixel DCS please refer to [2]. General purpose IO devices for the read out of the DCS hardware (ELMB) developed by the ATLAS DCS group, a home-made supply system of three low voltage sources together with a reset signal to operate the optoboard (SC-OLink), a regulator system for protecting the FE chips of the detector modules, developed by INFN Milano, a high voltage source for the depletion of the sensors and also temperature and humidity sensors came into operation. To integrate the hardware and to supervise it, a dedicated software environment was set up, as described in the following section.
Detector Control System
The ATLAS detector is hierarchically organized in a tree-like structure into subdetectors, sub-systems, etc. This has to be reflected in the design and implementation of the DCS. Therefore DCS is organized in three functional layers:
- the global control station, which e.g. provides tools for the overall operation of ATLAS,
- the subdetector control station, which e.g. provides full stand-alone control capability and synchronises the supervision of all subsystems below, and
- the local control station, which e.g. reads data from the DCS hardware.
The core of the software is based on the commercial Supervisory Control And Data Acquisition (SCADA) package PVSS (Prozeß-Visualisierungs- und Steuerungs-Software, ETM, Austria). PVSS allows information to be gathered from the DCS hardware and offers the implementation of supervisory control functions such as data processing, alert handling and trending. It has a modular architecture based on functional units called managers. Applications can be distributed over many stations on the network, which defines a distributed system [3].
Fig. 4. Distributed system
At the combined test beam we embedded three PVSS stations as a distributed system based on a full detector simulation as shown in figure 4. This test demonstrated successfully the partitioning over several computers and their interconnection in a common environment.
Software tools
The software of the pixel DCS consists of several subprojects such as tools for the implementation of the DCS hardware in the software environment and the configuration in an automated way, tools for combining all information concerning one detector module in a flexible way (see figure 8, last page) and also graphical user interfaces. For example figure 8 (last page) shows the System Integration Tool (SIT) which follows the detector hierarchy and therefore maps the real cabling structure into the software. All these software tools were used at the combined test beam and the experience now helps to develop advanced tools for the experiment.
DAQ-DCS Communication
TDAQ and DCS are controlled by finite state machines which consist of different states and transition functions mapping a start state to a next state. Both systems are independent, while TDAQ has the master control during detector operation. This means that the TDAQ finite state machine has to be able to cause transitions in the DCS finite state machine. Furthermore, TDAQ applications have to transfer selective data to DCS, and DCS must make required data available to TDAQ. In addition, TDAQ must be informed about state conditions. To cover all the required transfers, the DAQ-DCS Communication (DDC) software [4] has been developed by the ATLAS DCS group (see figure 5). DDC was set up in pixel configuration by the authors and was running for four months in the combined environment. During this time the pixel specific DDC application was tested intensely. Concerning the command transfer, we were able to show that the pixel DCS finite state machine reacted in a well-defined way to TDAQ transitions. Additionally, pixel DCS directly computed actions via DDC in response to three TDAQ transitions at the combined test beam. Furthermore, the possibility to set DCS hardware with TDAQ applications without changing the TDAQ state was tested successfully.
Regarding the data transfer, DCS visualised data for TDAQ, like temperatures, depletion and low voltages of the detector modules or the states of DCS hardware, while DCS received data from TDAQ, like the status of TDAQ services or run parameters. Especially the run number was used for storing run-relevant DCS data. In combination with the demonstrated dynamical integration of additional transfer data, this was done very efficiently at the combined test beam.
For the message transfer we built up a DCS finite state machine to monitor the parameters of the detector modules and to generate corresponding states. Pixel DCS sent messages with severity flags which were read by the shifter during data taking.
For timing studies, certain DCS actions were connected to TDAQ transitions. For the reason mentioned above, the interconnection between on- and off-detector parts of the optical link is of special interest. Thus the setting of the reset signal of the SC-OLink, which allows a slow and controlled recovery of the delay control circuit, was linked to the transition 'LOAD' (see figure 6). The total time for a transition is composed of the time of the DDC process and the time of the DCS process. For the above example we measured 50 ms for the DDC and around 5 s for the DCS process. Based on these measurements we were able to optimise the control code during the test beam period together with changes in the hardware properties.
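The command-transfer mechanism, in which a TDAQ transition such as 'LOAD' triggers an action inside DCS, can be illustrated with a toy coupling of state machines (purely illustrative Python, not the ATLAS DDC API; all state names other than 'LOAD' are made up for the example):

```python
# Toy sketch: a master finite state machine whose transitions trigger callbacks
# registered by the slave side, mimicking how a TDAQ transition causes a DCS action.
class StateMachine:
    def __init__(self, transitions, initial):
        self.transitions = transitions      # {(state, command): next_state}
        self.state = initial
        self.callbacks = {}                 # command -> list of registered actions

    def on(self, command, callback):
        self.callbacks.setdefault(command, []).append(callback)

    def issue(self, command):
        self.state = self.transitions[(self.state, command)]
        for cb in self.callbacks.get(command, []):
            cb()                            # e.g. DCS pulses the SC-OLink reset line

tdaq = StateMachine({("INITIAL", "LOAD"): "LOADED"}, "INITIAL")
tdaq.on("LOAD", lambda: print("DCS: set SC-OLink reset signal"))
tdaq.issue("LOAD")                          # prints the DCS action, state -> LOADED
```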
To allow the shifter to verify the full functionality of DDC during the experiment, additional tools for data analysis (see figure 9, last page) were inserted into the structure of the pixel detector control system which did not affect the normal operation. Checking the command transfer is done by switching and setting any number of virtual power supplies, while checking the message transfer is done by simulating differently weighted temperatures of a detector module and sending corresponding messages with severity flags. Reviewing the data transfer, one could observe from the TDAQ side a simulated high voltage scan of a virtual detector module inside DCS. On the other hand, simulated TDAQ data is visible in DCS. These tools were very helpful during the operation and they are now an inherent part of the detector control system.
Interface to the Conditions Database
Conditions data are all data needed for reconstruction besides the event data itself; thus they have to reflect the conditions under which the experiment was performed and the actual physics data were taken [5]. For the pixel DCS this basically includes parameters of the detector modules such as voltages, currents and temperatures, but also parameters and status information of further DCS devices. As already mentioned, the ATLAS detector control system is based on the software PVSS. PVSS contains an internal archive to store and to read back the history of DCS values, but does not allow access to the data from outside the PVSS framework. Therefore a PVSS API manager was developed by the Lisbon TDAQ group. This custom-made manager is based on a C++ interface between PVSS and a MySQL database. When running, it connects to each of the DCS parameters defined by the user and stores the values together with a timestamp in the database. When a value change occurs, the previous value is updated by replacing the timestamp by a time interval, and the new value is stored in the database in the same way as the first value.
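The storing scheme of the PVSS API manager, i.e. inserting a value with its timestamp and closing the previous entry into a validity interval when the value changes, can be illustrated with the following sketch (a hypothetical example using SQLite instead of MySQL; the table layout and parameter names are assumptions, not the actual schema):

```python
# Illustrative sketch (not the Lisbon PVSS API manager): each value is stored with
# a timestamp; on a value change the previous row is closed into a validity interval.
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE conditions (
                  parameter TEXT, value REAL,
                  since REAL, until REAL)""")   # until stays NULL while still valid

def store(parameter, value, now=None):
    now = now or time.time()
    # close the interval of the previously valid value, if any
    conn.execute("UPDATE conditions SET until = ? "
                 "WHERE parameter = ? AND until IS NULL", (now, parameter))
    conn.execute("INSERT INTO conditions VALUES (?, ?, ?, NULL)",
                 (parameter, value, now))
    conn.commit()

store("module3_temperature_C", 27.4)
store("module3_temperature_C", 27.9)   # previous row now carries a [since, until) interval
```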
During the combined test beam the system reliability of the pixel DCS setup, the amount of data and the handling of the interface have been examined. Due to the given storing mechanism described above, no data filter or smoothing processes could be used. As one result we had about 5 storing processes per second and per value, which produced an unacceptable amount of data and necessitated a limitation of changes. By the integration of a new storing mechanism with a storage at the beginning of a run, every minute (if a monitored value ran out of its limits, this storage interval was scaled down to get more information about the behaviour) and at the end of the run, we were able to reduce the amount of data significantly. Based on about 150 Bytes per storage for a detector module, pixel DCS would produce more than 37 GBytes of data for physics analysis per year at an estimated 5 minutes storage interval.
Fig. 7. Schematic of data flow through the interface
Summary
At the combined test beam we have built up a pixel detector control system which worked very well during the four month beam period. Pixel specific software tools were used with good acceptance by shifters. Many functionality issues could be studied sufficiently. The DAQ-DCS communication software was tested intensely and was established very successfully in the pixel configuration. We were able to use the full functionality of DDC. We provided commands for several actions inside DCS. TDAQ data were computed by DCS in a well defined way while DCS data was used by TDAQ for monitoring. Messages with severity flags were available. From this point all further requirements to pixel DCS coming with a system scale up could be achieved by this package. DDC is the appropriate tool to handle the interaction between on- and off-detector parts of our optical link. It allows us to develop tuning algorithms to find the optimal operation point for the components of the read out chain. As a first step, a graphical user interface which shows various parameters of the BOC inside DCS is currently under development. The used interface to the conditions database did not cover all the pixel DCS aims. After the combined test beam ATLAS intended to use the LHC Computing Grid (LCG) framework for developing a new interface to the conditions database which makes available general database tools and interfaces for subsequent analysis. Furthermore, better configurability and more flexibility for filtering data, as well as the possibility to read data from the conditions database in PVSS, have to be considered.
Acknowledgments
These studies are a result of sharing knowledge within the ATLAS pixel DAQ group and the ATLAS DCS group. We would like to thank all people involved in the work, especially V. Khomoutnikov for support during the test beam period. He was always open to discussions and gave us a lot of fruitful hints.
"year": 2005,
"sha1": "263782b0b611e3847b74792963fc921d806ed51b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/physics/0510262",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "dbfa5a060bc2ce2a6373b81aab41f211104f4ee4",
"s2fieldsofstudy": [
"Computer Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
102630858 | pes2o/s2orc | v3-fos-license | Antidiabetic potential of methanol extracts from leaves of Piper umbellatum L. and Persea americana Mill.
Objective: To determine the inhibitory activity of methanolic leaf extracts of Piper umbellatum and Persea americana (P. americana) (traditionally used in Cameroon against diabetes) on α-glucosidase, β-glucosidase, maltase-glucoamylase, aldose reductase and aldehyde reductase activities, enzymes involved in starch digestion or diabetic complications. Methods: The methanol extracts from Piper umbellatum and P. americana were prepared by maceration. To assess the relative efficacy of these extracts, the concentrations needed to inhibit 50% of enzyme activity (IC50) were determined, while gas chromatography-mass spectrometry (GC-MS) was used to identify components from the extracts that may be responsible for the activities. Results: The tested extracts strongly inhibited α-glucosidase, maltase-glucoamylase, aldose reductase and aldehyde reductase activities with IC50 ranging from (1.07 ± 0.03) to (31.77 ± 1.17) μg/mL. Among the tested extracts, P. americana was the most active against the sensitive enzymes (IC50 of (1.07 ± 0.03) to (15.63 ± 1.23) μg/mL). However, none of the extracts showed an interesting inhibitory effect against β-glucosidase, as their percentage inhibitions were less than 16%. From the GC-MS analysis, 10 and 8 compounds were identified in the Piper umbellatum and P. americana extracts respectively, using the NIST library 2014. Conclusions: The results of this study provide scientific credentials for the prospective use of these plants to treat diabetes.
Introduction
Based on the most recent estimation of the WHO, approximately 200 million individuals in the world are diabetic. By the year 2025, this value may rise to nearly 350 million, with severe consequences for human health [1,2]. Glycemic management is a long-term treatment for persons suffering from diabetes mellitus [3,4]. Glycemic control is primarily focused on inhibitors designed to target glucosidases, members of the hydrolase family found in the gastrointestinal tract whose exo-acting abilities are necessary for carbohydrate digestion. These glucosidase inhibitors are usually recommended to diabetics to diminish the glucose flow into the bloodstream from dietary starch, reducing the postprandial effect of carbohydrate consumption on glycemia [5]. Some leading glucosidase inhibitors such as miglitol and acarbose are available on the market, but they have been reported to cause diarrhea as well as other intestinal disturbances, with corresponding flatulence and intestinal pain [6,7]. Extended exposure to persistent hyperglycemia can result in several complications impeding the neurological, visual, renal, and cardiovascular systems [8]. The mechanisms by which diabetic complications arise are not yet fully known, but several biochemical routes engaged in relation with hyperglycemia have been established [8]. Of these pathways, the polyol pathway has been widely investigated [9].
Aldose reductase, an aldo-keto reductase, is the first and rate-limiting enzyme of the polyol pathway; it reduces glucose to sorbitol using nicotinamide adenine dinucleotide phosphate (NADPH) as a cofactor [9]. Sorbitol accumulation leads to modifications in membrane permeability, osmotic swelling, and oxidative stress resulting in tissue damage [10]. Experiments based on animal models showed that inhibition of aldose reductase can be effective in preventing some complications [11]. Aldehyde reductase is an isoform of aldose reductase responsible for aldehyde reduction as well as for the synthesis of ascorbic acid in mammals by D-glucuronate reduction [12,13]. It is also responsible for the metabolism of methylglyoxal and 3-deoxyglucosone. These aldehydes, which result from oxidative stress under pathological conditions (hyperglycemia), arise in large quantities and act as intermediates for advanced glycation end-products [14]. Some aldose reductase and aldehyde reductase inhibitors have been developed to reduce diabetic complications; however, because of undesirable side effects or limited efficacy, none of them has achieved worldwide use [15].
Medicinal plants may be a credible source of glucosidase, aldose reductase and aldehyde reductase inhibitors thanks to their relative safety and low cost [16]. Piper umbellatum (P. umbellatum) and Persea americana (P. americana) are traditionally used in Cameroon against diabetes. From the literature, three alkaloids named piperumbellactams A-C isolated from P. umbellatum branches showed moderate inhibition of α-glucosidase [17]. Hypoglycemic activity of aqueous leaf extract as well as that of hydroalcoholic leaf extract of P. americana Mill were reported by Muchandi [18] and Lima et al [19] in alloxan-induced and streptozotocin-induced diabetic rats respectively. But, to the best of our knowledge, the mechanisms by which they exert their activities have not yet been reported. Thus, this work aimed to determine the potential of methanolic leaf extracts of P. umbellatum and P. americana to inhibit α-glucosidase, β-glucosidase, glucoamylase, aldose reductase and aldehyde reductase activities. In addition, gas chromatography-mass spectrometry (GC-MS) analysis of these extracts was also performed in order to determine compounds that may be responsible for their enzyme inhibition activity.
Plant
Leaves of P. umbellatum and P. americana were harvested in February 2017 from Bazou and Bangang-Fokam, respectively, in the West region of Cameroon. Authentication was done at the Cameroon National Herbarium by comparison with the registered specimens under references 2854/SFR/Cm and 57756/HNC, respectively. After drying, the leaves were ground for extraction.
Preparation of plant extracts
Powder from leaves of P. umbellatum and P. americana (200 g) was macerated for 48 h at room temperature in methanol (600 mL) and filtered using Whatman paper No. 1. Methanol was removed from the extract using a rotary evaporator at 45 °C under reduced pressure.
GC-MS analysis of extracts
GC-MS of extracts from P. umbellatum and P. americana leaves was done using a TurboMass GC System, fitted with an Elite-5 capillary column (30 m long, 0.25 mm inner diameter, 0.25 µm film thickness and maximum temperature of 350 °C), combined with a Perkin Elmer Clarus 600C MS. Helium served as the carrier gas at a constant flow rate of 1 mL/min. The injection, transfer line and ion source temperatures were 280 °C, while the ionization energy was 70 eV. The oven temperature was programmed from 40 °C (hold for 2 min) to 280 °C (hold for 10 min) at a rate of 5 °C/min. The crude extracts were solubilised using ethyl acetate and filtered with a syringe filter (Corning, 0.45 µm). A volume of 1 µL of the crude extracts was injected with a split ratio of 1:20. The data were obtained by collecting mass spectra within 50-550 m/z. Chemical compounds present in the analysed extracts were identified based on gas chromatography retention times and mass spectra matching those of standards available in the NIST 2014 library.
Isolation of intestinal maltase-glucoamylase enzyme
Intestinal maltase-glucoamylase was extracted following the literature reported procedure [20] with some modifications. The enzyme was extracted from white male rats (1-2 months) weighing 150-250 g and starved for 12 h before sacrifice. Sacrifice was performed by cervical dislocation. Whole intestines were gently removed and washed with ice-cold 0.9% NaCl. For extraction, intestines were cleaned and cut longitudinally, and mucosal scrapings of 5-6 rats were combined and homogenized in 50 volumes of 5 mM EDTA, pH 7.0. The centrifugation of homogenized intestines was performed at 15 000 rpm at 4 °C for 45 min and the supernatant was discarded. The obtained pellet was re-suspended in 90 mL of ice-cold water followed by addition of 5 mL of 0.2 M potassium phosphate (pH 7) containing 0.1 M EDTA, and 5 mL of 0.1 M cysteine. The obtained mixture was incubated at 37 °C for 30 min. The pellet was collected following centrifugation at 15 000 rpm for 45 min, while the supernatant was discarded. The intestinal suspension was re-dissolved in potassium phosphate buffer (10 mM), pH 6.8, to which 4 mg of papain and 0.4 mg cysteine were added. Incubation was performed at 37 °C for 40 min with constant shaking. Afterward the mixture was centrifuged at 15 000 rpm for 90 min. The supernatant was collected and precipitated with ammonium sulphate to 80% saturation. After this process, the mixture was centrifuged at 15 000 rpm for 30 min and the supernatant was discarded, while the pellet was collected and redissolved in 4 mL of 10 mM potassium phosphate (pH 7). This homogenate was dialyzed overnight against distilled water with three changes of water (40 volumes each time). The extracted enzyme was kept at -80 °C until further use for total protein determination and inhibition studies.
Maltase-glucoamylase inhibition assay
Maltase-glucoamylase inhibition assay was carried out using p-nitrophenyl α-D-glucoside (substrate) based on a reported procedure [21] with some modifications. The reaction mixture contained 70 µL of phosphate buffer (70 mM, pH 6.8), 10 µL of extracted enzyme (25.0 µg of protein) and 10 µL of test extracts (1 mg/mL). After incubation at 37 °C for 5 min, 10 µL of p-NPG (10 mM, prepared in assay buffer) was dispensed into all wells of a 96-well plate, and incubation was then carried out at 37 °C for 30 min. The activity of the test extracts against maltase-glucoamylase was determined by measuring the increase in absorbance of p-nitrophenol at a wavelength of 405 nm. Acarbose served as the reference drug for the positive control, while the negative control contained 10 µL of 10% dimethyl sulfoxide (DMSO) instead of extract. The percent inhibition was calculated as follows: Percent inhibition (%) = [1 − (Absorbance of sample/Absorbance of control)] × 100
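For readers who wish to reproduce the calculation, a minimal sketch of the percent-inhibition formula above is given below; the absorbance values are hypothetical examples and are not data from the study.

```python
# Minimal sketch of the percent-inhibition calculation used in the assays above.
# Absorbance values are hypothetical examples, not data from the study.

def percent_inhibition(abs_sample: float, abs_control: float) -> float:
    """Percent inhibition = [1 - (Abs_sample / Abs_control)] * 100."""
    return (1.0 - abs_sample / abs_control) * 100.0

if __name__ == "__main__":
    abs_control = 0.820   # uninhibited reaction (10% DMSO negative control)
    abs_sample = 0.215    # reaction in the presence of a test extract
    print(f"Inhibition: {percent_inhibition(abs_sample, abs_control):.1f}%")
```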
Isolation of aldehyde reductase
Kidneys were separated from the calf soon after slaughtering. The cortex area of the kidney was dissected carefully and homogenized in 3 volumes of 10 mM sodium phosphate buffer, pH 7.2, containing 2.0 mM EDTA dipotassium salt, 0.25 M sucrose and 2.5 mM β-mercaptoethanol for dissolution and homogenization of the tissue. The homogenate was further centrifuged at 12 000 ×g at 4 °C for 30 min, after which the precipitate was discarded as it contained insoluble lipids. The collected supernatant was subjected to 40% salt (ammonium sulphate) saturation to isolate aldehyde reductase. This 40% saturated liquid was centrifuged under the same conditions as above. The precipitate was again discarded and the supernatant was subjected to 50% saturation with salt followed by centrifugation as described above. In the last step, 75% saturation was obtained by adding powdered salt to the supernatant, followed by centrifugation as previously described, resulting in enzyme precipitation. The supernatant was discarded, while the pellet was collected and redissolved in 10 mM sodium phosphate (pH 7.2) containing 2.5 mM β-mercaptoethanol and 2.0 mM EDTA dipotassium salt. The obtained suspension was dialyzed overnight in a dialysis membrane against the above buffer. After that process, the extracted aldehyde reductase was aliquoted and then stored (-80 °C) until used for total protein determination and inhibition studies [22].
Isolation of aldose reductase
Isolation of aldose reductase was performed as described by Iqbal et al [22] with some changes. Briefly, aldose reductase was isolated from calf lenses. Calf lenses were detached from the eyes immediately after slaughtering and were frozen until use. A mass of 100 g of lenses was homogenized in 3 volumes of cold distilled water; the homogenate was then centrifuged at 10 000 ×g for 15 min at 4 °C to remove insoluble constituents, which were discarded as they contained lipids. The supernatant was collected and ammonium sulphate salt was added to reach 40% saturation. The mixture was centrifuged as previously described and the precipitate was discarded. The ammonium sulphate concentration was increased up to 50% saturation in order to remove additional inert proteins. Aldose reductase was precipitated upon addition of ammonium sulfate salt to the previously obtained supernatant to reach 75% saturation. Following centrifugation, the supernatant was discarded and the pellet (enzyme) was redissolved in 4 mL of 50 mM NaCl. The sample was dialyzed overnight against 500 mL of 50 mM NaCl. The volume of the sample was measured after dialysis, and the sample was kept in 1 mL aliquots (in Eppendorf tubes) in a freezer at -80 °C for total protein determination and inhibition studies [22].
Inhibition assays of aldehyde and aldose reductases
A UV spectrophotometer at 340 nm was used to evaluate the inhibition of aldehyde reductase and aldose reductase by measuring the decrease in NADPH absorbance. Each well of the 96-well microplate contained an assay mixture made up of 10 µL of 100 mM phosphate buffer pH 6.2, 35 µL of enzyme (210 µg of protein), 10 µL of 1 mg/mL tested extracts and 20 µL of substrate (D,L-glyceraldehyde for aldose reductase or sodium glucuronate for aldehyde reductase). The mixture was preincubated for 5 min at 37 °C for the enzymatic reaction to run properly, after which 25 µL of 0.1 mM NADPH (cofactor) was added. A reading was immediately taken at 340 nm using an ELISA plate reader, and the mixture was then incubated and read again under the same conditions as previously described. For the positive and negative controls, 10 µL of 10 mM valproic acid (for aldehyde reductase) or genistein (for aldose reductase) and 10% DMSO were used, respectively [22]. The reaction was run in triplicate with a total volume of 100 µL per well. Absorbances were recorded and the percentage of inhibition was calculated as follows: Percent inhibition (%) = [1 − (Absorbance of test well/Absorbance of control)] × 100
α-glucosidase inhibition studies
This assay was performed following the method used by Raza et al [23] with some changes. In short, solutions of commercial α-glucosidase obtained from Saccharomyces cerevisiae (Sigma-Aldrich) as well as p-nitrophenyl α-D-glucopyranoside (substrate) were prepared in 70 mM phosphate buffer, pH 6.8, while extract solutions were prepared in DMSO 10%. The inhibition assay was performed by adding 10 µL of extract solution to 70 µL of buffer and 10 µL of 2.5 unit/mL enzyme solution followed by preincubation for 5 min at 37 °C. After preincubation, 10 µL of 10 mM substrate was added to the mixture in order to start the reaction. Incubation of the reaction mixture was done as indicated above for preincubation. Acarbose served as reference for positive control. Enzyme activity was evaluated by measuring the increase in absorbance of p-nitrophenol released from p-nitrophenyl α-D-glucopyranoside at 405 nm using an Elx 800 Micro plate reader. The percentage of inhibition was determined as follows: Percent inhibition (%) = [1 − (Absorbance of test well/Absorbance of control)] × 100
β-glucosidase inhibition studies
This study was performed according to the previously described method [23] with some modifications. β-glucosidase extracted from sweet almonds (Sigma-Aldrich) and p-nitrophenyl β-D-glucopyranoside (10 mM) used as substrate were prepared in 0.07 M phosphate buffer, pH 6.8. The inhibition assay was carried out by adding extract solution (10 µL) to 70 µL of the above buffer and 10 µL of 2.0 unit/mL enzyme solution followed by preincubation for 5 min at 37 °C. After preincubation, 10 µL of substrate was added to the mixture to start the reaction. The incubation of the reaction mixture was done for 30 min at 37 °C. Castanospermine was considered as reference drug and was used for positive control, while 10 µL of DMSO 10% was used for negative control. Each test was carried out in triplicate. The percent inhibition was determined as follows: Percent inhibition (%) = [1 − (Absorbance of sample/Absorbance of control)] × 100
Ethical statement
All studies involving animals were conducted according to the ethical guidelines of the Committee for Control and Supervision of Experiments on Animals (Registration no. 173/CPCSEA, dated 28 January, 2000), Government of India, on the use of animals for scientific research.
Data analysis
Enzyme activity in the presence of extracts was expressed as a percentage of the uninhibited enzyme activity and plotted versus inhibitor (extract) concentration. Enzyme inhibition was taken as the complement of the measured enzyme activity and was expressed as a percentage. Non-linear regression was performed using GraphPad Prism software. To evaluate the inhibitory potency of the extracts, the concentration needed to inhibit 50% of enzyme activity (IC50) was determined.
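As an illustration of the IC50 determination described above, the sketch below fits a dose-response curve by non-linear regression in Python; the study itself used GraphPad Prism, and the concentrations, activities and two-parameter model here are assumptions for demonstration only.

```python
# Illustrative sketch of estimating an IC50 from dose-response data by non-linear
# regression. All concentrations and residual activities below are made-up numbers.
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, ic50, hill):
    """Remaining enzyme activity (%) as a function of inhibitor concentration."""
    return 100.0 / (1.0 + (conc / ic50) ** hill)

# Hypothetical extract concentrations (ug/mL) and measured residual activities (%)
conc = np.array([0.5, 1, 2, 5, 10, 20, 50])
activity = np.array([92, 85, 70, 48, 30, 18, 8])

params, _ = curve_fit(dose_response, conc, activity, p0=[5.0, 1.0])
ic50, hill = params
print(f"Estimated IC50 = {ic50:.2f} ug/mL (Hill slope = {hill:.2f})")
```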
Statistical analysis
Differences in calculated percent inhibition and IC50 values were analyzed using unpaired t-tests as well as one-way ANOVA followed by Tukey-Kramer post-hoc analysis in order to compare data sets. GraphPad Prism was used for this purpose. Differences among means were considered significant at the 5% probability threshold.
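The group comparison described above can be illustrated with the following hedged sketch (one-way ANOVA followed by a Tukey post-hoc test); the replicate IC50 values are invented, and the scipy/statsmodels implementation only stands in for the GraphPad Prism analysis actually used.

```python
# Sketch of the one-way ANOVA + Tukey post-hoc comparison described above.
# The IC50 replicate values (ug/mL) are hypothetical, not data from the study.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ic50 = {
    "P_americana":  [1.1, 1.0, 1.2],
    "P_umbellatum": [3.4, 3.1, 3.6],
    "acarbose":     [30.5, 32.0, 31.2],
}

# Overall test for any difference among the three groups
f_stat, p_val = f_oneway(*ic50.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

# Pairwise Tukey comparisons at the 5% level
values = np.concatenate(list(ic50.values()))
groups = np.repeat(list(ic50.keys()), [len(v) for v in ic50.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```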
Antidiabetic activity
The inhibition potency of the crude extracts against aldose reductase and aldehyde reductase is shown in Table 2. Both extracts were potent inhibitors of the two enzymes, with the P. americana extract being the most active (Table 2). Against aldose reductase, both extracts showed better activity than the reference drug (genistein), while only the P. americana extract exerted a significantly higher inhibitory effect against aldehyde reductase when compared to sodium valproate (the reference substance).
Discussion
Many natural substances have been tested for glucosidase, aldose reductase and aldehyde reductase inhibitory activities [7,13,24]. In the present study, the tested extracts showed inhibitory effects against α-glucosidase and maltase-glucoamylase. This could be due to the presence of antidiabetic compounds in these extracts. In fact, GC-MS analysis showed that both extracts contain aromatic compounds, and according to Crane et al [25] and Yin et al [6], currently available treatments for diabetes, particularly type 2 diabetes, include the administration of insulin through the subcutaneous route and the use of various oral agents, among which are benzoic acid derivatives. Moreover, Patel et al [26] showed that electron-donating groups on a sterically hindered aromatic benzene ring, the effector region, increase the antidiabetic activity of compounds. In our study, the identified aromatic compounds contain electron-releasing groups, and this could explain the important activity observed in these crude extracts. The P. americana extract appeared to be the most potent inhibitor of both glucosidases and maltase-glucoamylase. Knowing that both extracts possess aromatic compounds, the difference in their inhibitory effects may be due to a difference in the concentration of active principles among the aromatic benzene compounds in these extracts or to antagonistic effects among compounds in the less active extract [27]. Both extracts showed more significant inhibitory effects compared to the tested acarbose. This suggests that these plant extracts, especially the P. americana extract, can be a promising source of antidiabetic drugs. Both extracts showed less than 16% inhibition of β-glucosidase, which indicates the selectivity of their activities.
Chronic secondary complications are the main cause of morbidity and mortality in diabetic patients [28]. Structurally different compounds such as phenols, spirohydantoins, flavonoids, benzopyrans, quinones, and alkaloids have all been highlighted as having the ability to inhibit aldehyde and aldose reductases with distinct degrees of efficacy and specificity [7,10,29]. Sorbinil, alrestatin, tolrestat, statil, ALO1576 and epalrestat are some of the inhibitors of these enzymes that have been well investigated and clinically tested. However, to date, none of the available synthetic aldose and aldehyde reductase inhibitors has proved clinically effective, and in fact some have had severe side effects. Moreover, in recent times there is increased interest in plants as natural sources of antidiabetic substances, especially because most medicinal plants and medicinal plant products are free from adverse side effects and are being used as a source of diet or traditional medicine [24]. The medicinal use of P. umbellatum and P. americana has a very long tradition. The interesting inhibition of aldose and aldehyde reductases by these extracts observed in the current study could be ascribed, among other factors, to the aromatic benzene compounds identified in these extracts. Substances that can considerably delay or prevent the onset and development of diabetic complications would offer many advantages when involved in glycemic control. In principle, P. umbellatum and P. americana extracts may be included in this category. The results of the present work are a step forward in the development of such agents.
Taken together, these findings provide a strong basis for developing alternatives to the available synthetic drugs for glycemic management from P. umbellatum and P. americana. Further work is still needed to identify the specific antihyperglycemic constituents of these extracts and to evaluate their pharmacological potential. | 2019-04-09T13:08:52.408Z | 2018-03-01T00:00:00.000 | {
"year": 2018,
"sha1": "bf6c9b3d1c253c53d9e65ddf88112337b6e87307",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/2221-1691.227997",
"oa_status": "GOLD",
"pdf_src": "WoltersKluwer",
"pdf_hash": "7819cecb7d2e01a3796b3285dd28b44f607ef4be",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
3281988 | pes2o/s2orc | v3-fos-license | Preoperative prognostic nutritional index is a powerful predictor of prognosis in patients with stage III ovarian cancer
Many established inflammation- and nutrition-related factors have been investigated as potential independent prognostic factors in various cancers, including the C-reactive protein/albumin ratio (CAR), lymphocyte/monocyte ratio (LMR), modified Glasgow prognostic score (mGPS), body mass index (BMI), and prognostic nutritional index (PNI). This study was performed to estimate the prognostic value of these factors in predicting survival and platinum resistance in ovarian cancer (OC), especially according to stage. Kaplan-Meier and multivariate analyses were performed to plot the survival curve and determine the independent prognostic factors. Additionally, the area under the receiver operating characteristic curve (AUC) was used to predict platinum resistance and prognosis by comparing the predictive ability of PNI and cancer antigen (CA)-125. In all patients, decreased PNI was significantly associated with platinum resistance and poor overall survival (OS) and progression-free survival (PFS). Regarding tumor stage, decreased PNI was significantly associated with poor PFS and OS only in stage III OC. Furthermore, the PNI also showed a significantly higher AUC value than CA-125 for predicting mortality and platinum resistance in all OC patients, but not in stage III patients. In conclusion, decreased PNI is a powerful predictor of a poor prognosis in OC, and especially for stage III cases.
C-reactive protein/albumin ratio (CAR), lymphocyte/monocyte ratio (LMR), albumin and lymphocyte count combined into the prognostic nutritional index (PNI), and CRP-and albumin-related factors of the modified Glasgow prognostic score (mGPS) [9][10][11][12] . As an efficient, simple, and convenient novel prognostic factor, the PNI is calculated according to the following formula: serum albumin value (g/L) + 0.005 × lymphocyte count (per mm 3 ) in peripheral blood 11 . Recently, PNI has been reported to be an independent prognostic factor for survival in different malignant carcinomas, including colorectal cancer, gastric cancer, lung cancer, and pancreatic cancer [13][14][15][16] . However, the prognostic importance of PNI for OC still needs to be elucidated, especially according to tumor stage. Although Miao et al. 17 reported that PNI was an independent prognostic factor in OC patients, they did not assess the combination of PNI with other established prognostic factors, such as CAR, LMR, and mGPS. Thus, it is meaningful to combine the PNI and other established nutrition-and inflammation-related prognostic factors to obtain optimal independent prognostic scores for predicting the chemoresistance and clinical outcomes of OC patients at different stages. Correlation between PNI and clinicopathological parameters. The relation between preoperative PNI and the clinicopathological characteristics of patients with OC is shown in Table 1. Decreased PNI was significantly associated with advanced FIGO tumor stage (P < 0.001), maximum residual tumor (P < 0.001), histological subtype (P = 0.001), malignant ascites (P < 0.001), cancer antigen (CA)-125 ≥ 35 U/ml (P < 0.001), platinum resistance (P < 0.001), lower LMR (P < 0.001), and higher CAR (P < 0.001) and mGPS (P < 0.001). However, there were no significant associations between PNI and age (P = 0.066), grade (P = 0.237), or body mass index (BMI) (P = 0.460). Among tumor stage III patients, decreased PNI was also significantly associated with residual tumor mass (P = 0.023), histological subtype (P = 0.005), malignant ascites (P < 0.001), CA-125 ≥ 35U/ ml (P = 0.006), lower LMR (P < 0.001), and higher CAR (P < 0.001) and mGPS (P < 0.001), but not with platinum resistance (P = 0.095).
When patients were stratified by FIGO tumor stage, high-PNI patients had significantly longer PFS than low-PNI patients only for cases of FIGO tumor stages III (P < 0.001) and IV (P = 0.005) (Fig. 2D,E). Similarly, high-PNI patients had significantly longer OS than low-PNI patients only in stages III (P < 0.001) and IV (P = 0.010) (Fig. 3D,E). However, the multivariate Cox regression model demonstrated that the PNI was an independent predictive factor of poor PFS (HR 1.815, 95% CI 1.113-2.958, P = 0.017) and OS (HR 1.699, 95% CI 1.035-2.789, P = 0.036) only in FIGO tumor stage III OC patients, as were residual tumor mass and chemosensitivity. All these findings show that the PNI is an independent risk factor for poor PFS and OS in OC patients, especially those at stage III.
Comparison of predictive ability. The receiver operating characteristic curve (ROC) and area under the receiver operating characteristic curve (AUC) values were used to compare the predictive ability among CA-125, PNI, and their combination for OS and platinum resistance ( Fig. 4 and Table 4). With respect to predicting mortality, the PNI had a significantly higher AUC value than the CA-125 (0.677 vs. 0.567, P = 0.044). The combination of the PNI and CA-125 had a higher AUC value than either alone, although the difference was not significant (P > 0.05). Regarding platinum resistance, the PNI showed a higher AUC value than CA-125 (0.699 vs. 0.560, P = 0.006) and their combination (0.699 vs. 0.692, P = 0.847). The combination of the PNI and CA-125 had a significantly higher AUC value than CA-125 (P = 0.007). However, the PNI did not have a significantly higher AUC than CA-125 with respect to OS (0.649 vs. 0.547, P = 0.388) or platinum resistance (0.618 vs. 0.520, P = 0.094) among FIGO tumor stage III patients. Furthermore, the combination of PNI and CA-125 also did not have significantly higher AUC value than either one alone.
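The kind of AUC comparison reported above can be sketched as follows; the patient data are synthetic, the logistic-regression combination of PNI and CA-125 is an assumption for illustration, and the statistical comparison of AUC values performed in MedCalc is not reproduced here.

```python
# Sketch of comparing the discriminative ability of PNI, CA-125 and their
# combination for predicting mortality, on synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
died = rng.integers(0, 2, n)                      # 1 = death, 0 = survival (synthetic)
pni = 50 - 6 * died + rng.normal(0, 5, n)         # lower PNI in patients who died
ca125 = 400 + 150 * died + rng.normal(0, 300, n)  # higher CA-125 in patients who died

# Lower PNI indicates higher risk, so its negative is used as the risk score.
auc_pni = roc_auc_score(died, -pni)
auc_ca125 = roc_auc_score(died, ca125)

# Combination modeled here as a logistic regression on both markers
X = np.column_stack([pni, ca125])
combo = LogisticRegression(max_iter=1000).fit(X, died)
auc_combo = roc_auc_score(died, combo.predict_proba(X)[:, 1])

print(f"AUC PNI = {auc_pni:.3f}, AUC CA-125 = {auc_ca125:.3f}, "
      f"AUC combined = {auc_combo:.3f}")
```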
Discussion
To date, no widely accepted nutrition- or inflammation-related factor has been found to indicate chemoresistance or prognosis in OC patients, especially according to tumor stage. Although the association between PNI and prognosis has been clarified in other cancers 18 , the impact of PNI on platinum resistance and clinical outcomes in OC, especially according to tumor stage, has not been clarified.
Laky et al. 19 showed that about 20% of newly diagnosed gynecologic cancer patients have malnutrition. More than 20% of cancer patients die from malnutrition rather than the cancer itself 20 . Due to the metabolic effects of tumor mass, malignant ascites, and small bowel obstruction, OC patients are more likely to present with malnutrition and cachexia 21 . Furthermore, the tumor is more prone to develop chemoresistance in malnourished OC patients 22 . Recently, Matassa et al. 23 also observed that oxidative metabolism drives inflammation-induced platinum resistance in OC. Lymphocytes were also reported to play a major role in immune responses by mediating the immunologic damage caused by various cancers 24 . As components of the PNI, both the albumin count and the lymphocyte count are closely related to inflammatory responses in cancer patients, which are independent predictors of long-term outcomes in OC 25,26 . According to the prognostic association between PNI and albumin and lymphocyte counts, it seems that PNI is a reflection of systemic inflammation, which may influence cancer growth and metastasis 17 . Thus, both inflammation-and malnutrition-related prognostic factors may induce chemotherapy resistance and predict the OS of OC patients.
Consistent with previous studies, our study demonstrated that FIGO tumor stage was an independent prognostic factor in OC patients 27 . To estimate the clinical outcomes of OC patients better, many inflammation- and malnutrition-based markers, such as BMI, LMR, mGPS, and CAR, have been investigated as potentially important prognostic and predictive factors in OC patients [28][29][30] . Similar to the study by Miao et al. 17 , our analysis demonstrated that the independent prognostic factor best predicting the OS of OC patients was PNI rather than BMI, CAR, LMR, or mGPS. The chi-square test determined that a PNI < 47.2 was not only associated with advanced FIGO tumor stage, maximum residual tumor, malignant ascites, platinum resistance, and lower LMR but also with higher CAR and mGPS. However, our study further showed that when patients were stratified by FIGO tumor stage, stage III patients showed the most significant association between PNI level and the outcome of the disease. Furthermore, ROC and AUC analyses showed that PNI was significantly superior to CA-125 in predicting mortality and platinum resistance in all-stage OC patients, but not in stage III cases. These results suggest that, as an easily available laboratory hematological marker, PNI is superior to other nutrition- and inflammation-related prognostic factors in predicting survival in OC patients, especially for FIGO tumor stage III patients. Furthermore, PNI may also predict the platinum-based chemotherapeutic response of all-stage OC patients. This study provides further support for the proposition that elevated preoperative PNI is associated with a good prognosis in OC patients. A study by Liu et al. 31 showed that the CAR had superior prognostic ability compared to other established inflammation-related prognostic indices, such as the PNI, mGPS, neutrophil/lymphocyte ratio (NLR), and platelet/lymphocyte ratio (PLR) in 200 OC patients. The reason for this difference may be that our study included LMR, BMI, and platinum resistance. In addition, the current study not only further assessed the correlation between PNI and tumor stage but also compared the predictive ability of CA-125, the PNI, and their combination with respect to OS and platinum resistance, according to ROC and AUC values. Nevertheless, both Liu et al. 31 and the present study were retrospective, single-center studies, and the number of patients was small in both. Therefore, more studies are needed to confirm these results. Furthermore, the mechanisms linking the PNI, poor prognosis and platinum resistance must be clarified.
Materials and Methods
Patients. In total, 237 newly diagnosed OC patients, treated with cytoreductive surgery and platinum-based chemotherapy between January 2007 and December 2015 at Nanfang Hospital of Southern Medical University, were identified. Pathological parameters, clinical data, and survival times were extracted from medical records. Patients who had active infection, coexisting hematologic malignancies, or other hematologic or autoimmune disorders were excluded. The primary endpoint of the study was PFS, which was calculated from the date of treatment to the date of recurrence or progression. OS was defined as the time from treatment to the date of death or last follow-up. All OC patients were followed up every 2-4 months for the first 2 years, and every 3-6 months thereafter until December 2016. At each visit, the patients were assessed by clinical and imaging examinations and serum CA-125 levels were measured. This study was approved by the medical ethics committee of Southern Medical University. All methods were performed in accordance with the relevant guidelines and regulations. Written informed consent was obtained from each patient. All of the following data were obtained from medical records: age, BMI, FIGO stage, massive ascites, surgery, residual tumor mass, tumor (histology, grade), chemosensitivity, and clinical characteristics (CA-125, CRP, albumin, lymphocyte, and monocyte levels). Based on previous studies, optimal debulking was defined as a maximum diameter of residual tumor after surgery of ≤2 cm 32,33 . Patients were defined as platinum-resistant if the disease progressed within 6 months after completing first-line platinum-based chemotherapy, while all other patients were defined as platinum-sensitive 34 . PNI was calculated according to the following formula: serum albumin (g/L) + 0.005 × lymphocyte count (per mm3) in the peripheral blood 11 . CAR was calculated as the CRP (mg/L)/albumin (g/L) ratio 35 . LMR was defined as the absolute lymphocyte count/absolute monocyte count ratio 36 . The mGPS encompassed both the CRP and albumin concentrations. Patients with both CRP > 10 mg/L and albumin < 35 g/L were allocated a score of 2. Patients with both CRP ≤ 10 mg/L and albumin ≥ 35 g/L were allocated a score of 0. Patients with only one of these abnormal levels were given a score of 1 12 . BMI, CAR, and LMR were categorized into two groups according to the cutoff values of ≥18.5 kg/m2, ≥0.5, and ≥3.82, respectively 9,37,38 .

Table 4. Comparison of the diagnostic performance in predicting mortality and chemoresistance. AUC, area under the receiver operating characteristic curve; PNI, prognostic nutritional index.
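A minimal sketch of the score definitions above (PNI, CAR, LMR and mGPS) follows; the formulas and cutoffs are taken from the text, while the laboratory values for the single hypothetical patient are invented.

```python
# Minimal sketch of the PNI, CAR, LMR and mGPS definitions given above,
# evaluated for one hypothetical patient.

def pni(albumin_g_l: float, lymphocytes_per_mm3: float) -> float:
    return albumin_g_l + 0.005 * lymphocytes_per_mm3

def car(crp_mg_l: float, albumin_g_l: float) -> float:
    return crp_mg_l / albumin_g_l

def lmr(lymphocytes: float, monocytes: float) -> float:
    return lymphocytes / monocytes

def mgps(crp_mg_l: float, albumin_g_l: float) -> int:
    # 2 = both abnormal, 0 = both normal, 1 = exactly one abnormal
    abnormal = int(crp_mg_l > 10) + int(albumin_g_l < 35)
    return 2 if abnormal == 2 else (0 if abnormal == 0 else 1)

# Hypothetical laboratory values for one patient
alb, lymph, mono, crp = 38.0, 1800.0, 500.0, 12.0
print(f"PNI = {pni(alb, lymph):.1f}, CAR = {car(crp, alb):.2f}, "
      f"LMR = {lmr(lymph, mono):.2f}, mGPS = {mgps(crp, alb)}")
```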
Statistical analysis. Statistical analyses were performed with SPSS software (ver. 20.0; IBM Corp., Armonk, NY, USA). Comparisons between categorical variables were performed using the chi-square test. The optimal cutoff value for PNI was determined via a web-based application, programmed in R by Budczies et al. (http://molpath.charite.de/cutoff/) 39 . Significant prognostic variables in univariate analyses were included in multivariate Cox regression models to determine independent prognostic factors, using a forward stepwise method. Differences in survival among classification groups were analyzed using Kaplan-Meier curves and log-rank tests. ROC curves were calculated for PNI and CA-125, alone and in combination. The AUC values were compared using MedCalc software (ver. 15.2.1; MedCalc Software bvba, Ostend, Belgium). A two-sided P value < 0.05 was considered statistically significant. | 2018-04-03T01:31:22.283Z | 2017-08-25T00:00:00.000 | {
"year": 2017,
"sha1": "350f13477db389aab2c443f30c9bc5c81ec08ed4",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-10328-8.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1139e842127999d6769747a0ccf8f3cbec726012",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8423457 | pes2o/s2orc | v3-fos-license | Potential Biomarkers of Colorectal Adenoma–Dysplasia–Carcinoma Progression: mRNA Expression Profiling and In Situ Protein Detection on TMAs Reveal 15 Sequentially Upregulated and 2 Downregulated Genes
Background: As most colorectal cancers (CRC) develop from villous adenomas, studying alterations in gene expression profiles across the colorectal adenoma–dysplasia–carcinoma sequence may yield potential biomarkers of disease progression. Methods: Total RNA was extracted, amplified, and biotinylated from colonic biopsies of 15 patients with CRC, 15 with villous adenoma and 8 normal controls. Gene expression profiles were evaluated using HGU133Plus2.0 microarrays and disease progression associated data were validated with RT-PCR. The potential biomarkers were also tested at the protein level using tissue microarray samples of 103 independent and 16 overlapping patients. Results: 17 genes were validated to show sequentially altered expression at mRNA level through the normal–adenoma–dysplasia–carcinoma progression. Prostaglandin-D2 receptor (PTGDR) and amnionless homolog (AMN) genes revealed gradually decreasing expression while the rest of 15 genes including osteonectin, osteopontin, collagen IV–alpha 1, biglycan, matrix GLAprotein, and von Willebrand factor demonstrated progressively increasing expression. Similar trends of expression were confirmed at protein level for PTGDR, AMN, osteopontin and osteonectin. Conclusion: Downregulated AMN and PTGDR and upregulated osteopontin and osteonectin were found as potential biomarkers of colorectal carcinogenesis and disease progression to be utilized for prospective biopsy screening both at mRNA and protein levels. Gene alterations identified here may also add to our understanding of CRC progression.
Introduction
Colorectal cancer (CRC) is one of the most frequent cancers in the world with a very high mortality rate even after surgical resection, radio- and chemotherapy [5]. It seems evident that CRC frequently follows the adenoma-dysplasia-carcinoma sequence. Microarray-based molecular analyses of malignancy in colon adenoma and CRC samples have been described using 36 [27], 32 [35,40], 10 [1,20], 9 [23,24], 4 [30], 2 [21] and 1 [28] adenoma samples compared to adenocarcinoma and normal colonic tissues. Consistent with the observation that APC mutation is an early event in colon carcinogenesis, MYC and claudin 1 transcripts displayed an increased expression in adenomas [20]. In general, expression of Wnt target genes was found elevated in adenomatous samples compared to normal tissue [35]. The genes markedly upregulated in adenomas compared to normal tissues were generally associated with pathways of mitosis, DNA replication and spindle organization. Downregulated genes were predominantly involved in host immune defense, inorganic anion transport, organ development, and inflammatory response [35]. Nosho et al. identified gene expression alterations involved in the adenoma-carcinoma sequence including the upregulation of insulin-like growth factor 2 (IGF2), IGF1, Ki-67 and the downregulation of p21, heat shock protein 90, and caspase-7 genes [28,29]. According to cDNA microarray and immunohistochemical analyses of Mori et al., increased JAK3 kinase, matrix metalloproteinase 13, heat shock protein 60 and MDM2 mRNA and protein expression correlates with the progression of CRC [26]. Several abnormalities (such as upregulated MGSA, BIGH3, matrilysin and downregulated guanylin, hevin) present in the carcinomas were already detectable in adenomas suggesting that these genes and their transcripts may play a role at a relatively early stage of colorectal carcinogenesis [20,30]. A number of genes discriminating carcinoma from adenoma were either relevant to hypoxia [24], or involved in apoptosis regulation and tumor suppression [21].
Genome-wide mRNA expression profiling studies using microarrays may have the potential to reveal the molecular background and support the diagnostic, prognostic and treatment decisions in human disorders including cancers. In the gastrointestinal tract, biopsy samples are routinely taken during the endoscopic examination with minimal intervention [11,12,14,35,40]. These are suitable for mRNA expression analysis for identifying potential biomarkers of early signs of malignant transformation and progression. Gene expression profiles of the whole colorectal normal-adenomadysplasia-carcinoma sequence using biopsy samples and whole genomic microarrays have not been analyzed yet.
The aim of this study was to identify differentially expressed genes associated with colorectal cancer development and progression.
Patients and samples
After obtaining informed consent of untreated patients -who received neither chemo-nor radiotherapy at the time of sample collection -colon biopsy samples were taken during endoscopic intervention at the 2nd Department of Internal Medicine, Semmelweis University, Budapest, Hungary. Altogether 327 tissue samples of 141 patients were analyzed in this study. Some of the datasets of the 38 patients' samples including 15 tubulovillous/villous adenomas, 15 CRC, and 8 healthy controls, hybridized to Affymetrix microarrays, were used and published in an earlier study investigating different aspects of colorectal disease and cancer [11] and are available in the Gene Expression Omnibus database (series accession number: GSE4183). Furthermore, formalin-fixed and paraffin-embedded tissue samples of 103 independent and 16 overlapping patients were also analyzed using tissue microarrays. The diagnostic groups and the number of patients are summarized in Table 1. Detailed patient specification is described in Supplementary Table 1. Tween 20), herring sperm DNA (0.1 mg/ml Promega, Wisconsin, US), acetylated BSA (0.5 mg/ml, Invitrogen), sensitivity controls (1.5 pM BioB, 5 pM BioC, 25 pM BioD and 100 pM Cre), and 50 pM B2 oligonucleotide orientation controls. The slides were washed and stained using Fluidics Station 450 and antibody amplification staining (using EukGE_Ws_2v4 fluidic protocol and 10 µg/ml streptavidin-phycoerythrin; Invitrogen/Molecular Probes, Carlsbad, US) according to the manufacturer's instructions (Affymetrix). The fluorescent signals were detected by a GeneChip Scanner 3000 (Affymetrix).
Statistical evaluation of mRNA expression profiles
Pre-processing and quality control. Quality control analyses were performed according to the suggestions of the Tumor Analysis Best Practices Working Group [39]. Scanned images were inspected for artifacts, and the percentage of present calls (>25%) and RNA degradation controls were evaluated. Based on the evaluation criteria, all biopsy measurements fulfilled the minimal quality requirements. RMA background correction, quantile normalization and median polish summarization were applied. The datasets for further analysis are available in the Gene Expression Omnibus databank (http://www.ncbi.nlm.nih.gov/geo/), series accession number: GSE4183.
Determination of genes involved in the adenoma-dysplasia-carcinoma sequence. Kendall's rank correlation analysis was performed to quantify the association between expression level and disease stage for both microarray and TaqMan quantitative RT-PCR data. The correlation of the two variables was described by Kendall's tau, whose value lies between −1 and 1; the closer tau is to either extreme, the stronger the correlation between the two variables. The p-value of tau under the null hypothesis of no association was computed, in the case of no ties, using an exact algorithm described by Best and Gipps [2]. The R environment was used for statistical analysis.
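The Kendall rank-correlation screen described above can be illustrated with the short sketch below; the disease-stage coding and expression values are invented, and scipy's kendalltau only stands in for the R implementation used in the study.

```python
# Hedged sketch of the Kendall rank-correlation screen for progression-associated
# genes. Expression values and stage codes are hypothetical, not study data.
from scipy.stats import kendalltau

# Disease stages coded 0..4: normal, low-grade adenoma, high-grade adenoma,
# early CRC, advanced CRC (two hypothetical samples per stage shown)
stage =      [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
expression = [1.2, 1.0, 1.8, 2.1, 2.9, 2.5, 3.6, 3.9, 4.8, 5.1]

tau, p_value = kendalltau(stage, expression)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.4g}")

# A gene would pass the screen described above if |tau| >= 0.4 and p is small enough.
if abs(tau) >= 0.4:
    print("Expression changes progressively with disease stage")
```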
Taqman quantitative RT-PCR
TaqMan quantitative real-time PCR was used to measure the expression of genes showing increasing or decreasing expression tendencies, using an Applied Biosystems Micro Fluidic Card System. Using the TaqMan Reverse Transcription Kit, 400 ng/sample of total RNA was reverse transcribed (Applied Biosystems, Foster City, US). The quality of the cDNA samples was checked by CK20/PBGD real-time PCR. The expression analysis of the selected genes was performed from 100 ng/sample cDNA template, using the TaqMan Low-Density Array for Gene Expression: Format 96a and TaqMan Universal PCR Master Mix. The measurements were performed using an ABI PRISM® 7900HT Sequence Detection System as described in the product's User Guide (http://www.appliedbiosystems.com). The SDS 2.2 software was used for data analysis. The extracted delta Ct values (which represent expression normalized to ribosomal 18S expression) were clustered according to the histological groups. Student's t-test was then performed to compare the expression values between groups.
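As an illustration of the delta-Ct comparison described above, the following sketch computes a fold change and a Student's t-test for one gene between two histological groups; all Ct values are hypothetical and the 2^(-delta-delta-Ct) style quantification is an assumption for demonstration.

```python
# Illustrative sketch of comparing delta-Ct values (Ct_target - Ct_18S) between
# two histological groups with Student's t-test; all values are hypothetical.
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical delta-Ct values for one gene (lower delta-Ct = higher expression)
dct_normal = np.array([14.2, 13.8, 14.5, 14.0])
dct_crc    = np.array([11.1, 10.8, 11.6, 11.3])

t_stat, p_value = ttest_ind(dct_normal, dct_crc)
fold_change = 2.0 ** (dct_normal.mean() - dct_crc.mean())  # relative quantification
print(f"t = {t_stat:.2f}, p = {p_value:.4g}, ~{fold_change:.1f}-fold higher in CRC")
```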
Tissue microarray analysis
Cores of 1 mm diameter were collected from selected areas of formalin-fixed, paraffin-embedded tissue blocks prepared from 37 colorectal adenoma, 44 dysplastic adenoma, 89 early CRC (stage Dukes B), 57 advanced CRC (stage Dukes C and D) and 53 normal colon samples of 119 patients and inserted into 4 recipient blocks taking 70 samples each. 5 µm thick tissue sections were cut from the blocks and immunostained. Protein products of the 17 significantly up- and downregulated genes (see later) were tested using commercially available antibodies. Four of them, which could be set up to work reliably in formol-paraffin sections following antigen retrieval, were chosen for systematic screening of the normal-adenoma-dysplasia-carcinoma sequence on TMAs. Slides were stained using rabbit anti-human antibodies for amnionless homolog (1:200 dilution, Atlas Antibodies, Stockholm, Sweden, code: HPA000817), prostaglandin D2 receptor (1:500 dilution, Chemicon, Temecula, US, code AB9255), osteopontin (1:2000 dilution, Chemicon, Temecula, US) and osteonectin antibody (1:1000 dilution, Chemicon, code AB1858) for 1 h at room temperature. Antibodies were detected by using the EnVision+ system (Dako, Glostrup, Denmark) followed by a DAB-hydrogen peroxidase chromogen-substrate kit (Dako). Immunostained TMA slides were digitalized using a high-resolution MIRAX DESK instrument (Zeiss, Gottingen, Germany), and analyzed with the MIRAX TMA Module software (Zeiss). Protein expression was evaluated using an empirical scale considering intensity and proportion of positive cytoplasmatic staining of epithelial/carcinoma cells. Scores were given for PTGDR: −2 for no staining; 0 for weak, 1 for moderate, 2 for strong diffuse immunostaining; and for AMN, osteopontin and osteonectin: −2 for no or weak staining, 0 for moderate apical cytoplasmatic, 1 for strong apical and weak basal cytoplasmatic, 2 for strong diffuse cytoplasmatic staining. Pearson's Chi-square test and Fisher's exact test were performed to reveal whether the staining difference between progression groups was significant (p < 0.05). Also, contingency tables and association plots were constructed from the two categorical variables (group and score) [25].
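The contingency-table testing described above can be sketched as follows; the cross-tabulated counts of immunostaining scores per diagnostic group are invented for illustration and do not reproduce the study's data.

```python
# Sketch of the group-versus-score contingency test described above: staining
# scores are cross-tabulated against diagnostic groups and tested with
# Pearson's chi-square. All counts below are invented for illustration.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: diagnostic groups (normal, adenoma, dysplastic adenoma, CRC)
# Columns: staining scores (-2, 0, 1, 2)
counts = np.array([
    [ 2,  5, 14, 32],   # normal
    [ 4, 10, 15,  8],   # adenoma
    [ 9, 18, 12,  5],   # adenoma with dysplasia
    [40, 30, 12,  7],   # CRC
])

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p_value:.3g}")
```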
mRNA expression microarray screening
Relative expression values were determined for each gene using the preprocessed microarray data in each disease group (normal, low-grade dysplastic adenoma, high-grade dysplastic adenoma, early stage CRC, advanced stage CRC). The association between gene expression level and disease group was demonstrated with the Kendall's rank correlation analysis using 0.4 as a cut-off value (when absolute value of tau was 0.4 or higher). Along the transition of colorectal adenomadysplasia-carcinoma sequence, the gradual downregulation of 382 genes (p < 0.002) and progressive overexpression of 918 genes (p < 0.002) were detected. The list of these genes with complete annotation, tau and p-values are shown in Suppl. Table 2 (http://www.gub.ac.uk/isco/JCO).
Several sequential marker genes were represented on the microarray by more than one probe set including some of the progressively differentially expressed genes with different tau and p values (such as MCAM, collagen type IV-alpha 1, biglycan, interleukin 8, TIMP3, calumenin, SERPINE1).
Taqman quantitative RT-PCR validation of differentially expressed genes
TaqMan quantitative real-time RT-PCR was performed to measure the expression changes of a selected set of 20 genes (2 continuously downregulated and 18 continuously overexpressed genes). Selection criteria for the genes were the significant progressive under- or overexpression in oligonucleotide microarray analysis in association with the normal-adenoma-dysplasia-carcinoma sequence and the availability of validated TaqMan probes. Although the tau value of VEGF was found to be lower than 0.4, it was also selected, because of its previously reported role as a CRC-related gene. The complete results of the TaqMan measurements are presented in Suppl. Table 3 (http://www.gub.ac.uk/isco/JCO). The mRNA expression of 17 of the 20 selected genes showed a significant association with the disease stage in both systems, based on the 0.4 or higher absolute value of the tau (p < 0.01).
Fifteen genes were identified showing significant and progressively increasing gene expression along with the adenoma-dysplasia-carcinoma sequence progression in both mRNA expression analyses (p < 0.05) (Table 2). The development and progression of colorectal cancer were characterized by the elevated expression of genes mainly involved in cell proliferation (vascular endothelial growth factor /VEGF/, CXCL1 chemokine ligand, tissue inhibitor of metal-
Protein expression and localization of selected markers in tissue microarray
Downregulated amnionless homolog and prostaglandin D2 receptor proteins were tested with immunohistochemistry in a TMA series of archived tissues. Strong diffuse epithelial cytoplasmatic PTGDR and AMN immunostaining was found in the epithelial cells in healthy colon tissue. In adenomas, the number of AMN and PTGDR positive cells gradually decreased near the luminal surface. During disease progression the intensity of immunostaining was further reduced in dysplastic epithelium followed by the appearance of only weak apical cytoplasmatic epithelial immunostaining in CRC samples ( Fig. 2A-F). Image quantification made on digital slides also confirmed the gradually reduced expression of these antigens along the disease progression (Fig. 3).
Moderate cytoplasmatic osteopontin and osteonectin staining was found in TMA samples at the apical cytoplasm of the epithelial cells in healthy colon tissue. In adenomas, the number of osteopontin and osteonectin positive cells gradually increased near the luminal surface accompanied with mild-moderate basal cytoplasmatic staining in some of them. The intensity of antigen expression was elevated further during disease progression in CRC samples (Fig. 2G-L).
Discussion
It is thought that colorectal cancer usually develops from villous adenomas, a transition associated with obvious phenotypic changes due to altered gene and concomitant protein expression [9,17,20,22,26,28,29,35]. Once revealed, these molecular changes may be exploited for clinical applications, e.g. for predicting potential disease progression in prospective biomarker testing. In this study, we analyzed the gene expression profile of the adenoma-dysplasia-carcinoma sequence using 38 colorectal biopsy specimens hybridized on the whole genomic HGU133 Plus 2.0 microarray system to identify disease-associated progression biomarkers that show continuous quantitative alterations, and the results were validated with RT-PCR and on tissue microarrays.
In CRC development, we found a series of potential progression markers at the mRNA level, 17 of which changed significantly and gradually through the normal-adenoma-dysplasia-carcinoma sequence. PTGDR and amnionless homolog mRNA were downregulated, while 15 others, including osteonectin, osteopontin, collagen IV-alpha 1, biglycan, matrix GLA protein and von Willebrand factor, were upregulated, which was also verified with RT-PCR. The progressive alterations in PTGDR and amnionless homolog expression, and those in osteopontin and osteonectin expression, were also verified at the in situ protein level, suggesting that altered levels of these biomarkers may be associated with colorectal cancer development and progression.
The whole genomic microarray analyses of biopsy samples in this study provided highly standard and reproducible results regarding the array sensitivity, present percentage and GAPDH 3'/5' ratio. From the array data, only those transcripts were considered for further analysis as potential progression biomarkers which could be validated with the real-time PCR method. The expression of selected CRC-associated genes was validated also at the protein level, using immunohistochemistry on a TMA collection of samples from 16 overlapping patients and an independent set of 103 patients, making up a total of 289 samples. Using digital slides and a dedicated software for scoring, the TMA technique allowed the efficient and standardized analysis of a large number of samples. Nowadays, antibodies recognizing a wide range of proteins in formalin-fixed paraffin-embedded tissues are available which offer good chances for further validating new biomarkers, such as those we found in this study, in the prospective diagnostic setting.

Figure legend: To measure the association of two variables (expression and disease stage), the Chi-square test statistic (the sum of the squared Pearson residuals) was used. When the difference was statistically significant (p < 0.05), more detailed analysis was visualized on the basis of the Pearson residuals. The darker the intensity of the blue columns, the stronger the statistical significance at the given immunostaining intensity score. The height of the blue columns represents the number of cases belonging to the given score. Immunostaining scores: in case of PTGDR: −2 = no staining; 0 = weak staining; 1 = moderate staining; 2 = strong diffuse epithelial cytoplasmatic immunostaining; in case of AMN: −2 = no or weak apical epithelial cytoplasmatic staining; 0 = moderate apical cytoplasmatic staining, 1 = strong apical and weak basal cytoplasmatic staining; 2 = strong diffuse epithelial cytoplasmatic immunostaining. AMN = amnionless homolog, PTGDR = prostaglandin D2 receptor, CRC-CD = colorectal cancer Dukes C or D stage, CRC-AB = colorectal cancer Dukes A or B stages, AD-hdg = adenoma with high-grade dysplasia, AD-ldg = adenoma with low-grade dysplasia.
Genes showing a progressively decreasing expression during colorectal carcinogenesis in this study have received relatively little attention so far. The prostaglandin D2 receptor (PTGDR) is a G-protein-coupled receptor that has been shown to function as a prostanoid DP receptor. The activation of this receptor plays an important role in allergic inflammation by modulating immune cell functions [6]. It is expressed in a series of tissues including several types of leukocytes, the vasculature, retina, nasal mucosa, lung and intestine [36]. PTGDR has been demonstrated to have anti-proliferative activity against human CRC cells in vitro [16]. PTGDR mRNA expression has been detected in normal colon but not in several neoplastic colorectal epithelial cell lines [16]. The amnionless homolog transmembrane protein has been hypothesized to modulate bone morphogenetic protein (BMP) receptor function by serving as an accessory co-receptor to either facilitate or hinder BMP binding [19]. It is known that the mouse AMN gene is expressed in the extraembryonic visceral endodermal layer during gastrulation but it was found to be mutated in the amnionless mouse [19]. The function of AMN is otherwise unknown; however, it is highly expressed in cubilin-expressing tissues, including the kidney, intestine, and mouse visceral yolk sac [10,19]. This is the first study to show a significant association of the gradually decreasing AMN and prostaglandin D2 receptor mRNA and protein expression with the colorectal adenoma-dysplasia-carcinoma sequence.
The protein expression of two genes showing progressively increasing mRNA levels during colorectal carcinogenesis were also validated. Osteopontin has been shown to bind to cells via integrins as well as CD44 [37]. Although the biological functions of osteopontin are not fully understood, it has been implicated in malignancy, immune function, and vascular remodeling, as well as in bone remodeling [31,33,37]. The matricellular glycoprotein SPARC (osteonectin) has been assigned a major role in the regulation of cell adhesion and proliferation, as well as tumorigenesis and metastasis. SPARC transcripts were found to be overexpressed in primary CRCs and their liver metastases compared to non-neoplastic mucosa using Northern blot analysis and in situ hybridization [34]. However, this is the first study to show a significant association of the progressively increasing osteonectin mRNA and protein expression with the colorectal adenomadysplasia-carcinoma sequence.
In line with published findings comparing normal and CRC samples, we also found CXCL1, osteopontin, osteonectin, collagen type IV alpha 1, biglycan, interleukin 8, thrombospondin 2, VEGF, von Willebrand factor and TIMP1 overexpressed in CRC [1,3,7,8,15,18,30,32,34,38,42]. However, most studies compared expression features only pairwised and not in the full sequence of disease progression. Only two papers tested the adenoma-dysplasia-carcinoma sequence specific gene expression alterations [1,21]. One of them, also showed the expression of osteopontin, osteonectin, biglycan, TIMP1, −3 and CXCL1 genes to progressively increasing with CRC stage [1]. As far as we are aware, the rest of the markers we identified and validated in this study including amnionless homolog, prostaglandin D2 receptor, collagen IV-alpha 1, matrix GLAprotein, and von Willebrand factor have not been published in relation to the sequence of colorectal adenoma-dysplasia-carcinoma progression.
By testing biopsy and surgical specimens, Sabates-Bellver et al. and van der Flier et al. found overexpression of several target genes when using the same microarray platform as we did [35,40]. However, they compared normal vs. adenoma and normal vs. carcinoma samples separately and used disease-adjacent normal tissue as control, where gene expression might be influenced by the diseased tissue, as opposed to our approach of including normal healthy control samples. Moreover, in most studies except that published by Sabates-Bellver et al. [35] the degree of dysplasia was not determined within the adenoma group and mixed tubulovillous and tubular adenomas were included together [24,27,28]. Unlike this practice, we used independent tubulovillous/villous adenoma samples with a determined degree of dysplasia to create low- and high-grade dysplasia groups.
Though histopathology is still the gold standard for differentiating stages of colorectal disease progression, the cut slide may represent only a fragment of the whole sample and disease features, such as dysplastic areas of flat adenomas, may remain hidden due to inappropriate sampling or tissue orientation. Following histological examination, a whole sample-based expression screening for biomarkers of disease progression with RT-PCR based real-time assay may improve diagnostic accuracy. Testing for alterations in biomarker expression at the protein level with prospective immunohistochemistry, may also have a great potential in routine diagnostic applications.
In summary, in this study we searched for potential biomarkers of colorectal adenoma-dysplasia-carcinoma progression using mRNA expression and tissue microarrays in biopsy specimens. Fifteen genes were identified and validated at the mRNA level showing a significantly and progressively increasing expression during the normal-adenoma-dysplasia-carcinoma sequence progression (p < 0.01), including osteonectin, osteopontin, collagen IV-alpha 1, biglycan, matrix GLA protein, and von Willebrand factor. Validated genes of significantly decreasing expression were prostaglandin D2 receptor and amnionless homolog (p < 0.01). In line with these, amnionless homolog and prostaglandin D2 receptor protein expression showed a strong negative correlation, while osteopontin and osteonectin protein expression showed a strong positive correlation with the colorectal normal-adenoma-dysplasia-carcinoma sequence, in situ on TMAs. Our findings suggest that these factors may be used as biomarkers for predicting colorectal cancer progression, which needs further validation in a prospective diagnostic setting. Considering the positive regulatory roles of osteopontin in cell invasion and vascular remodeling and its immunological effects (it is a key cytokine regulating tissue repair and inflammation), and of osteonectin in the regulation of cell adhesion and proliferation, these proteins might be candidates for targeted anticancer treatment. Investigating the regulatory pathways of AMN and PTGDR may also reveal novel molecular targets for the same purpose. | 2018-04-03T00:00:37.101Z | 2008-12-18T00:00:00.000 | {
"year": 2008,
"sha1": "b14020395a084bfaad74226f9b072ed8a495c574",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "734a4f2f391b00c805dca0520aff84af8d158062",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
5610352 | pes2o/s2orc | v3-fos-license | Proteome-wide survey of phosphorylation patterns affected by nuclear DNA polymorphisms in Arabidopsis thaliana
Background Protein phosphorylation is an important post-translational modification influencing many aspects of dynamic cellular behavior. Site-specific phosphorylation of the amino acid residues serine, threonine, and tyrosine can have profound effects on protein structure, activity, stability, and interaction with other biomolecules. Phosphorylation sites can be affected in diverse ways in members of any species; one such way is through single nucleotide polymorphisms (SNPs). The availability of large numbers of experimentally identified phosphorylation sites, and of natural variation datasets in Arabidopsis thaliana, prompted us to analyze the effect of non-synonymous SNPs (nsSNPs) on phosphorylation sites. Results From the analyses of 7,178 experimentally identified phosphorylation sites we found that: (i) Proteins with multiple phosphorylation sites occur more often than expected by chance. (ii) Phosphorylation hotspots show a preference to be located outside conserved domains. (iii) nsSNPs affected experimental phosphorylation sites as much as the corresponding non-phosphorylated amino acid residues. (iv) Losses of experimental phosphorylation sites by nsSNPs were identified in 86 A. thaliana proteins, among which receptor proteins were overrepresented. These results were confirmed by similar analyses of predicted phosphorylation sites in A. thaliana. In addition, predicted threonine phosphorylation sites showed a significant enrichment of nsSNPs towards asparagine and a significant depletion of the synonymous substitution. Proteins in which predicted phosphorylation sites were affected by nsSNPs (loss and gain) were determined to be mainly receptor proteins, stress response proteins and proteins involved in nucleotide and protein binding. Proteins involved in metabolism, catalytic activity and biosynthesis were less affected. Conclusions We analyzed more than 7,100 experimentally identified phosphorylation sites in almost 4,300 protein-coding loci in silico, thus constituting the largest phosphoproteomics dataset for A. thaliana available to date. Our findings suggest a relatively high variability in the presence or absence of phosphorylation sites between different natural accessions in receptor and other proteins involved in signal transduction. Elucidating the effect of phosphorylation sites affected by nsSNPs on adaptive responses represents an exciting research goal for the future.
have profound effects on protein structure, activity, stability, subcellular localization and interaction with other biomolecules [4], and it can create binding sites for specific modular domains [9]. Interestingly, in the flowering plant Arabidopsis thaliana, the percentage of genes predicted to encode protein kinases (3% of the predicted proteome) is about twice as high as in mammals [10,11]. Protein phosphorylation events have been found to be connected with the plant's response to diverse intrinsic and extrinsic factors, such as light, invasion of pathogens, hormones, temperature stress, and nutrient starvation [6,12,13].
Recent progress in mass spectrometry (MS)-based technologies and phosphopeptide enrichment methods has made it possible to map in vivo phosphorylation sites for a wide variety of organisms in a high-throughput manner [12][13][14]. This progress has prompted the creation of dedicated web resources in the plant field, such as PhosPhAt [15] and P3DB [16]. The availability of experimentally verified A. thaliana phosphorylation sites now enables in silico analyses of different phosphorylation site patterns on a proteome-wide scale. Recently, the conservation of protein phosphorylation sites within selected gene families could be shown in different plant species [7,17]. In non-plant species, where the databases of phosphorylation sites are much more comprehensive than in A. thaliana, the in silico analysis of phosphoproteomic data has already produced interesting insights into evolutionary features of protein phosphorylation [18][19][20][21].
A loss of a single phosphorylation site by a non-synonymous (ns) single nucleotide polymorphism (SNP) that mutates the amino acids S, T, or Y at a phosphorylation site into any other amino acid can have profound effects on the molecular properties of the corresponding protein. In particular, the disruption of phosphorylation sites by such non-synonymous mutations can be associated with human diseases such as cancer. For example, the phosphorylation of T286 in wildtype cyclin D1 by the kinase GSK3B initiates its nuclear export and subsequent degradation in the cytoplasm [22]. The authors suggested that the loss of this phosphorylation site by a somatic mutation is involved in causing nuclear accumulation of cyclin D1 in esophageal cancer and a generally increased oncogenic potential.
In A. thaliana, genomic DNA polymorphisms have been studied extensively during the last few years, initially using gene expression microarrays in order to identify single-feature polymorphisms (SFPs) [23]. SNPs identified in several inbred accessions, in particular, can be used to study the effects of these polymorphisms on other genome-wide features, such as phosphorylation sites.
Given the importance of protein phosphorylation, we were interested to study how phosphorylation sites and their patterns are affected by natural variation, namely nsSNPs in A. thaliana. In this study we analyzed the distribution of all phosphorylation sites taken from the recent version of PhosPhAt (version 3.0) [15] in the proteome of A. thaliana and related their positions to nsSNPs, thus identifying losses of phosphorylation sites. For that purpose, we made use of A. thaliana SNPs identified recently by applying re-sequencing arrays [24,25] and by re-sequencing with ultra-deep sequencing technologies such as Illumina/Solexa [26]. Our aim was to analyze how the A. thaliana phosphorylation sites and their patterns are influenced by these nsSNPs.
Because the current dataset of experimental phosphorylation sites in A. thaliana is far from covering the entire proteome, the results obtained from the experimental phosphorylation site dataset were contrasted with results produced by similar analyses of predicted phosphorylation sites in A. thaliana to attain more global hypotheses on protein phosphorylation patterns and the influence of nsSNPs on them.
We were able to confirm that the majority of phosphorylation sites (71%) occur outside conserved protein domains, as noted previously [34]. Specifically, pS and pT occurred within domains in only 22.4% and 36.8% of the cases. However, phosphorylated tyrosines were located inside protein domains in 49.8% of the cases. Similar behavior was observed for the set of predicted phosphosites (data not shown).
The set of predicted phosphorylation sites used in this study was also taken from PhosPhAt (version 3.0) [15]. It comprised 75,296 high-confidence phosphopeptides (score ≥1; see Methods), identified in 21,711 protein-coding loci. The relative frequency of phosphorylated S, T, Y residues in this set of predicted phosphorylation sites was 37.0% pS, 42.5% pT and 20.5% pY.
Based on our sets of experimentally identified as well as high-confidence predicted phosphorylation sites (score ≥1), we looked for under- and overrepresented plant GO Slim terms among different sets of phosphoproteins: i) all proteins containing at least one pS; ii) all proteins containing at least one pT; iii) all proteins containing at least one pY; iv) all proteins containing at least one pS or at least one pT (p[ST]); v) all proteins containing at least one phosphorylated residue (p[STY]) (Additional file 1 for experimental phosphorylation sites; Additional file 2 for predicted phosphorylation sites). A comparison of the GO annotation of the p[STY]-protein sets based on experimental and predicted sites is shown in the Figure included in Additional file 3. In both the experimental and the predicted datasets, the terms "catalytic activity", "kinase activity", "transferase activity", and "protein modification process" were overrepresented among proteins containing p[STY] sites. Also, several stress-related terms were found to be overrepresented in both. In experimental as well as predicted p[STY]-protein sets, proteins with functions in "translation", "RNA binding", and "biosynthetic process" were underrepresented. However, the plant GO Slim terms "transcription factor activity" and "transcription regulator activity" were overrepresented in the predicted p[STY]-protein set, while underrepresented in the experimental set.
For both experimental and predicted phosphorylation sites, we then computed the agreement of GO Slim terms between protein sets representing different sites or combinations of sites. We evaluated the significance (corrected p-value, FDR ≤5E-2) of the correlation between pairs of these datasets (pS-pT, pS-pY, pS-p[ST], and the remaining pairwise combinations). For each dataset in a pair we registered whether a given GO Slim term was present or absent, and then compared the resulting binary series using Pearson's correlation coefficient. None of the compared profiles were correlated in the experimental dataset, but in the dataset of predicted phosphoproteins we found a moderate correlation of overrepresented plant GO Slim term profiles between pS-p[ST] and pY-pT, with correlation coefficients of 0.39 and 0.38, respectively.
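To make this comparison concrete, the following minimal Python sketch (not the authors' code; the GO Slim terms and their set memberships are invented for illustration) correlates two binary presence/absence profiles in the same way:

from scipy.stats import pearsonr

def term_profile(significant_terms, all_terms):
    # 1 if the GO Slim term was significant for the dataset, 0 otherwise
    return [1 if t in significant_terms else 0 for t in all_terms]

# hypothetical GO Slim terms and per-dataset results, for illustration only
all_terms = ["kinase activity", "catalytic activity", "RNA binding", "translation"]
pS_terms = {"kinase activity", "catalytic activity"}
pT_terms = {"kinase activity", "translation"}

r, p = pearsonr(term_profile(pS_terms, all_terms), term_profile(pT_terms, all_terms))
print(f"Pearson r = {r:.2f}, p-value = {p:.3f}")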
Distribution of phosphorylation sites across proteins
Most of the A. thaliana phosphoproteins contained only a few experimental phosphorylation sites, whereas a few proteins were phosphorylation hubs with a large number of phosphorylation sites. Notably, the distribution of the number of phosphosites per protein has a long tail, i.e., the proportion of proteins with a large number of phosphorylation sites is higher than expected by chance alone. This is especially evident for the predicted phosphosites. Additionally, proteins with a single phosphorylation site appear more often than expected, for both the experimental sites and the predicted sites, highlighting the physiological importance of phosphorylation (Figure 1; Additional file 4). Proteins with a large number of phosphorylation sites are of special interest as potential sites of integration (hubs) in regulatory pathways. Table 1 lists the 31 proteins with nine or more experimentally determined phosphorylation sites. One third of these proteins are involved in metabolic processes associated with nucleic acids (Gene Ontology term GO: 6139; from the biological process ontology), especially with RNA splicing (nine out of the ten proteins). Three of those nine proteins were also identified to include hotspots of experimental phosphorylation sites for a window size of 10 amino acids (see below and Table 2), namely AT5G52040.1, AT5G64200.1, and AT3G55460.1. The protein with the most phosphorylation sites was ATRSP41, an arginine/serine-rich splicing factor 41 (AT5G52040.1), which had 22 experimentally verified phosphorylation sites. In a human study, the serine/arginine repetitive matrix protein 2 (SRRM2), with as many as 142 pS sites, tops the list of human proteins with the highest number of pS sites [18]. With the progressing identification of A. thaliana phosphorylation sites in future studies and under various environmental conditions, we expect that some A. thaliana proteins will also be found to bear even more phosphorylation sites than the 22 sites found in ATRSP41.
Phosphorylation hotspots within proteins
For proteins with many experimentally identified phosphorylation sites, we analyzed the location of those sites within the protein, to identify potential hotspots of phosphorylation.
Based on the experimental sites, we determined several hotspot-containing proteins using window sizes of 5, 10, 15, and 20 amino acids (Additional file 5). We then applied the phosphorylation hotspot analysis to the predicted phosphorylation site dataset to get a more global view on potential phosphorylation hotspots in the A. thaliana proteome. For predicted sites as well, we were able to determine several phosphorylation hotspots in proteins for window sizes of 10, 20, 30 and 40 amino acid residues (Additional file 6 and 7).
Using experimental phosphorylation sites, and for a window size of ten amino acids, 43 potential phosphorylation hotspots in 29 different proteins were identified ( Table 2). Figure 2 shows three hotspots with the highest number of experimental phosphorylation sites (six or five pS sites) in a window of 10 amino acids. These hotspots were identified in a protein of unknown function (AT4G07523.1), in a reticulon family protein (AT2G46170.1), and in a protein kinase (AT1G53165.1) based on TAIR7 annotation.
To identify over- or underrepresented biological functions among the proteins containing at least one phosphorylation hotspot that consists of experimental phosphorylation sites, these proteins were tested against a reference set which contained all proteins with at least one experimental phosphorylation site. GO enrichment analyses revealed that the GO Slim term "nucleoplasm" (GO: 5654, from the cellular component ontology) was significantly overrepresented (p-value: 1.52E-3 for window size 10). This was confirmed by the proteins with hotspots consisting of predicted phosphorylation sites (p-value: 4.60E-4 for window size 20). Interestingly, among the proteins with predicted phosphorylation hotspots, "catalytic activity" (GO: 3824) was significantly underrepresented (p-value: 2.40E-3 for window size 40).
Effect of SNPs on proteins
The mapping of SNPs onto coding sequences allowed us to evaluate their effects on the coded amino acids, and thus the expressed proteins. There were 12,285,899 amino acid residues in the non-redundant set of the protein models represented in the reference genome. In total, 314,705 amino acids and 594 stop codons were affected by SNPs (2.56% of all amino acid residues), either in a synonymous (160,634; 1.31%) or non-synonymous (155,311; 1.26%) way.
As expected, there is a moderate positive correlation between the proportion of synonymous substitutions (compared to all substitutions) for each amino acid and the number of codons encoding the respective amino acid (Pearson's correlation coefficient: 0.74; p-value: << 5E-2). In order to account for amino acid abundances throughout the proteome, we performed a 2-way contingency table analysis on the number of non-synonymous substitutions affecting each amino acid and the total number of a given amino acid in the whole non-redundant proteome ( Figure 3A, Additional file 8). The strong underrepresentation of L, F, G, W and Y residues being affected by substitutions suggests a stronger global functional or structural constraint on these amino acids. Most SNPs caused synonymous substitutions ( Figure 3B).
Interestingly, S and T, two phosphorylatable amino acids, are more likely to be affected by nsSNPs than expected. By contrast, Y residues were 3 to 5 times less frequently affected by SNPs than S or T residues (data not shown).
Effects of SNPs on phosphorylation sites
We tested whether there was any association between substitutions caused by SNPs affecting experimental phosphorylation sites compared to non-phosphorylation [STY] sites, but we did not observe any overall trend (Fisher's exact test p-value: >> 5E-2). We then looked at the effect of each individual amino acid substitution (Figure 4). We found that amino acid substitutions rarely occur when they require two or three nucleotide substitutions (simultaneous changes of 2 or 3 bases in the underlying codon, i.e., substitution cost). The only observed cases were for S to D, S to K, T to Y, and Y to E, although even in these cases, the substitution only occurred in non-phosphorylated S, T or Y residues. We also found that in all cases substitutions were either more frequent in phosphorylated sites than in non-phosphorylated sites or vice versa, never equally affecting both types of sites.

Figure 1 (caption). To compute the expected distribution of phosphorylation sites per protein (black circles) we assumed that every possible STY-site becomes phosphorylated with a constant probability p, which is independent of the number of STY sites per protein and was obtained by dividing the total number of pSTY-sites by the total number of STY positions across all proteins in the data set, p = total number_pSTY / total number_STY. With p available, the expected number of phosphorylation sites per protein was computed as E(pSTY)_x = p × number of STY in protein x. The observed distribution of phosphorylation sites per protein appears as red circles (A: experimental phosphorylation sites; B: high-confidence predicted phosphorylation sites).
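The expectation described in the Figure 1 caption can be reproduced in a few lines of Python; the protein identifiers and counts below are placeholders rather than values from the study:

# observed STY counts and phosphosite counts per protein (illustrative values only)
proteins = {
    "protein_A": (120, 22),   # (number of S/T/Y residues, observed phosphosites)
    "protein_B": (45, 1),
    "protein_C": (60, 5),
}

total_sty = sum(n_sty for n_sty, _ in proteins.values())
total_psty = sum(n_p for _, n_p in proteins.values())
p = total_psty / total_sty    # constant per-residue phosphorylation probability

for name, (n_sty, n_obs) in proteins.items():
    expected = p * n_sty      # E(pSTY)_x = p * number of STY in protein x
    print(f"{name}: observed {n_obs}, expected {expected:.1f}")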
In the case of the predicted phosphorylation sites, we found for threonine a significant enrichment of non-synonymous substitutions in phosphorylated compared to non-phosphorylated threonine residues (data not shown). Analyzing the effect of each individual substitution in the predicted dataset, we observed several amino acid substitutions with a substitution cost of two that occurred in phosphorylation sites (S to D, S to K, T to E, T to Q, T to V, Y to E, Y to L and Y to T; Additional files 9 and 10). Regarding the enrichment and depletion of individual substitutions between phosphorylated and non-phosphorylated sites, we found that all substitutions were as likely to occur in predicted phosphorylation sites as in non-phosphorylation sites, except for the substitution T to N, which was more likely to occur in predicted phosphorylation sites than in non-phosphorylation sites, and the synonymous substitution T to T, which was more likely in non-phosphorylation sites. In that respect, it is important to note that asparagine (N) has been shown to mimic a phosphorylated serine in a mutant of the A. thaliana K+ channel AKT2 (AT4G22200.1). Asparagine is an uncharged amino acid, but it is larger than both S and T. Mimicking of the phosphorylation site could arise from steric effects, as suggested by Michard et al. [40]. Thus, a substitution T to N might generate a constitutive phosphorylation site.
Losses of experimental phosphorylation sites
Among the experimental phosphorylation sites, we identified 86 sites (in 86 proteins) that were lost in at least one of the A. thaliana accessions by an exchange of a phosphorylation target amino acid S, T, or Y for any other amino acid (Additional file 11). For four proteins (AT1G01550.1, AT1G44800.1, AT1G62330.1, AT5G38310.1), the phosphorylated S was substituted by more than one other amino acid in the accessions analyzed. For example, a pS (position 341) of the nodulin MtN21 family protein (AT1G44800.1) was exchanged for T in most of the accessions, with the only exception of Bur-0, where an N was introduced by the nsSNP.
Interestingly, we found that among these 86 proteins, the GO Slim category "receptor activity" was significantly overrepresented, but there were no underrepresented categories. Using a reference set comprising all A. thaliana proteins with at least one phosphorylation site, no over- or underrepresented categories could be identified.
Gains and losses of predicted phosphorylation sites
Using the dataset of predicted high-confidence phosphorylation sites for a more global analysis, we found 1,114 proteins in which predicted phosphorylation sites could potentially be lost in at least one of the A. thaliana accessions studied (Additional file 12). In 1,103 cases, the SNP caused an amino acid substitution to an amino acid that cannot be phosphorylated, while in the remaining 11 proteins, the score of the phosphorylation prediction changed from a high-confidence positive score (score ≥1) to a value smaller than or equal to -1, regardless of the type of putative phosphorylated amino acid at the given position in the neighborhood of the affected phosphorylation site. A prediction value of score ≤ -1 is taken as a high-confidence prediction of the amino acid not being phosphorylated.

Figure 4 (caption). Effect of SNPs, comparison between experimental phosphorylation sites and non-phosphorylation sites. We evaluated the enrichment and depletion of each substitution pair from an experimentally identified phosphorylation site to any other amino acid by using 2-way contingency tables for each pair and evaluating the significance of an odds ratio different from 1 (corrected p-value, FDR ≤ 5E-2) with a Fisher's exact test. All ratios are statistically indistinguishable from 1 (log odds = 0). Only experimentally verified phosphoproteins were included in this analysis. Substitutions of amino acids shown in bold were never found, neither in phosphorylation sites nor in non-phosphorylation sites. Substitution amino acids shown in red were present among non-phosphorylation sites, but absent among phosphorylation sites. The substitution cost is the minimal number of DNA substitutions that is required in order to change one amino acid into another (see Additional file 9).
In contrast, we observed that 1,148 proteins gained a predicted phosphorylation site (Additional file 12). The majority of gained phosphorylation sites (1,136) emerged by the change of a non-phosphorylatable amino acid to an S, T, or Y residue and a resulting prediction score ≥1 of the newly generated putative phosphorylation site. In the remaining 12 cases, the gain of the phosphorylation site was based on the score alone: the prediction score increased from a negative value (score ≤ -1) to a score greater than or equal to 1 due to nsSNPs that resulted in amino acid exchanges in the neighborhood of S, T, or Y residues, thus creating a new phosphorylation site target motif.
We found that the GO categories "receptor activity", "binding" and "signal transducer activity" were significantly overrepresented in all three datasets regardless of the reference set used (see Methods; Additional file 13). Additionally, in proteins which gained a phosphorylation site, and in all proteins with a gain or a loss, the category "response to stress" was overrepresented, regardless of the reference set used (Additional file 13). In the dataset including gain and loss of phosphorylation sites, the categories "transporter activity", "catalytic activity", "metabolic process", "biosynthetic process" and "cell" were underrepresented using the set of predicted phosphoproteins as reference set (Additional file 12). The figure in Additional file 14 presents all over-and underrepresented GO Slim categories in the protein set with losses and gains of predicted phosphorylation sites.
Discussion
With regard to the isolation of phosphorylated proteins from complex samples, as well as their mass spectrometric and computational analysis, the progress that has been made in A. thaliana is quite impressive. However, the connection of protein phosphorylation with genetic variation in naturally occurring ecotypes at a proteome-wide level has not been addressed in plant species so far. In this study we made use of experimental and predicted phosphorylation sites available in PhosPhAt [15] and mapped the data onto the A. thaliana genome annotation. This is currently the largest phosphorylation site dataset in A. thaliana, comprising 7,178 experimentally verified unique phosphorylation sites assigned to 4,252 protein-coding loci. The combined dataset represents phosphorylation sites identified from different tissues/cell types representing varied developmental stages and responses.
Since the assigned proteins cover only one-sixth of the predicted A. thaliana protein-coding loci, we extended our study by also including predicted phosphorylation sites to achieve better proteome coverage in our analyses, attaining a coverage of 80% of all protein-coding loci in A. thaliana.
The relative frequency of 70.7% pS, 20.7% pT, and 8.6% pY in our experimental set was similar to distributions reported previously in A. thaliana ([34]; 85.0% pS, 10.7% pT, and 4.3% pY) and in humans [41,42]. In the predicted dataset, the fraction of tyrosine residues was much higher (25.0%) compared to the experimental set (8.6%). This discrepancy may result from a bias in the training set of experimental phosphopeptides that has been used for the phosphorylation site predictor or from a lack of accuracy of the predictor. In addition, we cannot exclude that the experimentally identified phosphorylation sites may in general be biased by experimental restrictions such as specific phosphopeptide enrichment methods [43] or by a focus on specific stress conditions or subcellular compartments. Thus, this distribution may still shift by including new phosphorylation sites from future studies. However, since in general experimental and predicted data show similar distribution among and within the proteins, the inclusion of predicted data in the scope of this study is justified.
Multisite phosphorylation in proteins has been discussed as points of integration of different signal transduction pathways [1,44]. However, a major difficulty in studying multisite phosphorylation from a system's perspective has been the uncertainty whether simultaneous or successive phosphorylation occurred at the different sites on the same protein and whether the multiple sites were phosphorylated by the same or different protein kinases. Dependencies among phosphorylation sites in a single protein can be intricate [45], and the position and the number of phosphorylated residues can affect the biological outcome [46].
In the experimental dataset, we observed that the identified number of phosphorylation sites per protein was largely in agreement with the expected values, except for proteins with many phosphorylation sites and those with a single phosphorylation site. This observation was more noticeable for the predicted dataset. This suggests that there are at least three discrete regions in the distribution: i) a region of overrepresentation of single phosphorylation sites, ii) a region where the number of phosphorylation sites is proportional to the total number of S, T and Y, i.e., the more sites can be phosphorylated, the more will be, and proportionally so, and iii) a region where phosphorylation sites appear more often than expected given the total number of S, T, Y. Similar behavior was previously suggested to result from a rich-gets-richer process for the accumulation of phosphorylation sites [18]; however, the low coverage of the experimental dataset in this study does not allow us to arrive at such a conclusion. Nevertheless, the data in Figure 1 strongly suggest that once a single phosphorylation event has happened in a protein, further phosphorylations will accumulate depending only on the abundance of phosphorylatable residues, until reaching a threshold after which further phosphorylations will happen even more rapidly than dictated by the abundance of phosphorylatable residues (longer tail).
Our finding that the phosphorylation hotspots occur preferentially outside conserved domains suggests that, indeed, they may serve as sites of signal integration as they are outside of regions such as catalytic domains or protein-protein interaction domains. Multisite phosphorylation occurring outside structured regions was shown for single nuclear proteins in several studies. This is the case for the human protein Ets-1. Multiple Ca 2+ -dependent phosphorylation sites in an unstructured flexible region of this transcription factor act additively to produce graded DNA binding affinity [47]. Our result that nucleus-related GO terms are overrepresented in hotspot-containing proteins is in line with studies that indicate a central involvement of many nuclear proteins as integration hubs for phosphorylation-dependent signaling [48].
In an MS-based phosphoproteomics study in A. thaliana, it was suggested that the mRNA splicing machinery is a major target of protein phosphorylation [29]. Our results support this hypothesis. One third (10/31) of the proteins with nine or more experimentally determined phosphorylation sites are involved in metabolic processes associated with nucleic acids, especially with "RNA splicing" (Table 1).
Disease resistance genes, S-locus proteins and receptors had previously been shown to display a high variability between different wild varieties of A. thaliana [23]. In combination with our findings this indicates that receptors belong to the more variable proteins in A. thaliana accessions, and gains or losses of phosphorylation sites in rapidly evolving and variable regions of receptors could facilitate the evolution of kinase-signaling circuits [49]. The importance of specific phosphorylation sites in A. thaliana receptor proteins for receptor dimer formation and activation of signaling events was concluded from experiments on the BRI1/BAK1 receptors [50]. Similarly, in different human receptor proteins the importance of their site-specific phosphorylations could be demonstrated, for example in receptor tyrosine kinases of the ErbB family [51].
The potential contribution of the identified losses and gains of the phosphorylation sites by nsSNPs to adaptive responses of the various natural accessions will be an interesting field to be analyzed in future association studies.
Conclusions
By mapping nsSNPs onto phosphorylation sites, we identified losses and gains of phosphorylation sites, which can be important in adaptive responses of the natural accessions in their different environments. Receptor proteins in particular were affected by losses of experimental phosphorylation sites. Based on the observed gains and losses of predicted phosphorylation sites, it can be expected that, beyond receptor proteins, other proteins involved in signaling and stress response are also affected by such changes, whereas proteins involved in metabolism, catalytic activity and biosynthesis are less affected. These findings suggest a relatively high variability of signal transduction-related proteins and receptors and more conserved regulation in metabolism. Since receptors and signaling processes are primarily involved in recognition and response to environmental cues, the overrepresentation of phosphorylation sites (gain or loss) in these functional classes indeed supports the view of nsSNPs as evolutionary means of adaptation.
Genome annotation and identification of protein domains
The A. thaliana Columbia-0 genome was sequenced in the year 2000 [52]. In this study, we used the genome annotation provided by The Arabidopsis Information Resource, release 7.0 (TAIR7) [53]. Protein domains were identified using the Pfam v23 library of Hidden Markov Models [54]. 2,902 Pfam HMMs have significant hits in 19,904 protein-coding loci (23,760 protein models) in TAIR7.
Predicted phosphorylation sites
The predictions of phosphorylation of S, T, and Y were extracted from PhosPhAt. PhosPhAt uses a Support Vector Machine (SVM) in order to reliably predict phosphorylation sites in A. thaliana. The SVM has been trained on experimentally verified phosphorylation sites from A. thaliana with the goal of specifically capturing the properties of plant phosphorylation sites and was shown to predict plant phosphorylation sites with a considerably better performance than other available predictors usually trained on non-plant species [15].
SNP datasets
Three datasets of SNPs, with polymorphisms detected in the A. thaliana accessions listed in Additional file 15, were used in this study. The first large-scale SNP study was published in 2005; we refer to it here as the Nordborg2005 dataset. In this study, 20,667 non-redundant SNPs were identified in 96 accessions [25]. Clark et al. used re-sequencing arrays to identify 1,126,176 non-redundant SNPs in 20 accessions; 637,522 non-redundant SNPs belong to the high-confidence set and were kept for further analysis (Clark2007 in the following) [24,55]. Ossowski et al. used ultra-deep sequencing and identified 860,154 non-redundant SNPs in three accessions (Ossowski2008 in the following) [26,56]. SNP positions determined in the datasets Nordborg2005, Clark2007 and Ossowski2008 were combined into a non-redundant dataset that comprised 1,247,284 SNPs (Table 3). Approximately 25% of these SNP positions can be mapped onto coding sequences, and around 50% of those lead to an amino acid substitution in at least one of the A. thaliana accessions studied (Table 3). The merged SNP dataset used in this study, including SNPs mapping onto A. thaliana cDNAs (TAIR7), is publicly available and downloadable via The GABI Primary Database (GabiPD; http://www.gabipd.org/) [57]. Moreover, SNPs can be interactively visualized in the cDNA sequences in the A. thaliana Gene GreenCards in GabiPD, where accession-related SNP configuration is provided.
Mapping SNP positions onto the TAIR7 genome release and annotation
The datasets Nordborg2005 and Clark2007 were originally mapped onto previous versions of the A. thaliana genome sequence, thus the first step in the present study was to bring all three datasets to refer to the same reference coordinate system, i.e., TAIR7. We took the 30 basepairs (bp) of right and left flanking sequences of each SNP position, together with the SNP base, from the reported A. thaliana genome versions. Using MEGABLAST (W = 39, D = 3) [58,59] we mapped them onto the TAIR7 genome sequence, allowing only 100% identical matches. None of the polymorphic positions were changed between the different releases of the A. thaliana genome. Subsequently, using the genome annotation provided by TAIR7, we determined which of these polymorphic positions occurred inside processed mRNAs and protein coding sequences. The latter is necessary to determine if the SNPs caused a synonymous or non-synonymous substitution, and allowed us to determine, which phosphorylation site positions were affected by SNPs.
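As a hedged illustration of the last step (not the actual pipeline, which relied on the TAIR7 annotation), a coding SNP can be classified as synonymous or non-synonymous by translating the reference and alternative codons, for example with Biopython:

from Bio.Seq import Seq   # Biopython

def classify_snp(ref_codon, pos_in_codon, alt_base):
    # build the alternative codon and compare the encoded amino acids
    alt_codon = ref_codon[:pos_in_codon] + alt_base + ref_codon[pos_in_codon + 1:]
    ref_aa = str(Seq(ref_codon).translate())
    alt_aa = str(Seq(alt_codon).translate())
    if ref_aa == alt_aa:
        return f"synonymous ({ref_aa})"
    return f"non-synonymous ({ref_aa} -> {alt_aa})"

print(classify_snp("ACC", 1, "A"))   # ACC (T) -> AAC (N): non-synonymous
print(classify_snp("TCT", 2, "C"))   # TCT (S) -> TCC (S): synonymous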
Evaluating enrichment of gene ontology terms in sets of proteins
The gene ontology (GO) provides a set of controlled and structured vocabularies, i.e., ontologies, in three domains of molecular biology: cellular component, biological process and molecular function [60]. A large proportion of A. thaliana genes in TAIR7 have been annotated with at least one GO term. In this study, we used a subset of these GO terms, known as the plant GO Slim, which provides a high-level view of the above-mentioned ontologies. Over- and underrepresentation analysis of plant GO Slim terms was carried out using the plugin BiNGO v2.3 [61] for the software package Cytoscape [62]. Statistically significant categories, either over- or underrepresented, were identified after correction for multiple testing (corrected p-value, FDR ≤ 5E-2).
Effect of SNPs on phosphorylation sites
Differences between the effects of SNPs on phosphorylation sites and non-phosphorylation sites
We evaluated the differences between SNPs affecting phosphorylation and non-phosphorylation sites by comparison of their distributions. For each natural accession of A. thaliana, we mapped all SNPs separately onto the respective protein sequences. Subsequently, experimental sites were mapped onto the protein sequences in which at least one SNP was found. In a third step, the distribution of SNPs mapping to phosphorylation sites and non-phosphorylation sites was evaluated for protein sequences with at least one SNP mapping onto an experimental phosphorylation site. For each of the potentially phosphorylated amino acids S, T, and Y (phosphorylation sites and non-phosphorylation sites), we counted the number of synonymous and non-synonymous substitutions to all 20 amino acids and to stop codons. We also counted the number of S, T, and Y phosphorylation sites and the total number of S, T, and Y in all proteins with at least one SNP mapping onto an experimental phosphorylation site. Based on this information, we created 2 × 2 contingency tables for each substitution pair and evaluated the significance (corrected p-value, FDR ≤ 5E-2) with Fisher's exact test for each contingency table. Finally, we represented the enrichment and depletion of substitutions between phosphorylation sites and non-phosphorylation sites as the odds ratio between their probabilities.
The same procedure was applied to the sets of predicted phosphorylation sites.
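A minimal sketch of this per-substitution test follows; the counts in the contingency table and the additional p-values are made up purely for illustration:

from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# rows: phosphorylated S, non-phosphorylated S; columns: substituted to N, not substituted
table = [[4, 996],
         [40, 49960]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")

# collect one p-value per substitution pair, then control the FDR at 5E-2
p_values = [p_value, 0.02, 0.60]                     # illustrative list
rejected, p_corrected, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(list(zip(rejected, p_corrected.round(3))))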
Losses of experimental phosphorylation sites
The number of losses of experimental phosphorylation sites was computed by determining SNPs mapping to experimental phosphorylation sites and resulting in an exchange to a non-phosphorylatable amino acid.
Gains and losses of predicted phosphorylation sites
SNPs at predicted phosphorylation sites resulting in an exchange of S, T or Y for a non-phosphorylatable amino acid were defined as "loss", and SNPs leading to an exchange of a non-phosphorylatable amino acid for an S, T, or Y predicted to be phosphorylated with high confidence (decision value ≥1) were defined as "gain". Additionally, we also determined gains and losses of phosphorylation sites caused by SNPs affecting the sequential context between -6 and +6 amino acids around the central residue (S, T, or Y). This sequential context can be altered by SNPs, thus changing the probability of phosphorylation of the central S, T or Y. We used only positions with highly confident decision values (≥1 for phosphorylation sites and ≤ -1 for non-phosphorylation sites) in the TAIR7 protein sequence, considering the accession-specific sequence. Thus, there are two types of gains/losses of predicted phosphorylation sites: (i) based on score, where the score of a phosphorylatable amino acid changes from ≥1 to ≤ -1 (loss) or vice versa (gain) due to an amino acid substitution within the phosphorylation site recognition motif, and (ii) based on a change from an amino acid residue with a high-confidence phosphorylation prediction score into a non-phosphorylatable amino acid (loss), or the creation of a site with a high-confidence prediction score of at least 1 (gain).
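A small Python sketch of how such a gain/loss classification could be coded is shown below; the thresholds follow the text, but the function and the example residues are hypothetical rather than taken from the actual pipeline:

PHOSPHO_AA = set("STY")

def classify_site(ref_aa, alt_aa, ref_score, alt_score):
    # ref_* describes the reference (Col-0) residue, alt_* the accession-specific residue;
    # scores are phosphorylation prediction scores, with >= 1 phosphorylated and <= -1 not
    if ref_aa in PHOSPHO_AA and ref_score >= 1 and alt_aa not in PHOSPHO_AA:
        return "loss (residue exchanged)"
    if alt_aa in PHOSPHO_AA and alt_score >= 1 and ref_aa not in PHOSPHO_AA:
        return "gain (residue exchanged)"
    if ref_aa in PHOSPHO_AA and alt_aa in PHOSPHO_AA:
        if ref_score >= 1 and alt_score <= -1:
            return "loss (score change in sequence context)"
        if ref_score <= -1 and alt_score >= 1:
            return "gain (score change in sequence context)"
    return "no confident change"

print(classify_site("T", "N", 1.3, 0.0))   # loss (residue exchanged)
print(classify_site("A", "S", 0.0, 1.5))   # gain (residue exchanged)
print(classify_site("S", "S", 1.2, -1.4))  # loss (score change in sequence context)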
Three different datasets were used for an analysis of over-and underrepresentation of plant GO Slim terms among the proteins affected by gain or loss of phosphorylation sites: the first dataset contained only proteins that lost a predicted site; the second dataset consisted of proteins, which gained a predicted site; and the third dataset included proteins with both, gain and loss of predicted phosphorylation sites. These three datasets were compared against two reference sets, (i) a reference set that comprised all proteins containing a predicted phosphorylation site (score ≥ 1) and (ii) a reference set that comprised all A. thaliana proteins.
Identification of phosphorylation hotspots
Hotspots were computed based on two datasets: (i) experimentally verified phosphorylation sites, (ii) predicted phosphorylation sites. A hotspot was defined as a window of a given length, which (i) contains a significantly increased number of phosphorylation sites (for experimental sites) or (ii) has a significantly increased windows score (for predicted sites) compared to an empiric background distribution. Analyses were run using the following window sizes: (i) windows of 5, 10, 15, 20 amino acids length for experimental sites and (ii) windows of 10, 20, 30, 40 for predicted sites.
To generate background proteomes, proteins were sampled according to the length distribution and amino acid composition of all proteins in the A. thaliana proteome. Resulting proteins were randomly phosphorylated at S, T, and Y residues with probabilities as identified after mapping all experimental phosphorylation sites onto A. thaliana protein sequences in case of experimental sites. For predicted sites, a score was randomly assigned to the S, T, and Y residues by sampling a score from the corresponding distribution of prediction scores in the A. thaliana proteome. In this way, 10,000 background proteomes of size of the experimental set were generated for the experimental sites, 1,000 background proteomes of size of the A. thaliana proteome were generated for the predicted sites.
To generate empiric background distributions, each background proteome was analyzed by scanning each protein with a window of fixed size. Window scores were computed by using (i) the number of contained phosphorylated amino acid residues for experimental sites, or (ii) the sum of all scores in a window for predicted sites. For a given window size the empiric background distribution is formed by the distribution of window scores. Windows containing the same S, T, and Y residues of a protein were counted only once.
The proteome of A. thaliana with mapped experimental or predicted sites was scanned and window scores were determined accordingly. For predicted sites, only windows containing at least the expected number of S, T, and Y sites based on the determined amino acid frequency in the A. thaliana proteome were taken into account for further analysis (in contrast to the background distribution).
The null hypothesis of the statistical testing of hotspots states that the score of a candidate window is derived from the described background distribution. Under the alternative hypothesis, if the p-value of the score from a candidate window is less than the significance level (5E-2), the window is assumed to result from an unknown hotspot distribution. The right tail of the background distribution is considered for testing. P-values were determined and Bonferroni-corrected for multiple testing. Windows whose scores had a corrected p-value less than 5E-2 were saved for further analysis.
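A simplified Python sketch of the window scoring and testing for experimental sites is given below; here the background is generated on the fly from an assumed per-residue phosphorylation probability rather than by the full proteome-resampling scheme described above, and the candidate protein is invented:

import bisect, random
random.seed(0)

def window_scores(site_positions, protein_length, winsize):
    # number of phosphorylation sites falling in each window of the protein
    sites = set(site_positions)
    return [sum(1 for pos in range(start, start + winsize) if pos in sites)
            for start in range(protein_length - winsize + 1)]

# empirical background distribution from randomly phosphorylated proteins
background = []
for _ in range(1000):
    sites = [i for i in range(300) if random.random() < 0.02]   # assumed probability
    background.extend(window_scores(sites, 300, winsize=10))
background.sort()

def right_tail_p(score):
    idx = bisect.bisect_left(background, score)
    return (len(background) - idx) / len(background)

candidate_sites = [50, 52, 55, 57, 58, 60]      # hypothetical protein with clustered sites
scores = window_scores(candidate_sites, 300, winsize=10)
for start, score in enumerate(scores):
    p_corrected = min(1.0, right_tail_p(score) * len(scores))   # Bonferroni correction
    if score > 0 and p_corrected < 0.05:
        print(f"hotspot window {start}-{start + 9}: {score} sites, corrected p = {p_corrected:.3g}")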
Overlap of hotspots with protein domains
We tested the hypothesis that hotspots are uniformly distributed across proteins, independently of protein domains. For the statistical test, (i) the number of hotspots overlapping with a protein domain was computed for the real data, (ii) the same number of hotspots of a given size was sampled onto the proteins, and (iii) the hotspots overlapping with domains were counted and saved. This procedure was repeated 10E7 times, and the number of overlaps in the real data was compared to the empirically generated background distribution. The null hypothesis was tested analogously to the procedure described in the previous section.
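This resampling can be sketched as a simple Monte Carlo test; the domain coordinates, hotspot counts, and the (much smaller) number of iterations below are chosen only for illustration:

import random
random.seed(1)

protein_length, hotspot_len = 400, 10
domains = [(50, 150), (300, 360)]        # hypothetical Pfam domain coordinates (start, end)
n_hotspots, observed_overlaps = 3, 3     # hypothetical counts from the real data

def overlaps_domain(start):
    return any(start < d_end and start + hotspot_len > d_start for d_start, d_end in domains)

def resampled_overlaps():
    starts = [random.randrange(protein_length - hotspot_len) for _ in range(n_hotspots)]
    return sum(overlaps_domain(s) for s in starts)

n_resamples = 100_000                    # the study used 10E7 iterations
count_ge = sum(resampled_overlaps() >= observed_overlaps for _ in range(n_resamples))
print(f"empirical p-value = {count_ge / n_resamples:.4f}")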
Additional material
Additional file 1 This file contains seven worksheets with the list of over- and underrepresented plant GO Slim terms in the set of experimentally identified phosphoproteins, determined by BiNGO analysis. The worksheets S, T, Y, ST and STY contain the tables with over- and underrepresented GO Slim categories with the following columns: GO term; p-value; corrected p-value; x: number of genes in the query dataset annotated to a certain GO term; X: the total number of genes in the query dataset, genes without any annotation are discarded; n: number of genes in the reference dataset annotated to a certain GO term; N: total number of genes in the reference dataset, genes without any annotation are discarded. The worksheets "underrepresented" and "overrepresented" include a presence/absence comparison of the significant GO terms in the different datasets. Every pair of binary series was compared using Pearson's correlation coefficient.
Additional file 2 This file contains seven worksheets with the list of over- and underrepresented plant GO Slim terms in the set of high-confidence predicted phosphorylation sites (score ≥ 1), determined by BiNGO analysis. The worksheets S, T, Y, ST and STY contain the tables with over- and underrepresented GO Slim categories with the following columns: GO term; p-value; corrected p-value; x: number of genes in the query dataset annotated to a certain GO term; X: the total number of genes in the query dataset, genes without any annotation are discarded; n: number of genes in the reference dataset annotated to a certain GO term; N: total number of genes in the reference dataset, genes without any annotation are discarded. The worksheets "underrepresented" and "overrepresented" include a presence/absence comparison of the significant GO terms in the different datasets. Every pair of binary series was compared using Pearson's correlation coefficient.
Additional file 6 This file contains all phosphorylation hotspots, which result from the hotspot analysis of all predicted phosphorylation sites in A. thaliana protein sequences. The analysis was run for different window sizes (10, 20, 30, 40 amino acids), available as different worksheets. AGI (TAIR7): A. thaliana gene identifier code according to TAIR 7.0; position (aa): start position of significant window in protein sequence in amino acids (aa); winsize: size of the analyzed window in amino acids; #STY: number of S, T, Y in significant window; # sigwin: number of significant windows in the AGI; score sum: sum of all prediction scores (SVM decision values) for phosphorylatable sites in the window; sequence: sequence of the significant window; aa(position):score: scored amino acids with related position and score are shown; function (TAIR7): gene function according to TAIR7 annotation; function (MapMan): gene function according to MapMan annotation.
Additional file 7 This file contains all phosphorylation runs derived from the hotspot analysis based on prediction for the analyzed window sizes (10, 20, 30, 40 amino acids), available as different worksheets. Runs were generated by merging the identified hotspots into non-redundant contiguous sequences (runs). A run is defined as the amino acid sequence of a hotspot if no adjacent/overlapping other hotspot is present. In cases of two or more overlapping/adjacent hotspots, a run is defined as the sequence representing the combination of those hotspots in the corresponding protein. Each run generated from hotspots is given with the corresponding AGI, start and stop position (in amino acids) as well as the run sequence.
Additional file 8 This file contains a single worksheet with the data used to create Figure 3A. | 2017-06-20T04:42:53.927Z | 2010-07-01T00:00:00.000 | {
"year": 2010,
"sha1": "0ad2bf8dca98256719849640748b23bfd4a197e4",
"oa_license": "CCBY",
"oa_url": "https://static-content.springer.com/esm/art:10.1186/1471-2164-11-411/MediaObjects/12864_2010_3005_MOESM3_ESM.PDF",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bad130b4f3cee1198d465c2b0ce2b302f0dcd8bb",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
11777306 | pes2o/s2orc | v3-fos-license | Building the first step: a review of low-intensity interventions for stepped care
Within the last 30 years, a substantial number of interventions for alcohol use disorders (AUDs) have received empirical support. Nevertheless, fewer than 25% of individuals with alcohol-related problems access these interventions. If several intensive psychosocial treatments are relatively effective, but most individuals in need do not access them, it seems logical to place a priority on developing more engaging interventions. Accordingly, after briefly describing findings about barriers to help-seeking, we focus on identifying an array of innovative and effective low-intensity intervention strategies, including telephone, computer-based, and Internet-based interventions, that surmount these barriers and are suitable for use within a stepped-care model. We conclude that these interventions attract individuals who would otherwise not seek help, that they can benefit individuals who misuse alcohol and those with more severe AUDs, and that they can facilitate subsequent help-seeking when needed. We note that these types of low-intensity interventions are flexible and can be tailored to address many of the perceived barriers that hinder individuals with alcohol misuse or AUDs from obtaining timely help. We also describe key areas of further research, such as identifying the mechanisms that underlie stepped-care interventions and finding out how to structure these interventions to best initiate a program of stepped care.
Introduction
Within the last 30 years, a number of psychosocial interventions for alcohol use disorders (AUDs) have received empirical support. Evaluations of these interventions have employed well-controlled randomized trials involving large numbers of individuals and substantial follow-up periods. These trials provide strong support for such interventions as motivational enhancement, cognitive-behavioral treatment, and 12-step facilitation treatment [1]. In addition, it is now clear that individuals who obtain timely help for AUDs have better outcomes than those who do not [2,3].
Despite these advances, only about 25% of individuals with AUDs access any form of help, empirically supported or otherwise [4][5][6][7]. When help is sought, it often occurs 10 or more years after the onset of symptoms of the disorder [8]. If several intensive psychosocial treatments are relatively effective, but most individuals in need do not access them, it seems logical to place a priority on developing more engaging interventions. Accordingly, after briefly describing findings about barriers to help-seeking, we focus on identifying a palatable array of innovative and effective low-intensity intervention strategies that surmount these barriers and are suitable for use within a stepped-care model.
Key barriers to help-seeking
Empirical studies of help-seeking over the last two decades highlight a number of reasons why individuals with AUDs delay or never seek treatment. One key set of factors involves individuals' perceptions of negative concomitants of treatment, including stigma [9,10], dislike of the prevalent group format and the emphasis on spirituality in treatment and self-help groups [10], lack of privacy [10], concern that treatment is ineffective [11], and disinterest in abstinence goals [10,12].
Other common reasons individuals cite for not entering treatment involve a desire for autonomy or a wish to "handle problems more on their own" [9,11,12] and the belief that their alcohol problems are not serious or may improve on their own [10,12,13]. Factors such as the need for childcare [14], the problem of arranging transportation or traveling long distances to care [15,16], and the cost of treatment and lack of adequate insurance coverage also hinder help-seeking [17]. Finally, the time commitment for standard alcohol treatment is high, ranging from nine hours a week for intensive outpatient care to full-time for residential care. Many individuals report a lack of willingness to dedicate such substantial amounts of time to treatment and to accept the resulting interference with responsibilities to family or work [10].
Stepped-care models
Stepped-care models [18] provide one important method for capitalizing on the appealing qualities of low-intensity interventions, such as their accessibility and autonomy, while providing the opportunity to refer individuals to more intensive treatment when needed. The development of low-intensity initial entry points into treatment is consistent with naturalistic studies of the help-seeking process, which show that individuals often engage in self-quit attempts prior to entering formal or informal treatment [19][20][21]. Lack of success in a low-intensity intervention could further heighten individuals' perceptions of the severity of their drinking problems and spur interest in treatment entry, which is readily available within a stepped-care program.
Low-intensity interventions
A number of low-intensity interventions are suitable for use as a first step of a stepped-care intervention. We focus here on low-intensity interventions that do not require face-to-face interaction. Although there is considerable support for the effectiveness of screening and brief intervention (SBI) in nonspecialty settings such as primary care [22,23], widespread implementation remains elusive. Some of the reasons for low levels of implementation in these settings include lack of training, lack of clinician time, and inadequate reimbursement [24], and the widespread reluctance of providers who are not addiction specialists to talk with patients about alcohol use [25].
We examine three main questions about the suitability of these interventions as candidates for stepped interventions. First, do low-intensity interventions attract individuals who would otherwise not seek help? Second, do individuals who are engaged in heavy alcohol use benefit from these interventions and, importantly, can individuals with moderately or relatively severe AUDs benefit from them? We categorize samples as "moderately to relatively severe" if they include patients diagnosed with alcohol dependence or if their average scores on the Alcohol Use Disorders Identification Test (AUDIT) are >19 and in the range of likely alcohol dependence [26]. Finally, does engagement in a low-intensity intervention inspire subsequent help-seeking when needed? We operationalize "inspiring subsequent help-seeking" as studies where patients seek treatment in the year following the low-intensity intervention. We summarize the evidence presented by these studies, examine their limitations, and discuss issues related to implementation of stepped-care models.
Search procedure
We considered for inclusion peer-reviewed English-language articles that examined alcohol interventions delivered via bibliotherapy, telephone (including Short Message Service [SMS]), computer, or the Internet. Both original articles and meta-analyses were deemed suitable. Our review was performed using the electronic databases of the US National Library of Medicine (PubMed) and the American Psychological Association (PsycInfo) to identify relevant articles published from 1990 through September 2011. The term "alcohol" was combined with the following search terms: web-based intervention, Internet intervention, online intervention, bibliotherapy, text message, book intervention, telephone intervention, remote intervention, computer intervention, and self-help intervention (excluding Alcoholics Anonymous [AA] or 12-step). The term "intervention" was added to most search terms to identify the most relevant articles. In addition to searching electronic databases, we examined potentially relevant articles cited in the identified articles' reference sections [27].
Results
Using these procedures, we identified a total of 686 articles via PubMed and an additional 382 articles via PsycInfo (1068 unique articles). The titles of all identified articles were examined for correspondence with the study inclusion criteria. This led to the identification of 77 articles of potential interest for further review. The content of these 77 articles was carefully examined by two reviewers, who further excluded studies that had the following characteristics: provision of face-to-face care, an insufficient follow-up rate to evaluate the intervention including failure to account for missing data, participant age <18, college samples (which are less representative of clinical samples [28]), or use of highly specialized clinical samples (e.g., a trial targeting pregnant women to decrease incidence of fetal alcohol syndrome). The reviewers identified 18 articles that met study inclusion and exclusion criteria. These articles are described below in the text and are summarized in Table 1.

Table 1 notes. Outcome column: mean weekly alcohol consumption at follow-up. *Attract individuals = X indicates the intervention appeared to attract individuals who might otherwise not seek help (defined as those who had not previously sought treatment or who expressed disinterest in formal treatment). **Positive outcomes = X indicates the intervention significantly reduced alcohol use; XX indicates the intervention significantly reduced alcohol use in more severe drinkers (alcohol dependence diagnosis or AUDIT >19). †Inspire help-seeking = X indicates the intervention appeared to be associated with future help-seeking.
Study effect-size determination
Effect sizes presented (Cohen's d or partial eta squared, ηp2) were obtained directly from the original study calculations in all but four cases [29][30][31][32]. Calculations for these four studies were derived from available study statistics using the Effect Size Determination Program from the Toolkit for Practical Meta-Analysis [33].
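As an illustration of such a derivation (using the standard pooled-standard-deviation formula rather than the Toolkit's own routine, and with invented summary statistics), Cohen's d can be computed as follows:

import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    # pooled standard deviation across the two groups
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# e.g., mean weekly drinks at follow-up, control vs. intervention (made-up values)
print(round(cohens_d(18.7, 10.1, 58, 14.2, 9.5, 60), 2))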
Bibliotherapy interventions
Bibliotherapy is the provision of written self-help materials to motivate or guide the process of changing drinking behavior. Bibliotherapy may be presented in the form of brief information and education such as in a pamphlet or in the form of a self-guided book or workbook. A meta-analysis by Apodaca and Miller [34] evaluated the effectiveness of a range of bibliotherapeutic interventions. These interventions involved self-guided learning of behavioral and cognitive-behavioral skills aimed at achieving either abstinence or reduction in drinking to nonhazardous levels. Interventions included components such as monitoring alcohol intake, identifying triggers of alcohol use, and setting drinking goals. The vast majority of self-referred participants were recruited via media outlets, and many indicated disinterest in formal treatment options, indicating that the intervention attracted individuals who had not previously sought help. Compared with control conditions, self-referred individuals who participated in bibliotherapy tended to improve more on problem drinking. Further, there were no significant differences between bibliotherapy and more extensive face-to-face interventions for self-referred individuals, even for interventions that offered up to 12 face-to-face sessions. The authors categorized participants in these studies as problem drinkers without severe dependence, suggesting that severity ranged from alcohol abuse to mild/moderate alcohol dependence. A number of studies noted that individuals who had previously not entered formal treatment or mutual-help groups did so after participating in bibliotherapy. However, it is not possible to establish definitively the effects of bibliotherapy on help-seeking, as the studies did not report subsequent help seeking separately for intervention and control conditions. Studies involving participants who were opportunistically screened (e.g., who were identified by random digit dialing) yielded more heterogeneous outcomes, only some of which tended to support the use of bibliotherapy.
Sobell and colleagues [35] used media outlets (newspapers, television, radio) to recruit participants who had never sought formal treatment and mailed them two versions of an intervention. Participants reported average consumption of >12 drinks per week or consuming five or more drinks on five or more occasions and average AUDIT scores of 20 (low range of alcohol dependence). Individuals in this study were randomized to a self-change condition that included personalized feedback on drinking or to a control condition (receipt of educational materials focused on safe levels of drinking and consequences of harmful drinking). The self-change condition provided personalized feedback describing drinking levels relative to other drinkers, high-risk situations, and motivation to change, and the bibliotherapy version contained information about the effects of alcohol, low-risk drinking guidelines, risky drinking conditions, and drinking logs. No differences were found by intervention type (self-change versus bibliotherapy) at one-year follow-up; individuals in both groups reduced total weekly drinking by an average of 28.3% (p < 0.001) and reduced heavy/binge-drinking days by 33% (p < 0.001). Moreover, almost 25% of participants in both the self-change and the control conditions had sought treatment. Thus, the study cannot establish that personalized feedback inspired subsequent help-seeking, but the provision of bibliotherapy and the repeated assessments present in both conditions may have led participants to seek help in the year after entering the study.
Cunningham and colleagues [36] assessed the effectiveness of a self-help book and a personalized assessment-feedback intervention, both separately and in combination, in a general population study. Individuals with AUDIT scores of ≥8 (86 in total) were recruited via random-digit dialing and then randomized into conditions including no treatment, self-help book, personalized feedback, or both self-help book and personalized feedback. The self-help book, called Drink Wise: How to Quit Drinking or Cut Down [37], was chosen based on demonstrated success in prior evaluations [38]. At six-month follow-up, participants randomized to the book-and-feedback condition achieved better drinking outcomes than those randomized to just one of the interventions or to no treatment at all. Specifically, interaction analyses comparing those in the combined group with those in the single-intervention groups showed significantly fewer drinks per week (F = 5.4, 1/75 df, p < 0.03; effect size, 0.07) and fewer days per week with five or more drinks per drinking day (F = 19.6, 1/75 df, p < 0.001; effect size, 0.21). This study is one of the few to find a synergistic effect of using feedback in conjunction with self-help materials.
A study by Bamford and colleagues [29] examined the effect of a six-page preparatory leaflet mailed to participants (N = 361) prior to entering treatment. The study randomized individuals scheduled to enter specialty alcohol treatment to an intervention condition that received the preparatory pamphlet or to a no-pamphlet control group. The preparatory leaflet was based on the FRAMES acronym (Feedback, Responsibility, Advice, Menu of options, Empathy, and Self-efficacy) with the goal of motivating individuals to begin the drinking change process prior to initiating treatment. Rates of treatment entry were 10% higher for the leaflet group than for the no-leaflet group, although the effect was not statistically significant (χ² = 3.61, p = 0.057). This study is the only one included in this review to specifically investigate the impact of a low-intensity intervention on treatment entry.
Wild and colleagues [39] studied six-month outcomes in a sample of current drinkers (N = 1722) randomly assigned to either brief personalized feedback on drinking norms or delayed-treatment. The brief personalized feedback intervention, presented as a mailed pamphlet, invited drinkers to compare their alcohol consumption with that of men or women in the general population. Although there was no main effect of experimental condition on drinks per drinking day for the entire sample, individuals who drank hazardous amounts (>14 drinks per week for men or >9 for women) improved more than those who drank less heavily; that is, the hazardous drinking × intervention effect was significant (B = −0.124, t = 2.5, p < 0.01). Thus, the intervention impacted individuals who were drinking in a hazardous fashion more than those who were nonhazardous drinkers.
Kavanagh and Connolly [40] evaluated the impact of a mail intervention on 204 men and women with an alcohol use disorder (abuse or dependence). The study was a single-blind randomized trial with a cross-over design in which participants received the intervention either immediately or after a three-month delay. The intervention was cognitive-behavioral in nature and involved motivation enhancement, challenging overly positive alcohol expectancies, specifying drink-refusal skills, and maintaining nonalcohol-related social support. The intervention was divided into eight components delivered weekly for the first month and biweekly thereafter. Compared with participants in the delayed condition, those in the active condition showed a significantly greater reduction in alcohol consumption (Wald χ²(1) = 7.46, p = 0.006). Participants cut their drinking almost in half but continued to drink at fairly high levels, with men reporting 27 drinks per week and women 14 drinks per week. Even so, the reduction in drinking is notable given that the average AUDIT score at intake was 22.3, indicating moderate alcohol dependence.
Telephone interventions
Telephone interventions increase accessibility of care by eliminating the need for travel to a treatment center [41]. We identified two telephone studies that met inclusion criteria. One study randomized primary-care patients who screened positive for an AUD (alcohol abuse or dependence) to receive either a brief telephone motivational interviewing (MI) intervention (up to six sessions) or a four-page pamphlet on healthy lifestyles [31]. At three-month follow-up, men in the telephone condition experienced greater reductions in risky drinking days (30%) than men in the pamphlet (8%) condition (n = 201, p < 0.001); women experienced significant reductions in drinking in both the telephone (17%) and pamphlet (12%) conditions (n = 251; not significant). Participants who met diagnostic criteria for alcohol dependence improved as much as those who only met criteria for alcohol abuse.
A second telephone study [42] randomized emergency-department patients within five days of admission to a two-session telephone intervention or standard care (assessment only). Participants included individuals who screened positive for hazardous alcohol use (≥14 drinks per week for men or ≥7 for women, or ≥5 drinks per occasion for men or ≥4 per occasion for women). Both groups improved in drinking outcomes at three-month follow-up, but only the telephone group (mean change = −1.4; 95% CI, -3.0 to 0.2) reported significantly reduced impaired driving compared with the standard-care group (mean change = 1.0; 95% CI, -0.9 to 2.9) (p = 0.04). Similar to the results of the bibliotherapy studies [36,39], individuals reporting heavier drinking experienced the most benefit from the telephone intervention.
Computer-based interventions
The benefits of computer-based interventions include the potential for remote access, the ability for individuals to choose content they prefer, and the increased appeal of multimedia applications. Hester and coworkers [43] evaluated the "Drinker's Check-up," a brief computer-based MI intervention that includes assessment, personalized feedback, and a decision-making module, and takes about 45-60 minutes to complete. Individuals were solicited by media announcements and needed to be drinking in a problematic fashion as indicated by an AUDIT score >8 (the mean AUDIT score was 20 for all participants). Subjects were randomized to the intervention or a four-week waitlist (control condition). At one-month follow-up, individuals randomized to the intervention condition reported significantly lower levels of drinking from baseline to eight weeks (F (6,43) = 2.667; p = 0.027). At one-year follow-up, both the intervention and delayed-intervention groups reported a 50% decline in quantity and frequency of drinking, indicating strong support for the intervention. At 12-month follow-up, it was also found that 28 of the 61 participants had subsequently engaged in formal treatment or attended AA. This help-seeking may have been inspired by content within the computer program, which included treatment referral information. These findings seem to support the apparent effectiveness of the intervention on subsequent help-seeking, despite the lack of a no-intervention control.
Another study [32] evaluated a brief (15-20 minute) computer-based MI intervention for emergency-department patients. The study randomized 1139 individuals with AUDIT scores >5 to intervention or control conditions. The intervention consisted of computer-generated feedback about current drinking delivered on the computer and in a letter provided to the participant prior to leaving the emergency department. In addition to feedback on safe-drinking norms, the letter provided a FRAMES-based intervention [44]. Participants began the study at similar levels of hazardous drinking; however, at six months, 21.7% of the intervention group versus 30.4% of the control group met criteria for hazardous drinking (p = 0.008), and at 12 months, alcohol intake in the intervention group decreased by 22.8% compared with 10.9% in the control group (p = 0.02).
A third study [30] evaluated a brief (20-minute) computer feedback intervention known as "DrinkTest" for men recruited from a nationally representative panel in the Netherlands. Participants met Dutch criteria for hazardous alcohol use (≥15 drinks per week or ≥4 on a single occasion at least once per week) and were randomized to the intervention or to a control group that received an educational pamphlet. The intervention provided normative feedback on drinking, enumerated potential consequences of heavy drinking, and provided suggestions for reducing alcohol intake. At one-month follow-up, the intervention produced significant benefits, with 42% of those in the experimental condition reporting drinking within recommended limits compared with 31% in the control condition (χ² = 6.67, p = 0.01). At six-month follow-up, the intervention effects were less strong, with drinking within recommended limits reported by 46% and 37% of the intervention and control conditions, respectively (χ² = 3.25, p = 0.07).
Internet-based interventions
The Internet offers another method for reaching individuals with AUDs. One form of Internet-based intervention involves assisting individuals in assessing and evaluating their own drinking. Cunningham and colleagues [45] tested the Internet-based "Check Your Drinking" intervention in a random sample of drinkers who met criteria for hazardous alcohol use (score of ≥4 on the three-item AUDIT-C). Participants were randomly assigned to either the "Check Your Drinking" intervention, which provided brief personalized normative feedback (approximately 10 minutes), or to a no-feedback control condition. Individuals scoring >11 on the AUDIT-C were categorized as problem drinkers. Problem drinkers assigned to the feedback condition reported a significant reduction in drinking at three-month follow-up (p < 0.05) and an additional reduction at six-month follow-up (p < 0.05), whereas problem drinkers in the control condition did not show significant reductions in drinking.
A study by Pemberton and colleagues [46] compared the effectiveness of two web-based alcohol interventions, "Alcohol Savvy" and "Drinker's Check-up," which were adapted for use within a military population. "Alcohol Savvy" is an alcohol-misuse prevention program that uses education about the risks of drinking and the benefits of moderate alcohol consumption to create motivation for individuals to make better decisions regarding alcohol use. The version of "Drinker's Check-up" used in this study was an online version of the intervention described previously in the computer section (MI for high-risk drinkers). Study participants (N = 3070) were active-duty military personnel who were voluntarily recruited through a variety of means at eight different installations. Participants did not need to meet any screening criteria for hazardous alcohol use. Assignment to either of the active conditions or a waitlist (control) condition was quasi-random; participants who lacked access to high-speed Internet were assigned to "Drinker's Check-up" as their active condition due to the technical requirements of the "Alcohol Savvy" intervention. At one-month follow-up, participants who completed the "Drinker's Check-up" reported significant reductions in average number of drinks per occasion (p < 0.05) compared with controls. The comparison between "Alcohol Savvy" and the control group was not significant.
Complex web-based alcohol treatment involves more elaborate forms of engagement beyond feedback interventions, such as communication between users, communication with clinical personnel, and/or content meant to be perused over weeks or months. The web-based "Drinking Less" intervention [47] includes interactive materials based on cognitive-behavioral and self-control principles and a moderated peer-to-peer discussion forum. In a study to assess its effectiveness, participants were block-randomized to either the intervention or a control condition involving a web-based psychoeducational brochure about the negative impacts of unhealthy alcohol use. Problem drinkers in the study included men who consumed ≥14 drinks per week or ≥4 drinks in one day, and women who consumed ≥10 drinks per week or ≥3 drinks in one day. Among individuals in the intervention condition, 17% decreased drinking to safe levels at six-month follow-up compared with only 5% in the control condition (OR = 3.66; CI 1.3-10.8; p = 0.006). Individuals in the intervention condition also reported an average reduction in weekly drinking of 11 drinks at six-month follow-up, compared with the control group's reduction of only two drinks (OR = 5.86; CI, 5.86-18.10; p = 0.001). The vast majority (88%) of users of the website had never entered professional treatment, indicating that interventions of this nature may appeal to problem drinkers uninterested in traditional alcohol-treatment services.
An e-therapy alcohol intervention [48] included assessment, goal-setting, and regular interaction with counselors via email for up to three months. Participants (N = 156) who met the study's criterion for problem drinking (≥15 drinks per week for men and ≥11 for women) were recruited via online advertising and randomized to three months of access to the website or to a waitlist (control condition). The intervention was facilitated by email contact from a counselor and occurred in two stages: a motivational stage that involved assessment of drinking consequences and feedback, and a second phase that involved completing modules based on cognitive behavioral therapy for alcohol problems. At three-month follow-up, participants randomized to the intervention reduced their weekly alcohol intake by 26 drinks compared with those in the control group, who decreased weekly alcohol intake by only two drinks (mean difference 95% CI, 15.69-35.80; p < 0.001). Almost 80% of participants met the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) criteria for mild alcohol dependence, and 76% reported never having received treatment for their alcohol problems.
Another intervention study of problem drinkers, conducted by Blankers and colleagues [49], evaluated the effectiveness of an Internet-based alcohol intervention (therapy alcohol online [TAO]) and of Internet-based self-help (self-help alcohol online [SAO]). Participants scored >8 on the AUDIT (mean score for all participants, 19.5) and reported drinking >14 drinks per week on average. A total of 205 participants were randomized to TAO, SAO, or a waitlist (control condition). The SAO condition was a stand-alone, fully automated, self-guided intervention based on cognitive-behavioral therapy and MI techniques. Participants in the SAO group also received support from other SAO participants in the form of an Internet-based forum. The TAO condition was a synchronous online intervention based on the same SAO treatment protocol but also included 40-minute text-based therapy chat sessions. Contact between therapists and TAO participants could occur synchronously during chat sessions or asynchronously via email. At three-month follow-up, generalized regression models indicated significantly lower alcohol consumption for the TAO group (p = 0.002) and the SAO group (p = 0.03) relative to the waitlist group. From baseline to three-month follow-up, the TAO group reduced their weekly alcohol consumption by an average of 24 drinks, compared with a reduction of 16 drinks in the SAO group and 12 in the waitlist group. The mean reduction in weekly drinking did not differ significantly between the TAO and SAO groups at three-month follow-up, but a significant difference favored the SAO group (p = 0.03) at six months.
A study of the "Down Your Drink" web-based intervention [50] compared an interactive website employing elements of MI and cognitive behavioral techniques with a more static, text-based version of the site that focused on the harms caused by excessive alcohol consumption. Interactive components of the "Down Your Drink" website were divided into three stages focusing on individual responsibility for change, deciding on change, and maintenance of change, and included e-tools such as a "thinking drinking diary" in which users could record their alcohol consumption along with emotional and behavioral triggers. Hazardous drinkers who had AUDIT-C scores >5 (N = 7935) were randomized to either the enhanced "Down Your Drink" website or to the text-based site (control condition). At three-month follow-up, the intervention group reported a substantial reduction in alcohol consumption (46.3 to 26.4 drinks per week), as did the control group (45.7 to 25.6), but the difference between groups was not significant (OR, 1.03; 95% CI, 0.97-1.10). Changes were maintained in both groups at 12 months.
The "Drinking Less" intervention described earlier was also tested via a television-supported platform [51]. Problem drinkers (N = 181) were recruited using the "Drinking Less" website and assigned to receive weekly DVDs (five in total, 25 minutes each) and a self-help manual or to a waitlist (control condition). The content of the DVDs paralleled that of the "Drinking Less" website but also showed two problem drinkers who underwent and completed the intervention with a trained addiction coach. At five-week follow-up, 40% of individuals in the intervention condition were engaging in low-risk drinking compared with only 7% of individuals in the waitlist condition (x 2 (1) = 28.3; p = 0.001; OR, 9.4; 95% CI, 3.7-23.9). The reductions in drinking remained at three-month follow-up.
Discussion
A variety of low-intensity interventions can engage individuals and effectively reduce drinking. Moreover, these strategies offer easier access and flexibility to individuals who misuse alcohol and circumvent some of the barriers to entry into traditional treatment. They also offer the potential for greater privacy, although strong encryption and other safeguards are needed to ensure that individuals' data remain private and confidential for online interventions. In the following section, we summarize three key findings from existing studies of low-intensity interventions and identify three important areas for future research on stepped-care intervention models.
Key findings about low-intensity interventions
Low-intensity interventions attract individuals who have not previously sought help. Several of the studies discussed herein included participants who had never sought treatment previously or who expressed disinterest in formal treatment options but accepted the low-intensity intervention. Mail [35], Internet-based [47,48], and bibliotherapeutic [34] interventions can attract individuals who have never previously sought treatment, including those who meet criteria for alcohol abuse or dependence [34,35]. None of the other studies reviewed above reported on participants' prior help-seeking history, so it is not possible to determine whether any one type of intervention is especially attractive to individuals with AUDs. Moreover, we are unaware of studies that have offered a low-intensity intervention or any treatment to individuals who report disinterest in formal or informal treatment options. More comparative information is needed about which low-intensity interventions are most attractive to individuals with AUDs who are unlikely to attend more traditional formal treatment. We also need to know whether some modalities are more popular than others for specific groups of individuals. In this vein, less educated individuals were more likely to be drawn to the "Drinking Less" intervention when it was delivered via television [51] than when it was delivered via the Internet [47]. Another important question for future studies is whether low-intensity interventions, offered in the context of a stepped-care intervention, are more likely to attract problem drinkers relative to offering more intensive treatment.
Low-intensity interventions can benefit individuals with more severe AUDs. Meta-analytic studies suggest that brief interventions conducted in settings such as primary care tend to be more effective for individuals with less severe AUDs [23]. Similar to face-to-face brief interventions, all of the low-intensity interventions reviewed herein significantly reduced alcohol use among participants. However, several of the studies also reported significantly reduced alcohol use among patients with low to moderate alcohol dependence. For example, Brown and colleagues [31] found that a telephone-based intervention produced equivalent outcomes for participants who met criteria for abuse or dependence. A bibliotherapeutic intervention reduced drinking among participants meeting criteria for alcohol abuse or dependence by 50% [40], another bibliotherapy intervention reduced alcohol use by 30% [35], and two online interventions reduced drinking by 50% [47][48][49]-despite the fact that participants in these four studies reported average AUDIT scores of >19. Thus, low-intensity interventions appear to significantly reduce drinking among hazardous alcohol users and can also engage and reduce drinking among individuals with more severe alcohol-related problems.
Low-intensity interventions may lead to subsequent help-seeking. Low-intensity interventions may inspire a self-change attempt which, if unsuccessful, leads to subsequent help-seeking [35]. Almost half of the individuals who participated in the computer-based "Drinker's Check-up" sought some form of additional help within the next 12 months [43]. Almost one-quarter of the individuals in the mail-based intervention [35] sought some form of help for their alcohol use in the subsequent year, including "control" participants, potentially inspired by completion of study assessments. A number of studies reviewed by Apodaca and colleagues [34] indicated that individuals who had previously not entered formal treatment or mutual-help groups did so after participating in bibliotherapy. A majority of individuals who engaged in e-therapy indicated that they would consider seeking treatment, and some decided to participate in face-to-face therapy [48]. Thus, consistent with prior findings showing that unsuccessful quit attempts often precede help-seeking [19][20][21], low-intensity interventions may help individuals engage in subsequent treatment even when the interventions themselves are unsuccessful at precipitating change. It is not possible to establish definitively the effects of these interventions on help-seeking, as participants in the intervention and control conditions either sought subsequent help at equal rates, subsequently received the same intervention, or results were not reported separately for the different groups.
Assessing subsequent help-seeking in the context and aftermath of low-intensity interventions could provide valuable data to inform stepped-care interventions. We need to know whether individuals seek help after a low-intensity intervention because they resumed problematic drinking (failed to meet goals) and/or whether low-intensity interventions help dispel concerns about treatment and thereby increase individuals' motivation to enter into it.
Key questions for implementing stepped-care interventions
How can low-intensity interventions be structured to initiate a stepped-care intervention? In general, low-intensity interventions appear well-suited as the first step of a stepped-care intervention [35,47,48]. However, a number of important questions need to be addressed regarding how best to integrate low-intensity interventions into standard treatment. For example, should individuals who participate in a low-intensity intervention but are unsuccessful at reducing their alcohol use be actively referred to more intensive treatment, or should they just be provided with referral information? Is it more appealing for a low-intensity intervention to be remotely accessible and independent than for it to be specifically tied to an alcohol treatment center? Stepped-care models are a potentially efficient method for titrating care for individuals with AUDs, but attention should also be directed to whether or not any given model successfully increases the reach of alcohol treatment.
Another potential implementation pathway for low-intensity interventions is as population-based interventions that focus more on the overall impact on groups of individuals than on efficacy for each individual. Within this framework, the "impact" of an intervention is not only its efficacy, such as a 5% decrease in hazardous alcohol use, but the efficacy multiplied by the number of participants [52]. The efficacy of online interventions such as "Drinking Less" in the Netherlands, which decreased the rate of hazardous alcohol consumption by 17%, may appear modest compared with most treatment outcome studies that report abstinence rates of 30-40% [53]. However, with the ability to reach a large number of homes, the "Drinking Less" website could have a substantial population-based impact on drinking.
In countries such as the US, where higher intensity treatment options are widely available, a variety of low-intensity interventions can be included as part of a population-based stepped intervention, as currently exists with "AlcoholScreening.org" [54]. Similarly, the computer-based "Drinker's Check Up" [43], which combined a low-intensity intervention with referral information, resulted in 45% of the participants seeking additional help. Combining low-intensity intervention with referral to intensive treatment in the US is feasible given the existence of online resources such as the Substance Abuse Treatment Facility Locator [55] and mutual-help meeting locators for groups such as AA [56], Smart Recovery [57], and Life Ring [58].
The success of smoking quitlines offers another model for population-based alcohol interventions [59].
Smoking quitlines have strong empirical support [60] and additional advantages such as convenience, relative anonymity, and ease of creating a structured counseling protocol. Lichtenstein and colleagues [59] identify problem drinking as well-suited to the quitline model, given that the disorder is highly prevalent, that suitable protocols for intervention currently exist, and that the widespread negative impacts of hazardous drinking provide governments with a stake in funding such an enterprise. Alcohol quitlines could follow problem drinkers long enough to determine the success or failure of the intervention, and then, per the stepped care model, offer referral to more intensive treatment to individuals who remain motivated to reduce their level of drinking.
What are the mechanisms that underlie the benefits of low-intensity interventions? Gaining a better understanding of how low-intensity interventions work, particularly within the process of seeking help, can inform the creation and implementation of stepped-care interventions. We know that alcohol-related problems, particularly multiple problems, predict help-seeking and reductions in drinking. All of the low-intensity interventions in this review provided online or telephone feedback about negative consequences associated with drinking [31,48], automated initial assessments [45], or self-help materials [43,47]. With one exception [35], all of the studies found better alcohol outcomes when personalized feedback and informational advice were provided than when they were not provided. However, to more fully understand the impact of personalized feedback on subsequent drinking, we need to examine the extent to which the feedback changes the perceived severity of alcohol problems or another potential mediator, and whether any such changes are tied to changes in drinking [61]. Support for such a causal chain could substantiate the effectiveness of personalized feedback and contribute to a better understanding of "why" or "how" low-intensity interventions work.
We also need to examine whether certain aspects of low-intensity interventions, such as feedback about negative consequences of drinking or providing information about normative drinking, are especially likely to lead to subsequent treatment seeking. Is an initial assessment and evaluation sufficient to motivate help-seeking, or is participation in the intervention and an unsuccessful quit attempt the key to help-seeking? Gaining a better understanding of how low-intensity interventions help individuals recognize the severity of their alcohol-related problems offers the hope of telescoping the normal help-seeking process and thereby averting a considerable degree of alcohol-related harm.
Are some low-intensity interventions more beneficial for some groups (e.g., men or women) than others, and are there additional low-intensity interventions that should be considered? Design of stepped-care interventions should take into account the potential for low-intensity interventions to be more or less effective for some groups. More information is needed about the extent to which low-intensity interventions are effective across diverse population groups. As noted, Brown and colleagues [31] found that a telephone-based intervention was more effective than a pamphlet-only condition for men but not for women. More generally, there is a need to examine whether low-intensity treatment interventions are differentially effective across gender and sociodemographic and racial/ethnic groups. For example, less well-educated individuals were more likely to be drawn to the "Drinking Less" intervention when it was delivered via television [51] than when it was delivered via the Internet [47]. It may be appropriate to adopt theory-based, data-driven cultural adaptation techniques [62] to modify low-intensity interventions for different cultural groups. In addition, given that almost 20% of the US population speaks a language other than English [63], future research should develop and test low-intensity interventions for AUDs in languages other than English.
A recent report by the Pew Research Center's Internet and American Life Project [64] indicates that 83% of American adults own cell phones, and 73% send and receive text messages. Most studies evaluating the impact of text messages on health care focus on health activities such as appointment reminders, but a growing number deliver health-behavior change messages. A recent review found positive health benefits from SMS-delivered messages targeting diabetes self-management and smoking cessation [65]. Thus far, only college students have been the focus of text-message studies targeting alcohol use [66]. However, with individuals aged 30-49 sending or receiving an average of 27 messages per day, text-message interventions should be considered as a potential low-intensity intervention targeting adult alcohol use. In addition to sending alcohol-related health messages, an online text system could record drinking goals, ask users how well such goals are being met, and direct those not meeting drinking goals to websites providing treatment and mutual-help resources.
Conclusion
Rather than developing new forms of intensive treatments for AUDs when current treatments work reasonably well, more effort should focus on expanding the reach of treatment by developing more accessible interventions and exploring how to integrate them into existing alcohol treatment systems. Low-intensity interventions are flexible and can be tailored to address many of the perceived barriers that hinder individuals with AUDs from obtaining timely help. Substantial evidence indicates that low-intensity interventions can engage individuals who shun current treatment options, reduce problematic alcohol use, and may even motivate individuals who need it to engage in more intensive treatment. Given the existence of effective low- and higher-intensity interventions to address AUDs and the low levels of treatment uptake, greater attention needs to be focused on implementation-oriented aspects of stepped-care interventions. Several issues regarding the implementation of stepped-care interventions still need to be addressed by the literature, such as identifying the best structure for stepped-care models, understanding the impact of stepped-care interventions on motivation for changing alcohol use, and comparing the effectiveness of stepped-care models across diverse populations.
"year": 2012,
"sha1": "79d093114e760eabfd47f1e73e94d7260010753c",
"oa_license": "CCBY",
"oa_url": "https://ascpjournal.biomedcentral.com/track/pdf/10.1186/1940-0640-7-26",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0e3d18e4a6d32e07d23e5d1c0c3843b491e650f2",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Preparation, characterization, and biological activity study of thymoquinone-cucurbit[7]uril inclusion complex
In this study, the formation of a host–guest inclusion complex between cucurbit[7]uril (CB[7]) and thymoquinone (TQ) was investigated in aqueous solution. The formation of a stable inclusion complex, CB[7]–TQ, was confirmed by using different techniques, such as ¹H NMR and UV-visible spectroscopy. The aqueous solubility of TQ was clearly enhanced upon the addition of CB[7], which provided an initial indication of supramolecular complexation. The complexation stoichiometry and the binding constant of the inclusion complex were determined through a combination of two titration methods, UV-visible and fluorescence displacement titrations. Both methods suggested the formation of a 1 : 1 complex between CB[7] and TQ with a moderate binding affinity of 3 × 10³ M⁻¹. Density functional theory (DFT) calculations were also performed to verify the structure of the resulting host–guest complex and to support the complexation stoichiometry. The theoretical calculations were in agreement with the experimental results obtained by ¹H NMR spectroscopy. Most importantly, the cytotoxic effect of the CB[7]–TQ complex was investigated against cancer and normal cell lines. The results showed that the anticancer activity of TQ against MDA-MB-231 cells was enhanced by complexation with CB[7], while no significant effect was observed in MCF-7 cells. The results also confirmed the low toxicity of the CB[7] host molecule, which supports the use of CB[7] as a drug carrier.
Introduction
Nigella sativa (also called black seed or black cumin) is a promising medicinal plant, used especially in the Mediterranean region and in Western Asian countries including India, Pakistan, and Afghanistan. 1,2 It has been used to treat a range of health conditions for many years, such as lung diseases, arthritis, and hypercholesterolemia. The biological activity of Nigella sativa is mainly related to one of its main active components, namely thymoquinone (TQ, 2-isopropyl-5-methylbenzo-1,4-quinone, Fig. 1). 3 Several studies have investigated the therapeutic effects of TQ. These studies gave strong indications that TQ has significant therapeutic effects in cancer and inflammation and acts as an antioxidant agent through different modes of action. For example, TQ is able to arrest tumor cells at different stages of their progression. 2 Recently, TQ showed potential inhibitory activity against coronavirus infections. 4,5 Despite the promising therapeutic efficiency of TQ, its poor bioavailability, unfavorable pharmacokinetic parameters, and formulation problems have deferred the use of TQ in clinical development. 3 The bioavailability problem (for example, poor solubility in water) is mainly related to the fact that TQ is a hydrophobic molecule. In addition, it has been reported that TQ has stability problems in solution; in particular, TQ is very sensitive to heat and light. 6 Many attempts have been made to overcome these bioavailability and stability difficulties. The most important approach has been to encapsulate TQ within different nanomaterials (vehicles/carriers). For example, several drug delivery systems have been used, such as liposomes, 7 lipids, 8 cyclodextrins (CDs), 9,10 and nanoparticles. 11 These studies showed that encapsulation of TQ within such new environments (carrier/vehicle) can improve its bioavailability, stability (thermal/light), and activity as an anticancer agent.
In this work, we are interested in introducing a new host molecule for TQ, with the purpose of defining possible improvements in the pharmaceutical properties of TQ when associated with this host, and in comparing the results obtained with previous studies that used different host molecules. Cucurbit[n]urils (CB[n], Fig. 1) are another attractive class of water-soluble macrocyclic host molecules which, like CDs and calixarenes, have a cavity that can accommodate hydrophobic guest molecules of different sizes. 12 CBs are composed of different numbers of glycoluril units (n = 5-10) joined by pairs of methylene bridges. 13,14 The members of this family have a pumpkin-shaped, highly symmetrical, and rigid structure. They also possess a hydrophobic cavity and two identical, partially negatively charged (hydrophilic) carbonyl portals. 15 The binding of guest molecules to CB[n] can be driven by ion-dipole and dipole-dipole interactions (suitable for cationic moieties) or through the hydrophobic effect (suitable for binding neutral and hydrophobic residues). 16 Cucurbit[7]uril (CB[7], Fig. 1) is the most soluble member of the CB[n] family. 14,17 Recent studies have given notable attention to the CB[n] family in the biological and medicinal fields. 18 Particular focus has been placed on the CB[n] family in the drug delivery area, and many studies have used CB[n]-type molecular hosts as drug delivery vehicles for a large number of drugs. 19 Based on previous studies, it has been suggested that CB[n] shows significant advantages over the CD host molecules, which are the most common choice in the drug delivery area. For example, the binding constants (K) of CB[n]-guest complexes are higher than those of CDs in aqueous medium. 20 Also, the encapsulation of drug molecules in CB[n] provides a means for slow drug release, facilitates drug targeting, improves chemical and thermal stability, reduces toxicity, and enhances the aqueous solubility of poorly soluble drugs. Most importantly, CBs are a promising drug delivery system, being non-toxic and highly biocompatible. 20,21 Herein, we report for the first time the formation of a stable inclusion complex between TQ and CB[7] in aqueous solution. The complexation is investigated by ¹H NMR and UV-visible spectroscopy. The binding constant is determined by using UV-visible and fluorescence displacement titrations. In addition, a theoretical evaluation of the CB[7]-TQ inclusion complex stoichiometry and energetics is performed. Moreover, the antiproliferative effect of CB[7], TQ, and the CB[7]-TQ complex is investigated against two breast cancer cell lines, MDA-MB-231 and MCF-7, and compared to the effect against a human dermal fibroblast cell line, HDF, to evaluate whether TQ retains its toxicity after complexation with CB[7]. A previously reported βCD-TQ complex was used for comparison with CB[7]-TQ.
Chemicals and instruments
Thymoquinone, acridine orange base (AO), β-cyclodextrin (βCD), and D₂O (99 atom % D) were purchased from Sigma-Aldrich (USA). Cucurbit[7]uril (CB[7]) was prepared according to the literature. 22 All reagents and chemicals were used without further treatment. ¹H NMR spectra were recorded on a Bruker AVANCE-III 400 MHz NanoBay FT-NMR spectrometer in D₂O and referenced in ppm with respect to tetramethylsilane (TMS). UV-visible absorption spectra were measured on a Cary-100 Bio instrument. Fluorescence experiments were performed using a Jasco spectrofluorometer (FP-6500).
¹H NMR titration of TQ and CB[7]
A CB[7] stock solution (host) was prepared by dissolving 0.01 g of CB[7] in 0.7 mL D₂O; heating and sonication were used to speed up dissolution, and the CB[7] concentration was corrected for 80% content. The CB[7] solution was then filtered using a syringe filter (0.45 μm, nylon). Different aliquots of the CB[7] stock solution were then added to a solution of TQ (9.7 mM) in D₂O to reach a 1 : 1 ratio of CB[7] : TQ.
UV-visible titration of TQ and CB[7]: complexation stoichiometry and binding study
A stock solution of CB[7] was prepared first (11 mM), and different aliquots of the CB[7] stock solution were added to a solution of TQ (1 mM). The UV-visible spectrum was recorded after each addition until a 1 : 1 ratio of CB[7] : TQ was reached. The complexation stoichiometry and the binding constant of the CB[7]-TQ inclusion complex were determined using the Benesi-Hildebrand equation based on the data from UV-visible spectrophotometry. Eqn (1) and (2) were used for the 1 : 1 and 1 : 2 binding models.
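Equations (1) and (2) themselves are not reproduced in this extracted text; for reference, the standard Benesi–Hildebrand forms for 1 : 1 and 1 : 2 host–guest binding, written with an assumed labelling in which A₀ is the absorbance of free TQ and A∞ the absorbance at saturating host, are:

```latex
% Standard Benesi-Hildebrand expressions; assumed to correspond to Eqn (1) and (2)
\[
\frac{1}{A - A_0} = \frac{1}{A_\infty - A_0}
  + \frac{1}{K\,(A_\infty - A_0)\,[\mathrm{CB7}]}
  \qquad \text{(1:1 model)}
\]
\[
\frac{1}{A - A_0} = \frac{1}{A_\infty - A_0}
  + \frac{1}{K\,(A_\infty - A_0)\,[\mathrm{CB7}]^{2}}
  \qquad \text{(1:2 model)}
\]
```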
According to the design of the Benesi-Hildebrand experiment, A₀ is the absorbance of free TQ, A is the absorbance at a given CB[7] concentration, and A∞ is the absorbance at the maximum concentration of the host. The association constant (K) is obtained from the Benesi-Hildebrand equation after plotting 1/(A − A₀) against 1/[host] for a 1 : 1 host : guest stoichiometric ratio, or against 1/[host]² for a 1 : 2 host : guest stoichiometric ratio. The binding constant is then calculated as the ratio of the intercept to the slope of this plot.

Fluorescence titration of CB[7] with AO dye and binding constant of the CB[7]-AO inclusion complex

An aqueous solution containing CB[7] and AO (1 mM CB[7] and 1 mM AO dye) was prepared first. Different aliquots of this solution were added to a solution of AO (1 mM). The emission peak of AO at λem = 521 nm (λex = 470 nm) was followed to study the effect of adding CB[7] on the emission of AO. The additions were stopped after a plateau was obtained in the experimental fluorescence data. The binding constant of the CB[7]-AO inclusion complex was obtained by fitting the titration data 20,21 at λem = 510 nm using the Origin program.
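The fitting itself was done in Origin; purely as an illustration of the 1 : 1 binding model that such a fit rests on (not the authors' actual analysis), a least-squares version could look like the sketch below, where the concentrations and intensities are placeholder values.

```python
# Illustrative sketch only (not the authors' Origin analysis): least-squares fit
# of an exact 1:1 host-guest binding isotherm to fluorescence titration data.
import numpy as np
from scipy.optimize import curve_fit

G0 = 1.0e-6  # total dye (AO) concentration in M, held constant during the titration

def one_to_one(H0, K, F_free, F_bound):
    """Observed emission for a 1:1 complex at total host concentration H0."""
    b = H0 + G0 + 1.0 / K
    HG = 0.5 * (b - np.sqrt(b**2 - 4.0 * H0 * G0))  # complex concentration (exact solution)
    x = HG / G0                                      # fraction of dye bound
    return F_free * (1.0 - x) + F_bound * x

# Placeholder data: total host concentrations (M) and emission intensities at 510 nm.
H0_data = np.array([0.0, 2.0, 5.0, 10.0, 20.0, 40.0, 80.0]) * 1.0e-6
F_data = np.array([100.0, 195.0, 290.0, 395.0, 495.0, 580.0, 635.0])

popt, _ = curve_fit(one_to_one, H0_data, F_data, p0=[1.0e5, 100.0, 700.0])
print("K = %.2e M^-1" % popt[0])
```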
Displacement titration and binding constant of CB[7]-TQ inclusion complex
An aqueous solution containing 10 mM CB[7] and 1 mM AO dye was prepared first. Then, a solution of 10 mM CB[7], 1 mM AO dye, and 1.6 mM TQ was prepared. For the displacement titration, different aliquots of the TQ-containing solution were added to the first solution (CB[7]-AO). The emission peak of the encapsulated AO species at λem = 510 nm (λex = 470 nm) was followed to study the effect of adding TQ on the emission. The additions were stopped after a plateau was obtained in the experimental fluorescence data. The binding constant of the CB[7]-TQ inclusion complex was obtained by fitting the titration data 22,23 at λem = 510 nm using the Origin program.
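For context, such displacement fits rest on the competitive equilibria between host, dye, and guest; a generic statement of the coupled equations (our notation, not taken from the cited fitting routine) is:

```latex
% Competitive (indicator displacement) equilibria underlying the fit
\[
\mathrm{CB7} + \mathrm{AO} \rightleftharpoons \mathrm{CB7{\cdot}AO},
\qquad K_{\mathrm{AO}} = \frac{[\mathrm{CB7{\cdot}AO}]}{[\mathrm{CB7}][\mathrm{AO}]}
\]
\[
\mathrm{CB7} + \mathrm{TQ} \rightleftharpoons \mathrm{CB7{\cdot}TQ},
\qquad K_{\mathrm{TQ}} = \frac{[\mathrm{CB7{\cdot}TQ}]}{[\mathrm{CB7}][\mathrm{TQ}]}
\]
\[
[\mathrm{CB7}]_0 = [\mathrm{CB7}] + [\mathrm{CB7{\cdot}AO}] + [\mathrm{CB7{\cdot}TQ}],\quad
[\mathrm{AO}]_0 = [\mathrm{AO}] + [\mathrm{CB7{\cdot}AO}],\quad
[\mathrm{TQ}]_0 = [\mathrm{TQ}] + [\mathrm{CB7{\cdot}TQ}]
\]
```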
Quantum chemical calculations
Density functional theory (DFT) calculations were performed using Gaussian 09. 23 The dispersion-corrected DFT method ωB97XD 24 was used with the 6-31G* basis set. All calculations were performed in the gas phase.
Cell viability assay
MDA-MB-231, MCF-7, and HDF cells (5 × 10³ cells per well) were seeded in 96-well plates. After 24 h, the cells were treated with serial dilutions of each of CB[7], βCD, TQ, the βCD-TQ complex, and the CB[7]-TQ complex in a concentration range of 0 to 1000 μM for 72 h (the host-guest solutions were prepared in a 1 : 1 molar ratio). After 72 h of incubation at 37 °C, the treatment was removed from the wells, followed by the addition of 15 μL of 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyltetrazolium bromide (MTT) solution and 100 μL of medium. After 3 h of incubation, the medium was removed and 50 μL of dimethyl sulfoxide (DMSO) was added to dissolve the formazan. The absorbance was measured at a wavelength of 570 nm using a GloMax microplate reader (Promega, USA).
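As an illustrative sketch only (assumed post-processing with placeholder numbers, not the authors' workflow): percent viability is the ratio of treated to untreated absorbance, and an IC50 can then be estimated by fitting a four-parameter logistic curve to the resulting dose-response data.

```python
# Illustrative sketch (assumed workflow, placeholder data): percent viability from
# MTT absorbances and an IC50 estimate from a four-parameter logistic fit.
import numpy as np
from scipy.optimize import curve_fit

def viability(a_treated, a_untreated, a_blank=0.0):
    """Percent viability relative to untreated control wells."""
    return 100.0 * (a_treated - a_blank) / (a_untreated - a_blank)

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve (decreasing with dose)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Placeholder dose-response data: concentration (uM) vs. % viability.
conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
viab = np.array([98, 95, 85, 60, 35, 15, 8], dtype=float)

popt, _ = curve_fit(four_pl, conc, viab, p0=[100, 5, 50, 1])
print("Estimated IC50 = %.1f uM" % popt[2])
```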
CB[7]-TQ inclusion complex
The inclusion complex between TQ and CB[7] (structures shown in Fig. 1) was investigated using several spectroscopic methods. In particular, NMR is a very powerful technique for studying host-guest complexation since it is highly sensitive to any changes in the microenvironment of the nucleus under study, in addition to providing chemical identity. Several NMR experiments are useful in gaining insight into the type and location of interactions between molecules, such as chemical shift (Δδ), relaxation time (T₁), and diffusion measurements (DOSY experiments). However, simple ¹H NMR experiments that follow chemical shift changes upon complexation (or interaction) are very efficient. The ¹H NMR spectrum of free TQ in D₂O is shown in Fig. 2a; the proton signals are broadened due to the low solubility of TQ in water and possible aggregation. Upon the addition of different aliquots of CB[7] to the TQ solution, the protons of the guest experienced complexation-induced chemical shifts (Δδ), and the shape of the signals changed, suggesting the formation of the CB[7]-TQ inclusion complex. As shown in Table 1, a significant upfield shift was noticed for the signals of TQ protons A, B, and C (+Δδ = 0.66, 0.89, and 0.84 ppm, respectively); this indicated that these protons are accommodated deep inside the CB[7] cavity and that the observed upfield shift is related to the shielding effect of the hydrophobic cavity of CB[7]. 16,25 On the other hand, protons D and E are positioned outside the cavity of CB[7] and close to the carbonyl rims, as indicated by their downfield shift (−Δδ). 25 In addition, the shape of the TQ proton signals in the ¹H NMR spectra was strongly affected by the CB[7] additions. The signals (protons B, C, D, and E) in D₂O were broad (Fig. 2a), indicating their presence in an aggregated state and not as free hydrated TQ molecules. This behavior has been reported in the literature regarding the self-assembly of small organic molecules in aqueous solutions. 26 Upon the addition of CB[7], an enhancement in solubility is achieved due to complexation, and the aggregates start breaking up, giving sharp ¹H NMR signals (Fig. 2b-g). 27 No further shift in the TQ peaks was noticed after reaching a 1 : 1 mole ratio (host : guest), which indicated the formation of a stable 1 : 1 CB[7]-TQ inclusion complex. The gradual disappearance of some signals of free TQ and the appearance of CB[7]-TQ inclusion complex signals suggested slow exchange on the NMR time scale.
The formation of the CB[7]-TQ inclusion complex was further investigated by using UV-visible spectroscopy. Free TQ showed two absorption peaks in aqueous medium, at 334 nm and at 434 nm (as a shoulder). As shown in Fig. 3a, the addition of CB[7] to the TQ solution (1 mM) resulted in a gradual decrease of the absorption peak at ∼334 nm (λmax of free TQ) until it almost disappeared at a 1 : 1 mole ratio of CB[7] : TQ. Also, a decrease in the absorbance of the shoulder peak of TQ (λ = 434 nm) with a bathochromic shift (from 434 nm to 447 nm) was observed upon CB[7] addition. These changes confirmed the formation of the host-guest inclusion complex (CB[7]-TQ). 25
Complexation stoichiometry and binding constant determination
UV-visible spectrophotometry was used to investigate the host-guest complex formation of CB[7]-TQ; the spectra obtained for successive additions of CB[7] to TQ in aqueous solution are shown in Fig. 3a. The complexation stoichiometry and the binding constant of the CB[7]-TQ inclusion complex were determined using the Benesi-Hildebrand equation based on the titration data from UV-visible spectrophotometry. 28 A linear dependence of the type 1/(A − A₀) vs. 1/[CB7]ⁿ, with n = 1, indicated a 1 : 1 stoichiometry of the CB[7] : TQ complex. Fig. 3b represents the Benesi-Hildebrand plot for the CB[7]-TQ inclusion complex using the UV-visible spectrophotometric titration data for 1 : 1 stoichiometry. A good linear correlation was obtained for n = 1 (Fig. 3b), with a binding constant (K) of 2.5 × 10³ M⁻¹. Fitting the data according to a nonlinear model also provided a K of 3 × 10³ M⁻¹.
The binding of TQ and CB[7] was also investigated using an optical dye displacement titration. This method relies on the competitive displacement of a fluorescent dye from a host molecule by the guest under study. 29,30 Different dyes have been used in indicator displacement titrations with CB[7]. 31,32 In this study, acridine orange (AO) dye was used for this purpose. Upon the addition of CB[7] to an aqueous solution of AO, a blue shift (from λem = 521 nm for free AO to λem = 510 nm after adding CB[7]) and a large enhancement of the AO emission peak were observed (Fig. 4a). The binding constant of the CB[7]-AO inclusion complex was found to be 1.6 × 10⁵ M⁻¹ based on the titration experiment (Fig. 4b), which is in line with the reported value. 15 For the displacement titration, different aliquots of TQ solution were added to an aqueous solution of the encapsulated adduct CB[7]-AO, and a continuous decrease in the emission spectrum of CB[7]-AO was observed (Fig. 4c). The addition of TQ was stopped after reaching a plateau in the experimental data, which confirmed that equilibrium between the encapsulated and free TQ had been reached. By fitting the data of the fluorescence displacement titration experiment (adding TQ to the CB[7]-AO solution), the binding constant of CB[7] and TQ was obtained (K = 3.2 × 10³ M⁻¹) (Fig. 4d). This value was very similar to the value obtained from the UV-visible titration (2.5 × 10³ M⁻¹).
Fig. 3 The UV-visible absorbance spectra of the titration experiment of TQ (1 mM) and CB[7] in water at ambient temperature (a); Benesi-Hildebrand plot of 1/ΔAbs versus 1/[CB7] for the CB[7]-TQ inclusion complex using UV-visible titration data (b).
DFT calculations
The optimized structures of free TQ, CB[7], and the CB[7]-TQ complex are shown in Fig. 5. The cavity volume of CB[7] (242 Å³) can accommodate only one TQ molecule (size 161 Å³), resulting in a packing coefficient (PC% = guest size/cavity volume) of 67%, which verified the experimentally observed 1 : 1 binding stoichiometry. The calculations showed that formation of the inclusion complex was stabilized by −37 kcal mol⁻¹ in the gas phase compared to the free components. The obtained structure of the CB[7]-TQ complex showed that the benzoquinone ring was fully encapsulated inside the hydrophobic cavity of CB[7], with the methyl substituent positioned inside the cavity and close to the carbonyl rim. In contrast, the isopropyl substituent was partially excluded and more exposed to the surroundings. These structural results are in line with the ¹H NMR data.
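Written out explicitly with the values quoted above (the complexation-energy expression is the usual definition and is assumed here):

```latex
% Packing coefficient and gas-phase complexation energy, using the values quoted above
\[
\mathrm{PC} = \frac{V_{\mathrm{guest}}}{V_{\mathrm{cavity}}}
            = \frac{161\ \mathring{A}^3}{242\ \mathring{A}^3} \approx 0.67 = 67\%
\]
\[
\Delta E_{\mathrm{complexation}} = E(\mathrm{CB7{\cdot}TQ}) - E(\mathrm{CB7}) - E(\mathrm{TQ})
  \approx -37\ \mathrm{kcal\ mol^{-1}}
\]
```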
Cytotoxicity
The cytotoxic effect of the CB[7]-TQ complex against breast cancer cells was investigated using the MTT assay. Moreover, the toxicities of CB[7], βCD, free TQ, and the βCD-TQ complex were assessed and compared to that of the CB[7]-TQ complex. To do so, a triple-negative human breast cancer cell line (MDA-MB-231), an estrogen receptor (ER)-positive human breast cancer cell line (MCF-7), and a normal human dermal fibroblast cell line (HDF) were used. The three cell lines were treated with different concentrations of CB[7], βCD, free TQ, the βCD-TQ complex, or the CB[7]-TQ complex for 72 h. In general, the results revealed higher toxicity of free TQ, the βCD-TQ complex, and the CB[7]-TQ complex toward both breast cancer cell lines (Fig. 6a and b) compared to the normal fibroblasts (Fig. 6c). Furthermore, based on the IC50 values (Table 2), the toxicity of the βCD-TQ and CB[7]-TQ complexes was higher than that of free TQ in MDA-MB-231 cells, whereas there were no significant differences in the cytotoxicity of TQ, the βCD-TQ complex, and the CB[7]-TQ complex in MCF-7 cells (Table 2). Importantly, no significant toxicity against the three tested cell lines was observed for the βCD and CB[7] treatments. All of these results confirmed that the anticancer activity of TQ was almost unaffected by the kind of host molecule used to form the inclusion complexes (based on the IC50 values, Table 2). An interesting finding of the current work is the similar antiproliferative effect of TQ when complexed with either CB[7] or βCD. This similarity is most likely due to the formation of 1 : 1 inclusion complexes and the ability of both CB[7] and βCD to deliver and release TQ into the cells in a similar manner. An in-depth comparative study of the uptake and release of TQ from CB[7] and βCD would be of high interest for further understanding.
The antiproliferative activity of TQ loaded into different nanocarriers has been described in several studies and has been shown to be either higher than or equal to that of free TQ. 9,33-37 For instance, Bhattacharya et al. reported a similar antiproliferative effect of TQ loaded into polymeric nanoparticles against MCF-7 when compared to free TQ. 37 Moreover, Ganea et al. revealed twofold higher toxicity of TQ loaded into PLGA nanoparticles in MDA-MB-231 cells compared to free TQ. 35 In contrast, Odeh et al. showed that the encapsulation of TQ in βCD enhanced its antiproliferative activity against MCF-7. 9 Encapsulation of TQ in liposomes also enhanced its solubility and hence its bioavailability against MCF-7 and MDA-MB-231 cell lines. 7 TQ's mode of action on different cell lines is an important factor in choosing a suitable carrier, aside from enhanced solubility and bioavailability. For example, Sunoqrot et al. reported the encapsulation of TQ in polymeric particles and tested their activity on various cell lines. 38 Their results showed no enhancement of the antiproliferative activity of the TQ-polymer compared to free TQ, but the selectivity index and in vivo bioavailability were more prominent. TQ, either free, encapsulated in a nanocarrier, or in combination with other chemotherapeutics, is very promising against many cancers, including cancer stem cells. [39][40][41] Moreover, the results also confirmed the low toxicity of CB[7], which supports the use of CB[7] as a drug carrier. Previous reports on the toxicity and safety of CB[n] demonstrated their relative safety and acceptable toxicity in vitro and in vivo. 42,43 For example, Oun et al. reported the lack of significant tissue-specific toxicity of CB[6] and CB[7] using an ex vivo rat model. 42 Moreover, Hettiarachchi et al. revealed the very low toxicity of CB[5] and CB[7] and high cell tolerance at concentrations of up to 1 mM when tested against normal human cell lines originating from kidney, liver, and blood tissues. 43
Conclusion
The inclusion complex CB[7]-TQ was successfully prepared in a 1 : 1 mole ratio of CB[7] : TQ with moderate binding strength (K of 3 × 10³ M⁻¹) in aqueous solution. As a result of this inclusion process, the solubility of TQ was clearly enhanced (based on the ¹H NMR spectroscopy study), which is useful for increasing the bioavailability of TQ in aqueous solution. The experimental and computational results are consistent with the inclusion of a single TQ molecule within the CB[7] cavity. The cytotoxic effect of the CB[7]-TQ complex was investigated against cancer and normal cell lines, which revealed that the anticancer activity of TQ can be enhanced by forming a stable inclusion complex with CB[7]. The results also showed that the anticancer activity of TQ was not affected by the kind of host molecule used in the complexation process. Overall, the results highlight the potential of supramolecular complexation in medicinal chemistry and drug delivery applications.
Conflicts of interest
There are no conflicts to declare. | 2022-01-14T16:39:11.603Z | 2022-01-12T00:00:00.000 | {
"year": 2022,
"sha1": "d0a5e7b25e0dec3140901816eeaed81b8d373ae9",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/ra/d1ra08460g",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b03e36c4b7ddc727343d17dae09871c835985709",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248316732 | pes2o/s2orc | v3-fos-license | Exploring systems thinking competence of Finns in fostering sustainable transformation
Systems thinking competence is one of the key sustainability competences for making the future more sustainable, focusing on individuals' capability to analyse sustainability problems across different sectors and scales. The other competences that foster systems thinking are futures thinking competence, values and critical thinking competence, action-oriented competence, and collaboration competence. In this study, we examined Finnish people's systems thinking competence and its connections to sustainable transformation. The survey data collected from Finns (n = 2006) were analysed using principal component analysis (PCA) and hierarchical regression analysis. The study showed that the sustainability items loaded reliably onto the principal components. In particular, the Cronbach's alpha (0.91) and Spearman-Brown coefficient (0.90) were high for systems thinking competence. The hierarchical regression analysis showed that Finns' values, critical thinking, and individual action-oriented competence predict their systems thinking competence. The results indicate that Finns' ideas of climate change and biodiversity loss mitigation arise from their individual values and from the opinion that actions should be implemented in an ethically just way.
Introduction
The Intergovernmental Panel on Climate Change (IPCC) has emphasised that global warming should be limited to 1.5 degrees Celsius compared to pre-industrial times. There are less than 10 years left to complete the implementation of the climate action needed to achieve the global carbon targets [1]. The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) [2] pointed out that nature and its vital contributions to people, which together embody biodiversity and ecosystem functions and services, are vulnerable. We must ensure nature conservation and sustainability while other global societal goals are simultaneously met through urgent and concerted efforts fostering transformative change. School education alone will not be sufficient to disseminate transformative education on climate change and biodiversity loss and to reach the IPCC and IPBES targets. We must educate and engage citizens meaningfully and actively to respond to the current sustainability urgency. Despite the widespread realization of the unsustainability of the modern way of life and the urgency of mitigating sustainability issues, people face significant difficulties in making the necessary decisions and taking action. The reasons for these difficulties are complex and related to the complexity of the global issue itself, but some difficulties derive from shortages of individual and sustainability competencies. Individuals are involved in the process of rethinking future possibilities, how values are actualized, and how to build a sustainable way of living [3]. Therefore, sustainability education requires new values and modes of thought and action that foster individuals' sustainability competences.
Accordingly, the OECD's [4] learning framework promotes the need for futures thinking by suggesting that the required competencies for engaging with the world are to be learned in a sequenced process of reflection, anticipation, and action. We define competence as a combination of skills, knowledge, and attitudes that enable a particular task to be performed or a problem to be solved [5][6][7]. In this study, we examined and analysed the following sustainability competences: systems thinking competence, futures thinking competence, values thinking competence, strategic thinking competence, and collaboration competence [7,8].
Sustainability is a normative concept meaning an ideal state of being in which humans are able to flourish within the ecological thresholds of the planet alongside other living entities for permanency [9]. There are two underlying beliefs of sustainability change: (1) that the world operates as a complex system and (2) that humans operate based on care rather than needs [9]. From a systems perspective, sustainability is the ability of systems to persist, adapt, transform, or transition under varying conditions [10]. In this study, systems thinking competence refers to an individual's capability to analyse sustainability problems across different sectors and scales and to apply the characteristics of systems thinking [11]. Individuals are capable of applying systems concepts, such as systems ontologies, features of systems elements, the interaction of elements, feedback loops, and structuration, to sustainability issues. In the perception and construction of knowledge related to phenomena, emergence is a core concept of systems thinking [12,13]: emergence arises when a large number of factors interact at the same time, and it is the result of lower-level interactions when the system is pushed out of equilibrium. For example, the interaction of forestry measures and natural processes produces properties that would not be possible on their own. Second, in systems thinking it is important to recognise the interconnectedness of social, economic, and ecological systems, which is critical to achieving sustainability [14,15]. Individuals are also able to describe the need for systemic thinking in sustainability problem solving, such as for anticipating future trajectories from a systems perspective and for analysing sustainability transition strategies. Third, in this analysis, the understanding of interactions with feedback, nonlinearity, dynamics, and the emergence of complex behaviours over time is essential to systems thinking. Moreover, understanding feedback as an underlying governance mechanism can inform decision making [16]. Fourth, in systems thinking, the ability of an individual to maintain the basic structure and to manage resilience represents the adaptive capacity of the system [12,17]. It has been observed that when individuals adopt systems thinking, competitiveness, resilience, and survival are improved [14]. Individuals may also build an adaptive capacity by engaging in transformative learning processes [18][19][20]. Sterling [20] pointed out 'that not only do current ways of thinking, perceiving and doing need to change in response to critical systemic conditions of uncertainty, complexity and unsustainability, but that old paradigms are the root of these conditions'. Therefore, transformative learning processes should include learning to deal with sustainability change, enhancing diversity, systems-level learning, and creating conditions for self-organisation to emerge. Self-organisation is the fifth core concept of systems thinking. Williams et al. [10] pointed out that self-organising systems develop their own structure and behaviour spontaneously without being guided from the top down, which makes systems thinking challenging. For example, it would be difficult for individuals to outline the big picture of climate change and to find solutions in their own lifestyles because the internal structure and/or functions of climate systems can change in response to many external circumstances.
Transformative learning can create opportunities for self-organising processes towards sustainability [19].
In this sub-study of the broader study, we focused on the assessment of Finnish people's systems thinking competence and its connections to the other sustainability competences. Research shows that systems thinking is one of the key foundations of sustainability thinking [10,[21][22][23]. Systems thinking has become increasingly popular because it provides a 'new way of thinking' to understand and manage complex problems, whether they rest within a local or a global context. As Redman et al. [24] pointed out, the assessment of sustainability competence has not been a primary research interest; the assessment tools are not well developed and are often inappropriately used. Thus, we focus here on moving forward to develop sustainability competence measurement for a larger audience, Finnish citizens, by further developing the theory and measurement related to sustainability competence [7,23,25,26].
The other sustainability competences are based on Wiek et al.'s [7,8] and Brundiers et al.'s [27] foundations. First, futures thinking competency is the 'ability to collectively analyse, evaluate, and craft rich "pictures" of the future related to sustainability issues and sustainability problem-solving frameworks' [7] (pp. 208-209). Here, we scrutinise Finnish people's ability to anticipate how sustainability problems might evolve or occur over time (scenarios), considering inertia, path dependencies, and triggering events. Moreover, we examine how Finns create sustainable and desirable future visions considering evidence-supported alternative development pathways and how they are also able to describe the need for informing strategy building, including prevention, mitigation, and adaptation responses, i.e., responding to scenarios. Second, values thinking competency is the 'ability to collectively map, specify, apply, reconcile, and negotiate sustainability values, principles, goals, and targets' [7] (p. 209). We focus on how Finns are also able to describe the need for values thinking in sustainability problem solving, such as for providing normative orientations to problem analyses, including the carbon footprint estimations and futures thinking activities, such as technological innovation and strategy building, for economic growth thinking. From an ethical point of view, we were also interested in Finnish people's ability to assess the sustainability impact and other activities to make a sustainable future. Third, collaborative competency is the 'ability to motivate, enable, and facilitate collaborative and participatory sustainability research and problem solving' [7] (p. 211). Therefore, we studied how Finns are able to initiate, facilitate, and support different types of collaboration, including teamwork and stakeholder engagement, in sustainability efforts. We describe collaboration as the ability to be aware of one's own feelings, desires, thoughts, behaviours, and personality, as the ability to regulate, motivate, and develop oneself for sustainability issues collaboratively with others. Fourth, in this study, we defined action-oriented competence, which integrates strategic thinking competency and integrated problem-solving competency. Wiek et al. [7] (p. 210) defined strategic thinking competence as follows: the 'ability to collectively design and implement interventions, transitions, and transformative governance strategies toward sustainability'. Integrated problem-solving competency means the ability 'to apply different problem-solving frameworks to complex sustainability problems and develop viable solution options' to 'meaningfully integrate problem analysis, sustainability assessment, visioning and strategy building [8] (p. 251). In the present study, we focused on how Finns are able to use transformational actions and transition strategies towards sustainability, such as actions to mitigate sustainability problems and make progress towards sustainability visions.
Aims
Based on previous studies, we know that Finns' average knowledge about climate change is rather good, and they can make realistic assessments of their own level of knowledge [28]; however, knowledge moves slowly from words to deeds [29]. Tackling sustainability crises requires enormous systems-level changes in the energy, housing, and food sectors [30]. It is important to note that individuals make changes, but we need more knowledge regarding how to develop solutions. By recognising and integrating multiple kinds of systems competence knowledge and know-how, we can help boundary-spanning people, organisations, and tools make it easier to solve the sustainability crisis. We postulate that transitioning to sustainability requires transformative change. Therefore, we focus on how systems competences contribute to sustainability transformation and how they can be explained through the development of futures thinking competence, collaboration competence, action-oriented competence, and values and critical thinking competence. Thus, we developed the following research question:
• How does Finns' systems competence interact with other sustainability competencies?
Materials and Methods
Brundiers et al. [27] further developed Wiek et al.'s [7,8] model of sustainability competence. In this study, we considered these new ideas. In our questionnaire, the competencies to be added to the original model [7] were an integrated problem-solving competency that included the utilisation of combinations of the competencies in the model. Our questionnaire was used to identify and leverage the necessary problem-solving skills. Another competence to be added was intrapersonal competence. This is described as the ability to be aware of one's own feelings, desires, thoughts, behaviours, and personality as well as the ability to regulate, motivate, and develop oneself. The third modified competence in our questionnaire was solution competence, which refers to the collective ability to put plans and visions into practice and to understand the long-term and iterative nature of sustainable development projects.
Procedure and Participation
The target group consisted of 2006 Finnish people living in Finland, Åland excluded. Åland is a Swedish-speaking autonomous region belonging to Finland. The survey was only in Finnish, so Åland was excluded from the survey. The average age was 47.8 years, and the sample was composed of 52.1% females and 47.5% males. Nine respondents (0.4%) did not want to express their gender. We did not perform any statistical comparisons based on the respondents' background data. There were no missing data because answering the questionnaire required an expression of opinion for each question. The data collection was carried out as a web survey tool developed by Feedback Group. Web consumer research panels of the Cint Panel Exchange (CPX) network were used for the target group's definition. Respondents were selected from several different research panels, thus preventing a possible panel-specific structural skew. Respondents were recruited from various web panels using a registration form that asks the panellist about background information. Based on these backgrounds, respondents can be queried and quota-selected. Upon registration, the panellist also agreed that research invitations may be sent to his or her email. Thus, at the beginning of an individual study, consent to the study is no longer specifically requested as the panellist has already given his or her consent. Respondents were selected at the sampling stage based on the demographic structure of Finland. Email invitations to the survey were sent to all panellists who participated in the target group selection. During data collection, additional invitations and reminders were sent to those who did not respond. Each response was rated on a five-point Likert scale: strongly disagree = 1, disagree = 2, no disagreement or agreement = 3, agree = 4, or strongly agree = 5.
Measurements and Statistical Tests
To measure Finns' systems competence, the participants were asked to evaluate their skills related to climate change and nature loss from a systems point of view. Ten possible responses were provided (Table 1). A principal component analysis (PCA) was conducted for the calculation of the principal scores using a regression method. The Kaiser-Meyer-Olkin (KMO) value of 0.94 showed that the sample was suitable for performing the PCA. The principal component solution accounted for 55.8% of the total variance, and the factor loadings were satisfactory (0.50 or greater), α = 0.91, Spearman-Brown = 0.90. To measure Finns' futures thinking competence, the participants were asked to evaluate their anticipatory competence, and nine possible responses were provided (Table 2). The PCA was conducted for the calculation of the principal scores using a regression method. The KMO value was 0.847, and a varimax rotation method was chosen. The total explained variance was 61.4%, and the factor loadings were satisfactory (0.50 or greater) (Table 2). Finally, two scales were created: structural skills (α = 0.81, Spearman-Brown = 0.84) and dynamic skills (α = 0.83, Spearman-Brown = 0.84). To measure Finns' values and critical thinking competence, the participants were asked to evaluate their competence, and 10 possible responses were provided (Table 3). The PCA was conducted for the calculation of the principal scores using a regression method. The KMO value was 0.91, and a varimax rotation method was chosen. The total explained variance was 58.6%, and the factor loadings were satisfactory (0.50 or greater) (Table 3). Finally, two scales were created: criticality (α = 0.82, Spearman-Brown = 0.85) and responsibility (α = 0.80, Spearman-Brown = 0.80). To measure Finns' action-oriented competence, the participants were asked to evaluate their sustainability actions, and 16 possible responses were provided (Table 4). The PCA was conducted for the calculation of the principal scores using a regression method. The KMO value was 0.90, and a varimax rotation method was chosen. The total explained variance was 55.0%, and the factor loadings were satisfactory (0.50 or greater) (Table 4). Finally, three scales were created: society (α = 0.82, Spearman-Brown = 0.80), individual (α = 0.81, Spearman-Brown = 0.83), and no car (α = 0.79, Spearman-Brown = 0.84). Finally, to measure Finns' collaboration competence, the participants were asked to evaluate their collaboration skills, and eight possible responses were provided (Table 5). The PCA was conducted for the calculation of the principal scores using a regression method. The KMO value was 0.91, and a varimax rotation method was chosen. The total explained variance was 57.1%, α = 0.89, Spearman-Brown = 0.86, and the factor loadings were satisfactory (0.50 or greater) (Table 5).
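The scale construction described above combines principal component extraction with internal-consistency checks. The exact software pipeline is not reported in the paper, so the following is only a minimal numpy sketch of the two reliability statistics used here (Cronbach's alpha and the Spearman-Brown split-half coefficient) computed on a hypothetical item-response matrix; the simulated data and item count are illustrative only.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

def spearman_brown(items):
    """Split-half reliability with the Spearman-Brown correction (odd vs. even items)."""
    half1 = items[:, 0::2].sum(axis=1)
    half2 = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(half1, half2)[0, 1]
    return 2.0 * r / (1.0 + r)

# Hypothetical 5-point Likert responses: 2006 respondents x 10 items
rng = np.random.default_rng(0)
latent = rng.normal(size=(2006, 1))
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(2006, 10))), 1, 5)

print(f"alpha = {cronbach_alpha(items):.2f}, Spearman-Brown = {spearman_brown(items):.2f}")
```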
Results
The principal component scores were calculated using regression methods. These scores were used for the hierarchical regression analysis. As shown in Table 6, in the first step, the individual-actions component of action-oriented competence alone was a significant predictor (β = 0.394, p < 0.001). For example, sorting and recycling waste are small-scale systems changes (0.697), whereas reducing consumption (0.652), the use of meat products (0.583), and air travel (0.591) are remarkable systems changes that loaded rather strongly on the individual component (Table 4). It seems evident that society-related actions also predict Finns' systems competence quite well (β = 0.318, p < 0.001) (Table 6). In particular, Finns' willingness to pay environmental taxes (0.719) and to reduce their own salaries (0.713) loaded well on the society component (Table 4). Purchasing an electric car (0.583) and moving to a smaller apartment (0.572) were also considered societal issues as they loaded quite strongly on the society component. In contrast, avoiding cars was a weaker predictor of systems competence (β = 0.131, p < 0.001) (Table 6).

Table 6. Hierarchical regression analysis of Finns' systems competence.
In step two of the model, the values and critical thinking competences were entered, and this step increased the explanation of the regression model by 33%. Finns' critical sustainability competence predicted their systems competence well (β = 0.637, p < 0.001) (Table 6). In particular, Finns' skills in assessing the social justice of sustainability issues (0.792) and their recognition of equality issues (0.769) loaded strongly on the criticality component (Table 3). Critical thinking toward economic growth (0.736) and raising the difficult issues and problems related to climate change (0.702) strongly predicted Finns' systems competence as well (Table 3). The other values and critical thinking component also predicted Finns' systems competence (β = 0.137, p < 0.001) (Table 6). The moral responsibility to reduce one's consumption to solve the sustainability crisis (0.844) and the responsibility to preserve biodiversity (0.841) loaded strongly on the responsibility component (Table 3). Overall, Finns' values and critical thinking competence explained their systems competence well. Finnish people's responses addressed the interactions between different system elements, including, for example, consumption and the world economy. The regression coefficients and the 33% increase in explained variance (ΔR²) indicate that normative issues are clearly related to Finns' systems competence.
In step three, futures thinking was inserted, increasing the explanation of the regression model by 7% (Table 6). These skills consist of structural skills and dynamical skills. According to Levrini et al. [31], structural skills refer to learners' abilities to recognise temporal, logical, and causal relationships and to develop systemic views. Dynamical skills refer to learners' abilities to navigate scenarios, relating local details to global views, past to present and future, and individual to collective actions. Based on the aforementioned definition for this study, structural skills represent the respondent's confidence that the future will be better if sustainability actions are implemented at the systems level. Dynamic skills describe respondents' personal opinions towards the means or skills to make the future better.
Finns' dynamic skills predicted their systems competence well (β = 0.378, p < 0.001) (Table 6). In particular, Finns' beliefs based on different climate scenarios and their knowledge of the most effective climate measures loaded strongly on the dynamic skills component (0.863) (Table 2). Similarly, Finns' skills in evaluating how different climate measures affect the future of the Finnish climate system loaded well on the dynamic skills component (0.844) (Table 2). Overall, Finns' personal opinions about the means or skills to make the future better seem to predict their systems competence well. Because structural skills did not strongly predict Finns' systems competence (β = 0.088, p < 0.001), there is evidence that Finns' own beliefs were more strongly associated with their systems competence than their systems-level trust in making sustainability changes; however, it is noteworthy that the eigenvalue of structural skills was higher than that of dynamic skills (Table 2).
In this final step, individual action-oriented competence, critical competence, dynamic futures skills, and collaboration competence were all significant predictors of systems competence. The final step only increased the explanation of the model by 3% (Table 6). The proportion of variance accounted for by the full model was 70% of the variance, F (8, 1997) = 573.081; p < 0.001. A closer examination of collaboration competence indicated that it predicted Finns' systems competence well (β = 0.226, p < 0.001), (Table 5). Finns' beliefs that they can solve the problems related to climate change mitigation and adaptation with others strongly loaded to the interaction component (0.830). Finns' ability to discuss solutions proactively and constructively (0.810) and to guide discussions on climate change mitigation and adaptation related to consumption and the use of natural resources (0.809) and their constructive and solution-oriented participation in the social climate debate (0.809) loaded strongly to the interaction component.
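The stepwise build-up reported above (predictor blocks entered in sequence, each step judged by the change in explained variance) can be expressed compactly as a hierarchical regression. The sketch below shows the general procedure with ordinary least squares on hypothetical component scores; the block names, sizes, and random data are placeholders, not the study's variables.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with an intercept column added."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 2006
# Hypothetical standardized component scores (placeholders for the real predictors)
blocks = {
    "step 1: action-oriented": rng.normal(size=(n, 3)),
    "step 2: values/critical": rng.normal(size=(n, 2)),
    "step 3: futures thinking": rng.normal(size=(n, 2)),
    "step 4: collaboration": rng.normal(size=(n, 1)),
}
y = sum(b.sum(axis=1) for b in blocks.values()) + rng.normal(size=n)

X, prev_r2 = np.empty((n, 0)), 0.0
for name, block in blocks.items():
    X = np.column_stack([X, block])       # add the next predictor block
    r2 = r_squared(X, y)
    print(f"{name}: R^2 = {r2:.3f}, delta R^2 = {r2 - prev_r2:.3f}")
    prev_r2 = r2
```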
Discussion
This study is part of larger project aiming to develop measurements for sustainability competence. To analyse the structure of the data collected through the scale, a PCA was conducted. Here, PCAs were used to explore the data and find the theory of Finns' sustainability competence. Later, when the theory is known, confirmatory factor analysis (CFA) and Structural Equation Modelling (SEM) can be carried out. However, the findings of PCA showed that the data are suitable for analysing sustainability competence. The validation of the measurements could be further analysed by CFA and SEM and they can be used to perform an extensive investigation into variables' relationship [32].
As systems competence is one of the key competences related to sustainability, we focused on Finns' systems competence interactions with their other sustainability competences. Developing a measurement that considers systems thinking and the complexity of the climate and nature crisis is crucial. Based on the Cronbach's alpha and Spearman-Brown coefficients, the instrument used to measure systems competence had a high reliability. Systems thinking is embedded in many of the existing theories, ontologies, concepts, and tools that are currently being used by different disciplines to address the importance of the transition to a sustainable future. Therefore, the systems thinking measurements should be developed as context-and case-specific. Ben-Zvi Assaraf and Orion pointed out [11] (p. 523) that there are eight characteristics in systems thinking. Here, we focused on Finns' ability to identify the components of a system and processes within the system, the ability to identify relationships among the systems components, the ability to organise the systems' components and processes within a framework of relationships by understanding the hidden dimensions of the system, and the ability to make generalisations. The recently published Greencomp [33] includes systems thinking as the core concept of sustainability competence. This study is an example of how systems competence related to sustainability issues can be measured.
The hierarchical regression analysis revealed that normative critical values thinking predicted Finns' systems competence related to sustainability. The result is similar to previous studies for which critical thinking addressed a core competency, and normative dialogue was a key competency crucial to the success of multi-stakeholder sustainability projects [34,35]. Based on Finns' futures thinking, there is evidence that Finns' individual smaller scale actions for a more sustainable future better predict their systems competence than their ideas of large-scale systems changes, such as changes in global diets. Aarnio-Linnanvuori [36] pointed out that small actions for the environment are possible for young people, but young people's involvement in society is generally viewed as a minor or future issue. The present study showed that similar thinking is notable for adults as well. Heimlich and Ardoin [37] found many challenges to analysing changing environmental behaviours, such as social learning theories, as we more fully consider the practice in which the behaviour will be used in a larger community. In this study, somewhat surprisingly, the action-oriented competence of Finns and their collaboration competence did not significantly increase the explanatory power of the regression model related to the systems competence of sustainability thinking. Ajzen's [38] Theory of Planned Behaviour (TPB) has led to the theory of reasoned action [37], which suggests that human behaviour is influenced by three belief constructs: (1) beliefs about consequences, (2) expectations of important others, and (3) things that may support or prevent the behaviour. More recently, Holdsworth et al. [39] found that sustainability education can be effective in establishing both the knowledge of sustainability and the will to act on these principles in workplaces. In the context of sustainability competence, the TPB should be studied in more detail in the future.
Sustainability competences have mainly been studied in the context of higher education [3,7]. In the era of a sustainability crisis, there is a need to consider sustainability issues in a broader context than only the education sector. Finland has set a goal to be climate-neutral by 2035 [40]. Attention has also been paid to halting the loss of nature, which is crucial to biodiversity and forestry [41]. From the global perspective, this study reveals how individuals' sustainability competences can also be studied in the context of the 2030 Agenda [42]. Sustainable development goals are fundamental to achieving real and effective sustainable development. Without sustainability competences (in all their perspectives), individuals are unable to address the complexity of sustainability problems. This study showed that systemic competence is a challenging skill to develop, which should be addressed in the education of citizens, in particular to improve their understanding of the large scale of environmental actions and the uncertainties associated with them [20]. In addition, it seems evident that looking critically at social sustainability issues may reduce some of the uncertainty related to solving sustainability issues. As Berry et al. [43] pointed out, systems thinking is capable of organizing and interpreting the large and diverse body of information relevant to climate change, its factors, and their causal linkages. Based on the present study, however, generalisation of the results to global citizens is difficult, but such a topic would be a very relevant subject of further study. In the future, it would be useful to use a similar survey approach to determine whether the answers would be different for other countries. The comparative results could be used to prevent problems in addressing sustainability issues, such as by developing sustainability workshops in the workplace that address the learning problems involved in making sustainable change.
Conclusions
The present study is one of the few studies in which sustainability competence has been examined in a fairly large population sample (n = 2006). The results showed that, in the Finnish context, ideas for tackling climate change and biodiversity loss at the systems level arise from Finns' individual values and from the expectation that actions are carried out in an ethically just way. Comparisons with other studies are difficult to make because similar sustainability competences have not been measured in this context. Based on our results, we have determined that three different tasks could enable people to improve their sustainability competences: (1) involvement in real-world problems that require critical thinking, (2) focusing on systems thinking competence, and (3) consideration of large-scale futures thinking for sustainability problems. The Finnish people's responses strengthen Ehrenfeld's [9] ideas that sustainability change requires knowing that the world operates as a complex system. Moreover, the idea that humans operate out of care, not needs, was observed in the ethical responses of Finns.
Institutional Review Board Statement: Ethical review and approval were waived for this study because none of the criteria for ethical review and approval defined by the Finnish Advisory Board on Research Integrity (TENK) were met in the research design. In Finland, ethical review in human sciences applies only to precisely defined research configurations.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. | 2022-04-22T15:27:01.938Z | 2022-04-19T00:00:00.000 | {
"year": 2022,
"sha1": "76210530ed0dba7333ba1d947a1e4f2d1fd57021",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2673-4060/3/2/15/pdf?version=1650450870",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bee65c826a6445a2ee3605947f0ca2f358a2c4e8",
"s2fieldsofstudy": [
"Education",
"Environmental Science",
"Sociology"
],
"extfieldsofstudy": []
} |
234628685 | pes2o/s2orc | v3-fos-license | Preliminary numerical study of three-temperature model investigation of hypersonic oxygen flow under rotational nonequilibrium
The effect of rotational nonequilibrium on the macroscopic parameters of the flow behind a normal shock wave in oxygen gas flow has been examined. The electron thermal equilibrium was taken into account where the electron temperature was equal to the vibrational temperature according to Park’s assumption. Therefore, only the effect of rotational nonequilibrium on the translational and vibrational temperature was analyzed. Rotational and vibrational relaxation time for the O2-O2 and O2-O collisions proposed recently by Andrienko and Boyd are used. Also, the O2 dissociation rates proposed by Kim and Park are used. The results obtained with the three-temperature model well reproduce the data obtained in shock tube for the shock velocity of 4.44 km/s.
To date, many authors prefer to use Park's two-temperature model in their hypersonic simulations, which assumes equilibrium between the translational and rotational modes [1][2][3][4]. This brief communication focuses on the effect of rotational nonequilibrium phenomena behind a normal shock wave for an oxygen flow. To simulate this case, the flow was considered as a reactive mixture of the five species O2, O, O2+, O+, and e-, with eight elementary reactions including dissociation, charge exchange, associative ionization, and electron impact ionization. The reaction rates are calculated using the modified Park93 chemical kinetic model [5], including the O2 dissociation rates proposed by Kim and Park [6]. The 1D Euler equations with rotational and vibrational nonequilibrium source terms are solved just behind the normal shock wave using the finite difference method. The conservation equations of the flow are the equations of continuity, momentum, and energy, together with the relaxation equations of the chemical species and the rotational and vibrational energy continuity equations, where e is the total energy per unit mass, which includes the translational, rotational, and vibrational energies, the enthalpies of formation of the species, the species electronic energy, the electron energy, and the kinetic energy [4]. The model used for calculating the energy exchange between the translational-vibrational modes (ω_VT,s) and the translational-rotational modes (ω_RT,s) has the Landau-Teller form with the high-temperature correction of Park [7]. The vibration-chemistry (ω_CV,s) and rotation-chemistry (ω_CR,s) exchange terms are modelled with the Candler and MacCormack formula [8].
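The Landau-Teller form referred to above drives the vibrational energy toward its local equilibrium value at the translational temperature over a characteristic relaxation time. The paper's full source terms (with Park's high-temperature correction and the chemistry coupling) are not reproduced here; the sketch below only integrates a bare Landau-Teller relaxation for a harmonic oscillator behind a shock, with an assumed constant relaxation time and frozen translational temperature, to show the qualitative approach of the vibrational energy toward equilibrium.

```python
import numpy as np

THETA_V = 2270.0  # characteristic vibrational temperature of O2, K

def e_vib(T):
    """Harmonic-oscillator vibrational energy per unit mass (arbitrary units)."""
    return THETA_V / (np.exp(THETA_V / T) - 1.0)

def landau_teller_step(ev, T, tau, dt):
    """One explicit step of de_v/dt = (e_v*(T) - e_v) / tau (Landau-Teller form)."""
    return ev + dt * (e_vib(T) - ev) / tau

# Illustrative post-shock relaxation: frozen T and constant tau are assumed values
T_trans, tau, dt = 14000.0, 1.0e-6, 1.0e-8
ev = e_vib(300.0)               # vibrational energy frozen at the freestream value
for step in range(2000):
    ev = landau_teller_step(ev, T_trans, tau, dt)
print(f"e_v = {ev:.1f} (equilibrium value {e_vib(T_trans):.1f})")
```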
Here f is the fractional contribution of the rotation-to-vibration energy transfer to the total energy transfer; Kim and Boyd [9] suggested an expression for f in which the pressure p must be given in atmospheres. The rotational relaxation time for O2-O2 collisions is taken from Hanquist and Boyd [11], and for O2-O collisions the rotational and vibrational relaxation times proposed by Andrienko and Boyd [12] are used; the corresponding coefficients are listed in Table 1 (Coefficients of the vibrational and rotational relaxation time [12]).

Figure 1a presents the evolution of the translational, rotational, and vibrational temperatures just behind the shock wave using the 2T and 3T models. The highest experimental freestream condition of [10], a shock velocity of 4.44 km/s (M = 13.4) with an ambient pressure of 0.8 torr (106.66 Pa), is selected as the test case. Using the Rankine-Hugoniot relations, the translational temperature reaches 14000 K behind the shock wave and then decreases slowly toward its equilibrium value; in contrast, a strong rotational and vibrational nonequilibrium is observed. This decrease is mostly the result of the absorption of the kinetic energy of the molecules by molecular rotation and vibration and by chemical reactions such as dissociation and ionization, which are endothermic. In the nonequilibrium region, the rotational relaxation time is shorter than the vibrational relaxation time and, therefore, the rotational temperature is higher than the vibrational temperature. Concerning the comparison of the results, it should be noted that the dissociation rates for the three-temperature model are slower than those of the two-temperature model; therefore, the relaxation zone lengthens while the translational and vibrational temperatures become much closer to the measured values.
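The post-shock translational temperature quoted above follows from the normal-shock (Rankine-Hugoniot) jump relations applied with frozen internal modes. As a rough cross-check only, the sketch below evaluates the calorically perfect ideal-gas jump relation; the freestream temperature of 300 K is an assumption, and taking gamma = 5/3 treats rotation and vibration as frozen immediately behind the shock, which is why it approximates the peak translational temperature rather than the equilibrium value.

```python
import math

R_O2 = 8.314 / 0.032          # specific gas constant of O2, J/(kg K)

def post_shock_T(u_shock, T1, gamma):
    """Translational temperature behind a normal shock (perfect-gas Rankine-Hugoniot)."""
    a1 = math.sqrt(gamma * R_O2 * T1)
    M1 = u_shock / a1
    ratio = ((2 * gamma * M1**2 - (gamma - 1)) * ((gamma - 1) * M1**2 + 2)
             / ((gamma + 1)**2 * M1**2))
    return T1 * ratio

# Shock speed from the test case; the freestream temperature ~300 K is assumed.
# gamma = 5/3 means rotation and vibration are still frozen just behind the shock.
print(f"T just behind shock ~ {post_shock_T(4440.0, 300.0, 5.0/3.0):.0f} K")
```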
The correction of the vibrational relaxation was proposed by Park to take into account the diffusive nature of relaxation that occurs at high temperatures (higher than 5000 K for N2). The bridging formula between the Landau-Teller model (s = 1) and the diffusion model (s = 3.5 [5]) for nitrogen flow has the form s = s′ exp(-5000/T_ch) with s′ = 3.5. According to this formula, the overall vibrational temperature decreases when s′ increases. For example, Fig. 1b shows that changing s′ from 3.5 to 2 considerably increases the vibrational temperature peak, leading to a good agreement with the experimental data. Figure 2 shows the molar-mass concentration of O2 and the density profiles vs time. The initial molar-mass concentration of O2 in the study of [10] is equal to 0.0625/2 = 0.03125 mol/g, equivalent to 1/0.03125 = 32 g/mol, which is the molar mass of O2; therefore, pure oxygen was used as the test case. The results show satisfactory agreement concerning the molar-mass concentration. However, the calculated density deviates from the measured values for both the 2T and 3T models, but the results can be considered acceptable given the low density. In the 2T model, dissociation occurs faster than in the 3T model. According to [13], the vibrational and rotational degrees of freedom do not completely equilibrate before dissociation occurs at higher temperatures (above 5000 K). This means that dissociation is slower under rotational and vibrational nonequilibrium (3T model) than in the 2T model with vibrational nonequilibrium and rotational equilibrium.
Even at this moderate hypersonic speed, the obtained results show that the rotational nonequilibrium slightly increases the thickness of the relaxation zone and, therefore, the translational and vibrational temperatures become closer to the experimental data.

Figure 1. a - Calculations for T according to the 3T (1) and 2T (4) models and for T_r according to the 3T model (2), as well as for T_v according to the 3T (3) and 2T (5) models, and experimental data [10] for T (6) and T_v (7); b - calculations at s′ = 2 (1) and 3.5 (2) and experimental data [10] for T_v (3).

Consequently, a rotational nonequilibrium is important for hypersonic flow and must be taken into account whatever the selected flow, such as oxygen, nitrogen, or air. Also, this work uses the three-temperature model assuming a Boltzmann distribution of the vibrational level populations, which is the standard assumption. However, recent papers [14,16] show some deviations of the real vibrational level populations from the Boltzmann one. In fact, in these experiments, the populations of the highest vibrational energy levels were not taken into account. A more suitable numerical method using the state-to-state (STS) approach will be the scope of a future work. | 2021-05-17T00:03:06.295Z | 2020-11-01T00:00:00.000 | {
"year": 2020,
"sha1": "80b0b6b590c44500a3c2020c596cade7178f33de",
"oa_license": "CCBY",
"oa_url": "https://hal.archives-ouvertes.fr/hal-03172017/file/Ghezali2020.pdf",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "45275016358c73df574ade4faa129878aadd5ab3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
237997647 | pes2o/s2orc | v3-fos-license | Isolation and identification of a high-efficiency phenol-degrading bacteria and optimization of its degradation conditions
Phenol is widely used in China; it not only pollutes the environment but also accumulates as a toxic substance in the human body through the food chain, further harming humans. In this experiment, a high-efficiency, low-temperature phenol-degrading bacterial strain, B5, was isolated from organically contaminated soil in Lanzhou. Through Gram staining, DNA extraction, PCR amplification, sequencing, and sequence comparison, the strain was identified as Pantoea agglomerans. Subsequent optimization of the degradation conditions showed that strain B5 can degrade 500 mg/L of phenol down to 24.8 mg/L within 36 h, that its phenol-degrading ability is strongest between pH 5.5 and pH 6.0, and that degradation is higher in medium containing 4-8 g/L sodium chloride. This research can provide theoretical guidance for phenol degradation.
Introduction
Phenol is a simple phenolic organic weak acid and a common organic pollutant in industrial wastewater from the petroleum, fertilizer, papermaking, and rubber industries [1]. Phenol has strong toxic effects on many organisms. It has carcinogenic, teratogenic, and mutagenic effects, so it has been included in the list of key pollutants in many countries [2]. Phenol can be directly toxic to the human body through contact with the skin and mucous membranes. It can chemically react with the proteins in the cell protoplasm to cause cell inactivation. Low-concentration phenol solutions can only denature protein, while high-concentration phenol solutions can coagulate protein and eventually cause tissue necrosis [3].
In addition to industrial wastewater, feces and nitrogen-containing organic matter will also produce phenolic compounds during the decomposition process. Therefore, a large amount of fecal sewage discharged from cities is also an important source of phenol pollution in water bodies [4] . After the water quality is polluted by phenol-containing wastewater, it will have many serious consequences, such as the destruction of the oxygen balance of the water body, etc. The arbitrary discharge of phenol-containing wastewater leads to deterioration of water quality, which seriously affects people's physical and mental health, and also restricts the sustainable development of cities. Therefore, it is urgent to seek effective environmental protection measures to treat phenol-containing wastewater.
Biological treatment of phenol-containing wastewater mainly uses the metabolism of microorganisms to convert harmful phenolic pollutants into non-toxic substances [5,6]; it produces no secondary pollution and can handle large treatment volumes. Therefore, it has become the first choice for the harmless treatment of phenol-containing wastewater in China. Phenol-degrading organisms that have been obtained include the alga Ochromonas danica [7], rhizobia [8], the yeast Trichosporon cutaneum [9], and Pseudomonas sp. [10], among others. However, for the treatment of high-concentration phenol-containing wastewater, it is necessary to dilute the phenol before treatment to reduce its toxic effect on the organisms, resulting in a substantial increase in the treatment cost. If the tolerance of microorganisms to phenol can be improved and the dilution factor reduced, it will be of great significance to the treatment of high-concentration phenol-containing wastewater. In this experiment, a microbial strain, B5, that can efficiently degrade high-concentration phenol at low temperature was screened and isolated from the organically polluted soil of a chemical plant, which has guiding significance for the future low-temperature treatment of high-concentration phenol-containing wastewater.
Source of strain isolation
The source of the strain used in this experiment came from a chemical plant in Lanzhou, which was polluted by organic matter throughout the year. Soil microorganisms are mainly distributed on the surface of the soil and are sensitive to changes in the surface soil environment, so this experiment takes a 0-10cm soil layer, removes the surface litter layer when sampling, and then uses the quarter method to sample. Put the collected soil sample into a sterile sealed bag and put it in the refrigerator at -20℃ for later use.
Screening and isolation of strains
Take 10 mL of contaminated soil sample in a 50 mL sterile centrifuge tube, add 30 mL of sterile water, and mix thoroughly. After standing for a few minutes, on a sterile workbench, use a pipette to draw 200 μL of the supernatant, spread it evenly on the selection medium with a spreading rod, and then culture it for 72 h in a constant-temperature shaking incubator at 25°C and 200 r/min. Select single colonies according to the shape, size, color, gloss, softness, transparency, edge shape, degree of swelling, and surface smoothness of the colonies grown on the culture medium, inoculate them by the plate streak method onto freshly sterilized selection medium, and culture them for 72 h in a constant-temperature incubator at 25°C. To further purify the strain, the above step was repeated, and single colonies were separated and purified three more times by plate streaking. Through multiple rounds of screening and separation, two pure cultures of phenol-degrading bacteria with the best growth and the largest colony area were finally selected.
Preservation of strains
There are many methods for preservation of bacteria, including: liquid paraffin method, low temperature regular transplantation method, sand preservation method, liquid nitrogen ultra-low temperature preservation method. In this experiment, the liquid paraffin method was used to preserve the selected phenol-degrading bacteria.
Firstly, on a sterile workbench, inoculate the most efficient phenol-degrading strains screened out onto the rescreening medium, and incubate them in a shaker at 25°C and 200 r/min for 36 hours. When microscopic observation shows that the cell concentration is about 2×10⁸ CFU/mL, the culture can be used as the bacterial suspension. Then, place 1.0 mL of the bacterial suspension and 0.5 mL of 30% glycerol in a 1.5 mL cryotube for each strain. Save three copies of each strain, label them, and store them in a refrigerator at -80°C for subsequent experiments.
Molecular identification of bacterial species
(1) PCR amplification. In this experiment, two universal primers (27F and 1492R) were used for PCR amplification, with a reaction volume of 25 μL for each sample.
(2) 16S rDNA sequencing and comparative analysis. The PCR amplification product of the strain was sent to Xi'an Kinke Jersey Biotechnology Co., Ltd. for analysis and sequencing, and the 16S rDNA sequence fragment of the strain was obtained. The measured sequences were compared and analyzed with NCBI's BLAST program, and the strains with the highest similarity were selected.
Determination of phenol degradation ability
(1) Correlation test between cell concentration and phenol degradation ability. The strain was cultured in the acclimation medium to the logarithmic growth phase (OD600 = 0.5-1), and the bacteria were collected by centrifugation at 7000 rpm for 5 min. The bacteria were resuspended, on a wet-weight basis, in re-screened liquid medium containing 500 mg/L of phenol so that the final cell contents reached 1 g/L, 2 g/L, 4 g/L, 8 g/L, and 16 g/L. The cultures were then incubated at a constant temperature of 25°C and 200 r/min, and samples were taken every 12 hours to monitor the phenol content.
(2) Correlation test of phenol degradation ability under different pH conditions. The strain was cultured in the acclimation medium to the logarithmic growth phase (OD600 = 0.5-1), and the bacteria were collected by centrifugation at 7000 rpm for 5 min. The bacteria were resuspended in re-screened liquid medium at different pH values (pH 4.5, 5.0, 5.5, 6.0, 6.5, and 7.0) to give a cell content of 8 g/L in the medium. The cultures were then incubated at a constant temperature of 25°C and 200 r/min, and samples were taken every 12 hours to monitor the phenol content.
(3) Correlation test of phenol degradation ability under different salt concentration conditions
The strain was cultured in the acclimation medium to the logarithmic growth phase (OD600 = 0.5-1), and the bacteria were collected by centrifugation at 7000 rpm for 5 min. The bacteria were resuspended in acclimation media containing different sodium chloride concentrations (0.5 g/L, 1 g/L, 2 g/L, 4 g/L, 8 g/L, 16 g/L, and 32 g/L), with a cell content of 8 g/L in the culture medium. The cultures were then incubated at a constant temperature of 25°C and 200 r/min, and samples were taken every 12 hours to monitor the phenol content.
Determination of phenol content
Phenol content was determined by ultraviolet-visible spectrophotometric analysis and by HPLC on a C-18 column, with methanol, formic acid, and double-distilled water at a volume ratio of 33:8:1 as the mobile phase; preparatory experiments were carried out to determine the internal standard.
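Quantifying phenol by UV-Vis spectrophotometry normally relies on a calibration curve of absorbance against known phenol concentrations. The paper does not detail this step, so the following is only a minimal sketch assuming a linear Beer-Lambert calibration; the standard concentrations and absorbance values are hypothetical.

```python
import numpy as np

# Hypothetical calibration standards: phenol concentration (mg/L) vs. absorbance
conc_std = np.array([0, 100, 200, 300, 400, 500], dtype=float)
abs_std = np.array([0.00, 0.11, 0.23, 0.34, 0.45, 0.57])

# Linear least-squares fit: A = slope * C + intercept
slope, intercept = np.polyfit(conc_std, abs_std, 1)

def phenol_concentration(absorbance):
    """Invert the calibration line to estimate phenol concentration (mg/L)."""
    return (absorbance - intercept) / slope

print(f"Sample with A = 0.03 -> ~{phenol_concentration(0.03):.0f} mg/L phenol")
```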
Results and analysis
4.1 Strain isolation and morphological observation
After separation and purification, 15 strains of phenol-tolerant bacteria were obtained at 18°C. After further screening, the strain that grew best at low temperature and showed good phenol-degradation performance was finally selected and named B5.
Strain identification
(1) Detection and analysis of PCR products of 16S rDNA The amplified PCR products were measured by an agarose gel electrophoresis instrument and observed with an ultraviolet gel imager. The gel electrophoresis pattern of PCR products is shown in Figure 3.5. The electrophoresis bands are clear and there is no dragging phenomenon, indicating that these PCR amplified products can be used in subsequent experiments. After the PCR amplification is completed, the amplified products are sent to Xi'an Kinke Jersey Biotechnology Co., Ltd. for analysis and sequencing, and the 16S rDNA sequence fragment of the strain is obtained. The measured sequence was compared and analyzed with 16S rDNA gene sequence on the BLAST program on the NCBI website, and it was determined that the degrading bacterium B5 was Pantoea agglomerans.
(3) Phylogenetic tree. Analysis software such as MEGA and Clustal X 1.38 was used to analyse sequence homology and construct a phylogenetic tree, as shown in Figure 2.
Optimization of phenol degrading bacteria conditions
(1) Correlation test between cell concentration and phenol degradation ability. The strains were cultured in the acclimation medium to the logarithmic growth phase (OD600 = 0.5-1), collected by centrifugation at 7000 rpm for 5 min, and resuspended, on a wet-weight basis, in re-screened liquid medium containing 500 mg/L of phenol so that the final cell contents reached 1 g/L, 2 g/L, 4 g/L, 8 g/L, and 16 g/L. The cultures were then incubated at a constant temperature of 25°C and 200 r/min, and samples were taken every 12 hours to monitor the phenol content. Figure 3 shows that the B5 strain with an initial cell concentration of 8 g/L can degrade 500 mg/L of phenol to 24.8 mg/L in 36 h. (2) For the pH test, the strains were cultured to the logarithmic growth phase (OD600 = 0.5-1), collected by centrifugation at 7000 rpm for 5 min, and resuspended in re-screened liquid media at different pH values (pH 4.5, 5.0, 5.5, 6.0, 6.5, and 7.0) to give a cell content of 8 g/L. The cultures were then incubated at 25°C and 200 r/min, and samples were taken over 24 hours to monitor the phenol content. Figure 4 shows that the B5 strain has a strong ability to degrade phenol between pH 5.5 and pH 6.0. (3) For the salt concentration test, the strains were cultured to the logarithmic growth phase (OD600 = 0.5-1), collected by centrifugation at 7000 rpm for 5 min, and resuspended in acclimation media containing different sodium chloride concentrations (0.5 g/L, 1 g/L, 2 g/L, 4 g/L, 8 g/L, 16 g/L, and 32 g/L), with a cell content of 8 g/L in the culture medium. The cultures were then incubated at 25°C and 200 r/min, and samples were taken over 24 hours to monitor the phenol content. Figure 5 shows that the B5 strain has a higher ability to degrade phenol in a medium containing 4-8 g/L sodium chloride, but a relatively low ability in sodium chloride media above 8 g/L.
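From the numbers reported above, simple summary metrics such as the removal efficiency and an apparent first-order rate constant can be derived. The short sketch below does only that arithmetic; treating the decay as first-order is an assumption made here for illustration, not a kinetic model proposed by the authors.

```python
import math

c0, c_final, hours = 500.0, 24.8, 36.0   # mg/L, mg/L, h (values from the text)

removal = (c0 - c_final) / c0 * 100.0
k_apparent = math.log(c0 / c_final) / hours   # assumed first-order decay

print(f"Removal efficiency: {removal:.1f}%")
print(f"Apparent first-order rate constant: {k_apparent:.3f} 1/h")
```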
Conclusion
In this experiment, we isolated and screened a microbial strain, B5, that can use phenol as its sole carbon source and efficiently degrade phenol; it was obtained from soil contaminated by organic matter all year round. Gram staining showed that it is a Gram-negative coccus, and 16S rDNA sequence identification determined that the bacterium is Pantoea agglomerans. Subsequent optimization of the degradation conditions showed that B5 can degrade 500 mg/L of phenol to 24.8 mg/L within 36 h, that its phenol-degrading ability is strong between pH 5.5 and pH 6.0, and that it degrades phenol more effectively in medium containing 4-8 g/L sodium chloride. This research can provide theoretical guidance for phenol degradation.
"year": 2021,
"sha1": "9ee12c930aeaa695061720c81083e77432adb4ab",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/69/e3sconf_gceece2021_01028.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "fcbdeff6b4de01afdb8d425c794aed94094248db",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
17446412 | pes2o/s2orc | v3-fos-license | Rhythmic Effects of Syntax Processing in Music and Language
Music and language are human cognitive and neural functions that share many structural similarities. Past theories posit a sharing of neural resources between syntax processing in music and language (Patel, 2003), and a dynamic attention network that governs general temporal processing (Large and Jones, 1999). Both make predictions about music and language processing over time. Experiment 1 of this study investigates the relationship between rhythmic expectancy and musical and linguistic syntax in a reading time paradigm. Stimuli (adapted from Slevc et al., 2009) were sentences broken down into segments; each sentence segment was paired with a musical chord and presented at a fixed inter-onset interval. Linguistic syntax violations appeared in a garden-path design. During the critical region of the garden-path sentence, i.e., the particular segment in which the syntactic unexpectedness was processed, expectancy violations for language, music, and rhythm were each independently manipulated: musical expectation was manipulated by presenting out-of-key chords and rhythmic expectancy was manipulated by perturbing the fixed inter-onset interval such that the sentence segments and musical chords appeared either early or late. Reading times were recorded for each sentence segment and compared for linguistic, musical, and rhythmic expectancy. Results showed main effects of rhythmic expectancy and linguistic syntax expectancy on reading time. There was also an effect of rhythm on the interaction between musical and linguistic syntax: effects of violations in musical and linguistic syntax showed significant interaction only during rhythmically expected trials. To test the effects of our experimental design on rhythmic and linguistic expectancies, independently of musical syntax, Experiment 2 used the same experimental paradigm, but the musical factor was eliminated—linguistic stimuli were simply presented silently, and rhythmic expectancy was manipulated at the critical region. Experiment 2 replicated effects of rhythm and language, without an interaction. Together, results suggest that the interaction of music and language syntax processing depends on rhythmic expectancy, and support a merging of theories of music and language syntax processing with dynamic models of attentional entrainment.
INTRODUCTION
Music and language are both universal human cognitive functions, but the degree to which they share cognitive resources is a long-standing debate in cognition. Theorists have argued for a shared evolutionary origin (Mithen, 2006), as well as extensive structural similarities between music and language (Lerdahl and Jackendoff, 1983;Botha, 2009), while others have argued for significant differences between music and language processing and domain specificity of the two domains (Peretz and Coltheart, 2003). Although syntax usually refers to the rules that govern how words and phrases are arranged in language, syntactic structure also exists in other domains, such as music. Musical syntax can be understood as the rules that define how pitches are organized to form melody and harmony. Western tonal harmony, like language, is organized in hierarchal structures that are built upon discrete and combined elements (Lerdahl and Jackendoff, 1983). Syntax in Western music can be realized in the structured organization of the 12 chromatic tones into diatonic scale degrees within tonal centers, which form chords within harmonic progressions. Both musical and linguistic structures unfold syntactically over time.
One theory that has influenced research in the structures of music and language is the Shared Syntactic Integration Resource Hypothesis (SSIRH), which postulates an "overlap in the neural areas and operations which provide the resources for syntactic integration" (Patel, 2003). The hypothesis reconciles contrasting findings between neuropsychology and neuroimaging studies on syntax processing, by suggesting that the same syntactic processing mechanisms act on both linguistic and musical syntax representations. The SSIRH predicts that the syntactic processing resources are limited, and thus studies with tasks combining musical and linguistic syntactic integration will show patterns of neural interference (Patel, 2003). While topics of ongoing debate concern the nature of the resources that are shared (Slevc and Okada, 2015) and the extent to which such resources are syntax-specific (Perruchet and Poulin-Charronnat, 2013), convergent studies do provide evidence for some shared processing of music and language, with evidence ranging from behavioral manipulations of syntactic expectancy violations in music and language (e.g., Fedorenko et al., 2009;Slevc et al., 2009;Hoch et al., 2011) to cognitive neuroscience methods such as ERP and EEG studies that track the neural processing of syntax and its violations (e.g., Koelsch et al., 2005;Steinbeis and Koelsch, 2008;Fitzroy and Sanders, 2012).
One piece of evidence in support of the shared processing of musical and linguistic syntax comes from a reading time study in which musical and linguistic syntax were manipulated simultaneously (Slevc et al., 2009). Reading time data for a self-paced reading paradigm showed interactive effects when linguistic and musical syntax were simultaneously violated, suggesting the use of the same neural resources for linguistic and musical syntax processing. In this self-paced reading paradigm, linguistic syntax was violated using garden path sentences, whereas musical syntax was violated using harmonically unexpected musical chord progressions.
As both musical and linguistic syntax unfold over time, the timing of both musical and linguistic events may affect such sharing of their processing resources. Rhythm, defined as the pattern of time intervals in a stimulus sequence, is usually perceived as the time between event onsets (Grahn, 2012a). As a pattern of durations that engenders expectancies, rhythm may represent its own form of syntax and thus be processed similarly to both musical and linguistic syntax in the brain (Fitch, 2013). It has also been suggested that rhythm is an implicitly processed feature of environmental events that affects attention and entrainment to events in various other domains such as music and language (Large and Jones, 1999). Specifically, the Dynamic Attending Theory (DAT) posits a mechanism by which internal neural oscillations, or attending rhythms, synchronize to external rhythms (Large and Jones, 1999). In this entrainment model, rhythmic processing is seen as a fluid process in which attention is involuntarily entrained, in a periodic manner, to a dynamically oscillating array of external rhythms, with attention peaking with stimuli that respect the regularity of a given oscillator (Large and Jones, 1999;Grahn, 2012a). This process of rhythmic entrainment has been suggested to occur via neural resonance, where neurons form a circuit that is periodically aligned with the stimuli, allowing for hierarchical organization of stimuli with multiple neural circuits resonating at different levels, or subdivisions, of the rhythm (Large and Snyder, 2009;Grahn, 2012a;Henry et al., 2015). One piece of evidence in support of the DAT comes from Jones et al. (2002), in which a comparative pitch judgment task was presented with interleaving tones that were separated temporally by regular inter-onset intervals (IOIs) that set up a rhythmic expectancy. Pitch judgments were found to be more accurate when the tone to be judged was separated rhythmically from the interleaving tones by a predictable IOI, compared to an early or late tone that was separated by a shorter or longer IOI, respectively. The temporal expectancy effects from this experiment provide support for rhythmic entrainment of attention within a stimulus sequence.
Both SSIRH and DAT make predictions about how our cognitive system processes events as they unfold within a stimulus sequence, but predictions from SSIRH pertain to expectations for linguistic and musical structure, whereas those from DAT pertain to expectations for temporal structure. The two theories should converge in cases where expectations for music, language, and rhythm unfold simultaneously.
Aims and Overall Predictions
The current study aims to examine the simultaneous cognitive processing of musical, linguistic, and rhythmic expectancies. We extend the reading time paradigm of Slevc et al. (2009), by borrowing from the rhythmic expectancy manipulations of Jones et al. (2002), to investigate how the introduction of rhythmic expectancy affects musical and linguistic syntax processing. Rhythmic expectancy was manipulated through rhythmically early, on-time, or late conditions relative to a fixed, expected onset time. As previous ERP data have shown effects of temporal regularity in linguistic syntax processing (Schmidt-Kassow and Kotz, 2008), rhythmic expectancy is expected to affect syntax processing. The current behavioral study more specifically assesses how rhythmic expectancy may differentially modulate the processing of musical and linguistic syntax.
EXPERIMENT 1
Methods
Participants read sentences that were broken down into segments, each of which was paired with a chord from a harmonic chord progression. Linguistic syntax expectancy was manipulated using syntactic garden-path sentences, musical expectancy was manipulated using chords that were either in key or out of key, and rhythmic expectancy was manipulated by presenting critical region segments early, on time, or late.
Participants
Fifty-six undergraduate students from Wesleyan University participated in this study in return for course credit. A recording error resulted in the loss of data for 8 out of the 56 total students, and so 48 participants' data were used in the final analysis. Of the remaining participants, all reported normal hearing. Twenty-eight participants (58.3%) reported having prior music training, averaging 6.8 years (SD = 3.4). Twenty-five (52%) participants identified as female, and 23 as male. Thirty-eight (79.1%) reported that their first language was English, three were native speakers of English and one other language, and seven had a language other than English as their first language. Other than English, participants' first languages included Chinese (Mandarin), Arabic, Thai, Japanese, Spanish, French, German, Vietnamese, and Bengali. Sixteen participants (33.3%) spoke more than one language. All participants had normal or corrected-to-normal vision and reported being free of psychiatric or neurological disorders. Informed consent was obtained from all subjects as approved by the Ethics Board of Psychology at Wesleyan University.
Materials
All experiments were conducted in Judd Hall of Wesleyan University. An Apple iMac and Sennheiser HD280 pro headphones were used for the experiments, with MaxMSP software (Zicarelli, 1998) for all stimulus presentation and response collection.
Stimuli
The current study used 48 sentences from Slevc et al. (2009). These sentences were divided into segments of one or several words, and presented sequentially on the iMac screen using MaxMSP. Twelve of the sentences were syntactic garden paths, which were manipulated to be either syntactically expected or unexpected at the critical region (by introducing a garden path effect; see Figure 2). Reading time (RT) comparisons between different conditions were controlled for length of segment because the critical regions are always the same number of words (as shown in Figure 1) in the different conditions. Sentence segments with the paired harmonic progression were presented at a critical region, either on-time (at the regular inter-onset interval of 1200 ms) or "jittered" to be either early or late. The early jitter was 115 ms earlier than the on-time presentation, and the late jitter was 115 ms later than the on-time presentation. Thus, the IOIs were either 1200-115 = 1085 ms (early), 1200 ms (on-time), or 1200 + 115 = 1315 ms (late; Figure 2). 115 ms was selected as the temporal jitter based on pilot testing and the IOIs used in Experiment 2 of Jones et al. (2002) in their manipulation of temporal expectancy. Accompanying chord progressions were played in MIDI using a grand piano timbre. These 48 different progressions were also from Slevc et al. (2009) and followed the rules of Western tonal harmony, and were all in the key of C major. Out-of-key chords violated harmonic expectancy given the context, but were not dissonant chords by themselves (Figure 1). A yes-or-no comprehension question was presented at the end of each trial (sentence). Participants' task was to press the spacebar on the keyboard as soon as they had read each sentence segment, and to answer "yes" or "no" to the comprehension questions. Ninety-six unique comprehension questions, two for each sentence, were written so each sentence would have one comprehension question written to have a correct answer "yes," and another to have a correct answer "no." The comprehension questions are now given in the Supplementary Materials accompanying this manuscript.
FIGURE 1 | Experiment design: Schematic illustration of experimental design and stimuli presented in one trial.
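The timing manipulation described above reduces to simple arithmetic on segment onsets. The following is an illustrative reconstruction in Python, not the authors' MaxMSP patch; it also assumes (the text does not specify this) that only the critical-region segment is shifted, with later segments returning to the fixed 1200 ms grid.

```python
# Illustrative sketch of the rhythmic jitter manipulation (values from the text).
BASE_IOI_MS = 1200
JITTER_MS = 115
OFFSETS = {"early": -JITTER_MS, "on-time": 0, "late": JITTER_MS}

def segment_onsets(n_segments, critical_index, rhythm):
    """Return presentation onsets (ms) for each segment of one sentence.

    Assumption (not stated in the text): only the critical-region segment is
    shifted, and later segments return to the fixed 1200 ms grid.
    """
    onsets = []
    for i in range(n_segments):
        onset = i * BASE_IOI_MS
        if i == critical_index:
            onset += OFFSETS[rhythm]
        onsets.append(onset)
    return onsets

# Example: a 6-segment sentence whose critical region is the fourth segment.
print(segment_onsets(6, 3, "early"))  # [0, 1200, 2400, 3485, 4800, 6000]
print(segment_onsets(6, 3, "late"))   # [0, 1200, 2400, 3715, 4800, 6000]
```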
Twelve unique experimental modules were created in order to counterbalance the experimental design. Each module contained all 48 sentences, with violation and filler conditions rotated through the sentences in order to control for systematic effects of content, length, and sentence order. Each module contained: 4 rhythmic violation trials (2 early and 2 late), 3 musical syntax violation trials, 1 linguistic syntax violation trial, 5 musical syntax plus rhythmic violation trials, 1 linguistic plus musical syntax violation trial, 2 linguistic syntax plus rhythmic violation trials, 2 trials with all 3 violations, and 30 sentences with no violations. Therefore, in a given module only 37.5% of trials contained any violation. Half of the sentences in a given module were assigned a "yes" question, the other half were assigned a "no." The order of the trials was randomized for each subject.
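To make the counterbalancing arithmetic explicit, the following is a small illustrative check; the condition labels are ours, and the counts are those listed above. It simply verifies that each module holds 48 trials, of which 18 (37.5%) contain at least one violation.

```python
# Sketch of the stated module composition; labels are placeholders, counts from the text.
module_counts = {
    "rhythm_only": 4,             # 2 early + 2 late
    "music_only": 3,
    "language_only": 1,
    "music+rhythm": 5,
    "language+music": 1,
    "language+rhythm": 2,
    "language+music+rhythm": 2,
    "no_violation": 30,
}

total = sum(module_counts.values())
violations = total - module_counts["no_violation"]
assert total == 48
print(f"{violations}/{total} = {violations / total:.1%} of trials contain a violation")
# -> 18/48 = 37.5%, matching the figure reported above.
```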
Procedure
Before beginning the experiment, the participants gave informed consent and completed a short background survey. The participants were then instructed to pay close attention to the sentences being read, rather than the chord progressions that were heard over the headphones. Then, the participants ran through a set of practice trials. After the practice trials, in the actual experiment the experimenter selected one of the 12 possible experimental modules at random. Participants were instructed to press the spacebar on the keyboard as soon as they had read the sentence segment, and then wait for the next segment to be presented. Pressing the spacebar caused the current sentence segment to disappear and an indicator button labeled "I read it" to light up. The following segment appeared at a fixed IOI regardless of when the current segment disappeared. After the end of each sentence, a yes-or-no comprehension question was displayed, at which point participants answered the question by pressing Y or N on the keyboard. Answering the comprehension question cued a new trial. The experiment lasted ∼20 min. Examples of different types of trials are shown in a video demo in the Supplementary Materials accompanying this manuscript.
Data Analysis
RT and response data were saved as text files from MaxMSP, and imported into Microsoft Excel and SPSS for statistical analysis. RTs were log-transformed to normal distribution for statistical testing. Only RTs at the pre-critical, critical, and post-critical regions for each trial were used for analysis. Filler trials were, therefore, excluded from analysis (21 trials per subject). Of the remaining trials, trials with RTs that were two or more standard deviations from the mean of log-transformed critical region RTs were excluded as outliers, resulting in a range of 102.76-816.74 ms. These criteria led to the exclusion of 92 (7.20%) observations from critical regions in Experiment 1.
FIGURE 2 | Rhythmic effects on music and language: RT differences between critical region and pre-critical region for linguistically and musically expected and unexpected conditions during rhythmically early (A), on-time (B), and late (C) conditions. Error bars show standard error.
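The outlier rule described above can be summarized with a short sketch. This is an illustrative reconstruction rather than the authors' SPSS/Excel workflow, and the column names are assumed.

```python
# Sketch of the outlier criterion described above: exclude critical-region RTs
# more than 2 SD from the mean of the log-transformed critical-region RTs.
# Column names ('region', 'rt') are assumed, not the authors'.
import numpy as np
import pandas as pd

def exclude_rt_outliers(df: pd.DataFrame, sd_cutoff: float = 2.0) -> pd.DataFrame:
    """df has columns 'region' ('pre', 'critical', 'post') and 'rt' in ms."""
    df = df.copy()
    df["log_rt"] = np.log(df["rt"])
    crit = df.loc[df["region"] == "critical", "log_rt"]
    mean, sd = crit.mean(), crit.std()
    # Keep non-critical rows as-is; keep critical rows only if within the cutoff.
    keep = (df["region"] != "critical") | (df["log_rt"] - mean).abs().lt(sd_cutoff * sd)
    return df[keep]
```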
No significant differences were observed in log-transformed RTs between native English speakers (n = 41) and non-native English speakers [non-native n = 7, t (46) = 0.42, n.s.]. Similarly, no significant differences were observed between participants who reported musical training (n = 29) and those who reported no musical training [n = 19, t (46) = 1.53, n.s.]. To check for interactions between linguistic syntax and native English speaker experience, an ANOVA was run on the dependent variable of log-transformed RT with the fixed factor of linguistic syntax (congruent vs. incongruent) and the random factor of native English speaker status (native vs. non-native English speaker). No significant interaction between native English speaker status and linguistic syntax was observed [F (1, 92) = 0.53, MSE = 0.01, p = 0.47]. Similarly, to check for interactions between musical syntax and musical training, an ANOVA with the fixed factor of musical syntax (congruent vs. incongruent) and the random factor of musical training (musically trained vs. no musical training) showed no interaction between musical syntax and musical training [F (1, 92) = 0.091, MSE = 0.008, p = 0.764]. As we observed no main effects or interactions that were explainable by native English speaking experience or musical training, results were pooled between native and non-native English speakers, and between musically trained and untrained subjects.
A three-way ANOVA on the dependent variable of log-transformed RT during the critical region (log_RT_CR) was run with fixed factors of language (two levels: congruent and incongruent), music (two levels: congruent vs. incongruent), and rhythm (three levels: early, on-time, and late), with subject number as a random factor. Results showed a significant three-way interaction among the factors of linguistic, musical and rhythmic expectancies [F (2, 52) = 5.02, MSE = 0.008, p = 0.01], as well as a significant main effect of language [F (1, 54) = 12.5, MSE = 0.006, p = 0.001] and a significant main effect of rhythm [F (2, 99) = 13.2, MSE = 0.01, p < 0.001] and a marginally significant effect of music [F (1, 53) = 3.7, MSE = 0.01, p = 0.059]. Means and SDs of RTs are given in Table 1 for each condition, and in Table 2 for each cell.
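As a rough sketch of the model structure (not the authors' SPSS syntax), the 2 x 2 x 3 factorial on log-transformed critical-region RT could be expressed as follows. Here subject is entered as a blocking factor in an ordinary least-squares model, which only approximates the random-subject specification used in the paper, and the column names are assumed.

```python
# Minimal sketch of the design: language (2) x music (2) x rhythm (3) on log RT
# at the critical region, with subject entered as a blocking factor.
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def critical_region_anova(df):
    """df is a long-format table with columns subject, language, music, rhythm, log_rt_cr."""
    model = smf.ols(
        "log_rt_cr ~ C(language) * C(music) * C(rhythm) + C(subject)",
        data=df,
    ).fit()
    return anova_lm(model, typ=2)  # Type II sums of squares
```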
To investigate any possible interactive effects between music and language syntax at different rhythmic conditions, an RT difference was computed between RTs for the critical region and the pre-critical region. Two-way ANOVAs with fixed factors of language and music were used to test for interactions between music and language at each of the three rhythm conditions (early, on-time, and late). Results showed that for the rhythmically on-time condition, there was an interaction between language and music [F (1, 170) = 4.9, MSE = 4776.9, p = 0.027]. In contrast, the interaction between language and music was not significant at the rhythmically early condition [F (1, 170) = 0.27, MSE = 12882.0, p = 0.603] or the rhythmically late condition [F (1, 170) = 2.34, MSE = 5155.2, p = 0.127] (see Figure 2). These results suggest that the interaction between linguistic and musical syntax varies by rhythmic expectancy.
Further investigation of the degree to which factors interacted at the critical region required comparing RTs across the pre-critical, critical, and post-critical time regions. For this comparison, difference scores were calculated by subtracting linguistically congruent from linguistically incongruent RTs, and these difference scores were compared for musically in-key and out-of-key trials across time regions for each rhythmic condition (see Figure 3). We found a significant effect of time region: RT was longer in the critical region in the rhythmically early condition only [F (2, 92) = 4.67, p = 0.012]. In the rhythmically late condition only, musical syntax violations produced larger difference scores at the critical region; however, this difference was not significant. In the rhythmically early and on-time conditions, musically in-key trials yielded larger difference scores than musically out-of-key trials at the critical regions, although these differences were not significant (see Figure 3).
Discussion
Experiment 1 tested how rhythmic expectancy affected the processing of musical and linguistic syntax. Results from log-transformed RTs during the critical region (Table 2) and RT differences between critical and pre-critical regions (Figure 2) showed significant main effects of language and rhythm, a significant three-way interaction of language, music, and rhythm, and a significant two-way interaction between linguistic and musical syntax in the on-time condition only. These findings extend the results of past research (Slevc et al., 2009) to show that the sharing of cognitive resources for music and language appears specific to rhythmically expected events.
In contrast to critical region RTs, however, RT differences between linguistically incongruent and congruent trials (Figure 3) showed slower RTs within the critical region only during rhythmically early trials. The interaction patterns between musical and linguistic syntax over different time regions were inconclusive. This differs from the original findings of Slevc et al. (2009), who observed a synergistic interaction between musical syntax and time region on the reaction time difference between linguistically congruent minus incongruent trials, suggestive of a language and music interaction specifically during the critical region, when rhythm was not a factor. The less robust effect of critical region in this experiment may arise from spillover effects of linguistic incongruence that last beyond the critical region.
While neither SSIRH nor DAT makes specific predictions about this possible spillover effect, the main finding of a three-way interaction among language, music, and rhythm is generally consistent with both theoretical accounts and does suggest that any synergy or sharing of neural resources between music and language depends on rhythmic expectancy. Violations in rhythmic expectancy may disrupt the shared resources that are generally recruited for syntax processing, such as cognitive control (Slevc and Okada, 2015). As music and language both unfold over time, it stands to reason that our expectations for rhythm, defined here as the pattern of time intervals within a stimulus sequence (Grahn, 2012a), would govern any sharing of neural resources between music and language, as is consistent with the DAT (Large and Jones, 1999), as well as prior behavioral data on rhythmic entrainment (Jones et al., 2002) and studies on the neural underpinnings of rhythmic entrainment (Henry et al., 2015) and their effects on linguistic syntax processing (Schmidt-Kassow and Kotz, 2008).
The three-way interaction between language, music, and rhythm is accompanied by significant main effects of language and rhythm, and a marginally significant main effect of musical expectancy. The main effect of rhythm is similar to Jones et al. (2002) and others, in which perturbed temporal expectations resulted in longer RTs. Incongruent garden-path sentences elicit longer RTs during the critical region compared to their counterparts. This is consistent with Slevc et al. (2009) and Perruchet and Poulin-Charronnat (2013), as well as with previous uses of the self-paced reading time paradigm (Ferreira and Henderson, 1990). The main effect of musical expectancy was only marginally significant. While it is worth noting that Slevc et al. (2009) also did not report a significant main effect of musical expectancy, this weak effect may also be due to task instructions to pay close attention to the sentence segments rather than to the chord progressions heard over headphones. To determine whether music generally taxed cognitive or attentional resources away from subjects' monitoring of the sentence segments, it was necessary to compare comprehension accuracy with and without musical stimuli. This was a motivation for Experiment 2, in which the experiment was re-run without musical stimuli.
FIGURE 3 | Reading time differences: RT differences between linguistically congruent and incongruent conditions for musically expected and unexpected conditions at different time windows (pre-critical, critical, and post-critical) during rhythmically early (A), on-time (B), and late (C) conditions. Error bars show standard error.
While previous studies that used a self-paced reading paradigm (Ferreira and Henderson, 1990;Trueswell et al., 1993;Slevc et al., 2009;Perruchet and Poulin-Charronnat, 2013) required subjects to activate the next sentence segment as part of the task, in order to implement a factor of rhythmic expectancy our design featured a fixed inter-onset interval of sentence segments, and subjects were asked instead to press a button to indicate that they had read each segment. To our knowledge this type of implementation is new for psycholinguistic studies. One of the goals of Experiment 2 is to check for the validity of this type of implementation by testing for an effect of linguistic congruency with fixed IOI presentations of sentence segments, even in the absence of musical stimuli.
EXPERIMENT 2
Our modification of the standard self-paced reading paradigm resulted in fixed IOIs with the task of indicating that subjects had read the displayed sentence segment. This was a different task from the standard self-paced reading paradigm in which subjects' task was to advance the following sentence segment, and our task had yet to be confirmed as effective in detecting effects of linguistic syntax, even without the presence of musical stimuli. Furthermore, it was possible that the three-way and two-way interactions from Experiment 1 resulted from the complexity of our experimental design, and that the processing of multiple violations could affect attending and development of expectancy to task-irrelevant stimuli, as well as syntax processing per se. Experiment 2 thus follows up on Experiment 1 by investigating effects of rhythmic violations on comprehension and the processing of linguistic syntax stimuli, removing the variable of musical stimuli. A significant effect of linguistic syntax as well as rhythmic expectancy could validate the current manipulation of the self-paced reading paradigm, and a significant interaction between language and rhythm would suggest that the two domains tap into the same specific neural resources whereas no interaction might suggest more parallel processing.
Methods
In Experiment 2, participants again read sentences broken down into segments. Linguistic syntax expectancy was manipulated using syntactic garden-path sentences, and rhythmic expectancy was manipulated by presenting critical region segments early, on-time, or late.
Participants
A new group of 35 undergraduate students from Wesleyan University participated in Experiment 2 in return for course credit. All of these participants reported normal hearing, normal or corrected-to-normal vision, and no psychiatric or neurological disorders. Twenty-five participants (71.4%) reported having prior music training, averaging 5.9 years (SD = 3.0). Twenty (57.1%) participants identified as female, and 15 (42.9%) as male. Twenty-eight (80%) reported that their first language was English, and seven had a language other than English as their first language. Other than English, participants' first languages included Spanish, Chinese, and Thai. Twenty-four participants (68.6%) spoke more than one language. Informed consent was obtained from all subjects as approved by the Ethics Board of Psychology at Wesleyan University.
Materials
The second experiment was conducted in the Music, Imaging, and Neural Dynamics (MIND) Lab Suite in Judd Hall at Wesleyan University. An Apple iMac was used for the experiment, with MaxMSP software for all stimulus presentation and response collection.
Stimuli
The same experimental patch on MaxMSP and 12 experimental modules with the 48 sentences borrowed from Slevc et al. (2009) were used from the first experiment. However, to investigate how rhythmic violations would affect reading and interact with violations in linguistic syntax, independent of violations in musical syntax, the experimental patch was muted, so that chords were not heard with each sentence segment. The IOIs of sentence segments remained unaltered, and the same "yes" or "no" comprehension questions were also asked at the end of each trial, with randomized order of the trials for each subject.
Procedure
Similar to Experiment 1, participants were instructed to read sentences carefully, and hit the spacebar as soon as they had read a sentence segment. After running through a practice set, the participants began the actual experiment. The experimenter selected one of the twelve possible experimental modules at random. At the end of each trial, participants answered the "yes" or "no" comprehension question, cueing the next trial.
Data Analysis
RTs and comprehension question responses were saved as text files from MaxMSP, and imported into Microsoft Excel and SPSS for statistical analysis. Only RTs at the pre-critical, critical, and post-critical regions for each trial were used for analysis. Filler trials were, again, excluded from analysis (21 trials per subject). The same parameters and methods of outlier exclusion were used from the previous experiment, resulting in an RT range of 123.63-1121.40 ms. These criteria led to the exclusion of 19 (1.97%) observations in Experiment 2. RTs were also log-transformed to normal distribution for statistical tests.
Results between musically trained and non-musically trained subjects were pooled because music was not a factor in this experiment. No significant differences were observed in log-transformed RTs between native English speakers and non-native English speakers [t (34) = 0.96, n.s.]. Similarly, an ANOVA with the fixed factor of linguistic syntax and the random factor of native English experience showed no significant interaction [F (1, 523) = 1.059, MSE = 0.018, p = 0.30]. As we observed no differences that were explainable by native English speaking experience, results were pooled between native and non-native English speakers.
Results
Participants performed significantly above chance (M = 86.93%, s = 6.21) on comprehension questions in all conditions. To compare comprehension accuracy with and without musical stimulus presentation, a one-way ANOVA on average comprehension accuracy as the dependent variable was run with the factor of experiment, comparing average comprehension accuracy for subjects between Experiments 1 and 2. Results showed a significant main effect of experiment on comprehension accuracy, with subjects from Experiment 2 performing better on average on comprehension questions than those from Experiment 1 [F (1, 81) = 12.51, MSE = 0.01, p = 0.001]. This suggests that the added variable of musical expectancy further taxed participants' attention, drawing it away from the task-relevant comprehension questions in Experiment 1.
A two-way ANOVA on the dependent variable of log-transformed RT during the critical region was run with the factors of language and rhythm. Results showed a significant main effect of language [F (1, 34) = 7.69, MSE = 0.001, p = 0.009], a significant effect of rhythm [F (2, 68) = 9.69, MSE = 0.001, p < 0.001], and no significant two-way interaction [F (2, 68) = 1.07, MSE = 0.001, p = 0.83]. Mean and SD RTs are shown for each condition in Table 3 and for each cell in Table 4.
Discussion
Results from Experiment 2 showed main effects of language and rhythm, validating the use of this novel task. There was also a higher comprehension accuracy compared to Experiment 1, but no interactions between the two factors of linguistic syntax and rhythmic expectancy (see Table 4).
Experiment 2 further investigates the effects of rhythmic expectancy on linguistic syntax processing. When the factor of music was removed, main effects of language and rhythm were still observed. RTs were longer for syntactically unexpected sentences, replicating results from Experiment 1 as well as previous experiments that used the self-paced reading time paradigm (Ferreira and Henderson, 1990;Trueswell et al., 1993). Notably, this finding of longer RTs during syntactically unexpected critical regions within the garden path sentences provides a validation of the current adaptation of the self-paced reading time paradigm: while previous studies that used the self-paced reading time paradigm (Ferreira and Henderson, 1990;Trueswell et al., 1993;Slevc et al., 2009;Perruchet and Poulin-Charronnat, 2013) required subjects to advance the sentence segments manually, in the current study we adapted the paradigm with fixed IOIs to enable simultaneous investigations of rhythmic and linguistic syntax expectancy. Effects of rhythmic expectancy were also observed, as participants were slower to respond to critical regions presented earlier or later than the expected IOI. This replicates results from Experiment 1 and suggests that temporal entrainment was possible even with a visual-only reading task, and thus is not limited to the auditory modality. This effect of rhythm on visual processing is consistent with prior work on rhythmic effects of visual detection (Landau and Fries, 2012) and visual discrimination (Grahn, 2012b).
Although main effects of language and rhythm were observed, there was no significant interaction. An explanation for this lack of interaction could be that removing the factor of music resulted in the implemented violations no longer being sufficiently attention-demanding to lead to an interaction between the remaining factors, resulting in parallel processing of language and rhythm. In this view, the data suggest that rhythm affects a general, rather than a syntax-specific, pool of attentional resources. When the factor of music was removed, fewer attentional resources were demanded from the available pool, reducing the interactive effects of language and rhythm on each other and resulting in no interaction and higher comprehension accuracy. Alternatively, it could be that the rhythm only affected peripheral visual processing, without also affecting syntax processing at a central level. While the present experiment cannot tease apart these possible explanations, considering the extant literature on relationships between rhythm and grammar (Schmidt-Kassow and Kotz, 2009;Gordon et al., 2015b) it is clear that rhythm can affect central cognitive processes such as syntactical or grammatical computations.
Finally, another finding from Experiment 2 is that comprehension accuracy was higher compared to Experiment 1, suggesting that eliminating the factor of music restored some attentional resources to the task of comprehension. When the primary task was to read sentence segments for comprehension, musical stimuli in the background could have functioned as a distractor in a seeming dual-task condition of comprehending the entire sentence while responding to each segment (by pressing the spacebar).
Taken together, Experiment 2 helps to validate the paradigm used in Experiment 1. By simplifying the experiment to remove the factor of music, some attentional resources may have been restored, resulting in higher comprehension accuracy overall, as well as main effects of language and rhythm with no interaction between the two.
GENERAL DISCUSSION
The goal of the current study is to examine how rhythmic expectancy affects the processing of musical and linguistic syntax. Experiment 1 shows main effects of language, music, and rhythm, and specificity of the interaction between musical and linguistic syntax in the rhythmically expected condition only. These data patterns confirm that rhythm affects the sharing of cognitive resources for music and language, and are largely consistent with SSIRH (Patel, 2003) and DAT (Large and Jones, 1999). However, some of the follow-up analyses are inconclusive as to the exact nature of these interactions over time. In particular, only in rhythmically early trials did we find that the critical region significantly affected the difference in RT between incongruent and congruent language trials, with no significant interactions with musical expectancy unlike in Slevc et al. (2009). The reason for this specific effect of critical region in rhythmically early trials is unclear. It might arise from some spillover effects of linguistic incongruence that last beyond the critical region in rhythmically on-time and late trials. Alternatively, it might be a consequence of the complexity of our task in this experiment design. Although the significant main effects suggest that our manipulations were effective, this inconclusive data pattern may nevertheless result from low power due to relatively few trials per cell in the experiment design of Experiment 1.
As it is possible that results were due to the complexity of our design, Experiment 2 simplifies the design by eliminating the factor of music altogether. Results of Experiment 2 show superior comprehension accuracy compared to Experiment 1, and main effects of language and rhythm without an interaction between the two factors. The main effects help to validate our adaptation of the original self-paced reading time paradigm (Ferreira and Henderson, 1990;Trueswell et al., 1993) for research in rhythmic expectancy. The null interaction, when accompanied by significant main effects, suggests that given the task conditions and attentional allocation in Experiment 2, rhythm and language were processed in parallel and did not affect each other.
The superior comprehension accuracy in Experiment 2 may be explained by an increase in general attentional resources that are now available to subjects in Experiment 2 due to the removal of music as a factor. While it was not specifically tested whether these general attentional mechanisms may be the same or different from the temporal attention that is taxed by temporal perturbations of rhythmic expectancy, other literature on voluntary (endogenous) vs. involuntary (exogenous) attention might shed light on this distinction (Hafter et al., 2008;Prinzmetal et al., 2009). Voluntary or endogenous attention, such as that tested in dual-task situations when the task is to attend to one task while ignoring another, is similar to the general design of the present studies where subjects are instructed to pay attention to sentence segments while ignoring music that appears simultaneously. Involuntary or exogenous attention, in contrast, is driven by stimulus features such as rhythmic properties as tapped by our rhythmic expectancy manipulations. Previous research has shown that voluntary attention tends to affect accuracy whereas involuntary attention affects reaction time (Prinzmetal et al., 2005). This fits with our current findings where comprehension accuracy is affected by the removal of music as a factor (by comparing Experiments 1 and 2), whereas reading time is affected by rhythmic perturbations of the presentation of sentence segments.
In both experiments, effects of rhythm were observed in response to visually-presented sentence segments. While the rhythmic aspect of language might generally manifest itself more readily in the auditory than the visual modality, this effect observed from the visual manipulations suggests that rhythmic expectation for language is not limited to auditory processing, but may instead pervade the cognitive system in a modality-general manner, affecting even the visual modality. As visual detection and discrimination are both modulated by rhythm (Grahn, 2012b;Landau and Fries, 2012) and musical expectation can cross-modally affect visual processing (Escoffier and Tillmann, 2008), the current study provides support for the view that rhythmic, musical, and linguistic expectations are most likely not tied to the auditory modality, but instead affect the cognitive system more centrally.
Results appear to be independent of musical training and native English speaker experience. The link between linguistic and musical grammar processing could have been expected to vary by musical and linguistic expertise: children who perform well on phonemic or phonological tasks also outperform their counterparts in rhythmic discrimination as well as pitch awareness (Gordon et al., 2015b). At a neural level, brain areas and connections that subserve language are different in their structure and function among professional musicians (Sluming et al., 2002;Halwani et al., 2011), and some highly trained populations, such as jazz drummers, process rhythmic patterns in the supramarginal gyrus, a region of the brain that is thought to be involved in linguistic syntax (Herdener et al., 2014). Despite these effects of training and expertise, the current study found no effects of musical training or linguistic background, converging with the original study (Slevc et al., 2009) as well as prior reports of the language-like statistical learning of musical structure (Loui et al., 2010;Rohrmeier et al., 2011). It is possible that only some types of task performance, such as those that tap more sensory or perceptual resources, might be affected by music training via selective enhancement of auditory skills (Kraus and Chandrasekaran, 2010).
In sum, the current study demonstrates that rhythmic expectancy plays an important role in the shared processing of musical and linguistic structure. The subject of shared processing of musical and language structure has been central to music cognition, as is the question of how rhythm affects attentional entrainment. While providing support for an overlap in processing resources for musical and linguistic syntax, the current results also suggest that perturbations in rhythmicity of stimuli presentation tax these attentional resources. By offering a window into how perturbations of rhythmic and temporal expectancy affect musical and linguistic processing, results may be translatable toward better understanding and possibly designing interventions for populations with speech and language difficulties, such as children with atypical language development (Przybylski et al., 2013;Gordon et al., 2015a). Toward that goal, the specific neural underpinnings of these shared processing resources still remain to be addressed in future studies. | 2016-06-18T02:33:19.332Z | 2015-11-23T00:00:00.000 | {
"year": 2015,
"sha1": "4bae9e490ea1005e027376b6eda0f37da28d733b",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2015.01762/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4bae9e490ea1005e027376b6eda0f37da28d733b",
"s2fieldsofstudy": [
"Psychology",
"Linguistics"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
267482964 | pes2o/s2orc | v3-fos-license | Influenza A virus propagation requires the activation of the unfolded protein response and the accumulation of insoluble protein aggregates
Summary
Influenza A virus (IAV) employs multiple strategies to manipulate cellular mechanisms and support proper virion formation and propagation. In this study, we performed a detailed analysis of the interplay between IAV and the host cells' proteostasis throughout the entire infectious cycle. We reveal that IAV infection activates the inositol requiring enzyme 1 (IRE1) branch of the unfolded protein response, and that this activation is important for an efficient infection. We further observed the accumulation of virus-induced insoluble protein aggregates, containing both viral and host proteins, associated with a dysregulation of the host cell RNA metabolism. Our data indicate that this accumulation is important for IAV propagation and favors the final steps of the infection cycle, more specifically the virion assembly. These findings reveal additional mechanisms by which IAV disrupts host proteostasis and uncover new cellular targets that can be explored for the development of host-directed antiviral strategies.
INTRODUCTION
Viruses have developed multiple strategies to hijack and control host cellular activities to evade the immune response and support efficient virus particle production. [2][3] To counteract disturbances in the proteome and re-establish basal homeostasis, cells have evolved distinct surveillance mechanisms concerning protein biogenesis, folding, degradation, and sequestration of abnormal and potentially pathogenic conformers. 4 One of the most important mechanisms for protein stress detection is the ER unfolded protein response (UPR). 5 Upon activation of one or more of the key UPR signal activator proteins (inositol requiring enzyme (IRE1), protein kinase R (PKR)-like ER kinase (PERK) and activating transcription factor 6 (ATF6)), downstream signaling leads to the attenuation of general protein synthesis with selective protein translation (via PERK), preferential degradation of mRNA encoding for ER-localized proteins (via IRE1), and the controlled synthesis of stress-attenuating proteins, such as chaperones and folding catalysts (via PERK, IRE1 and ATF6). 6 The ER-associated protein degradation (ERAD) pathway may also be activated, allowing the clearance of misfolded and accumulated proteins in the ER. 7 Various cytoplasmic chaperones, such as heat shock proteins (HSPs), also assist the folding and assembly of newly synthesized and stress-damaged proteins, to prevent potentially pathogenic protein aggregation. 8 Misfolded or aggregated proteins may also be sequestered and compartmentalized into stress foci to hinder their toxic effects and minimize interference. 9 Several viruses lead to the accumulation of protein aggregates in specialized compartments, generally termed virus factories, in order to recruit and concentrate viral and host components and facilitate the molecular interactions required for essential steps of genome replication or virus particle assembly, while escaping immune recognition. 10 As part of the host antiviral response to infection, cytoplasmic aggregates, comprising both host and viral components, can also be formed and eventually become targeted for degradation to facilitate viral clearance and cellular recovery. 11 The importance of protein homeostasis in the context of viral infections is well recognized, supporting that further efforts to understand the underpinning molecular mechanisms may lead to antiviral control. With that in mind, in this work, we performed a detailed analysis of different proteostasis-related mechanisms in the course of influenza A virus (IAV) infection. IAV is the causative agent of most of the annual respiratory epidemics in humans, 12 with a high level of morbidity and mortality in the elderly and individuals with chronic disease conditions. 13 IAV belongs to the Orthomyxoviridae family, featuring a segmented genome composed of eight single-stranded negative-sense linear RNA segments, separately enclosed and wrapped by the viral nucleoprotein (NP) in the form of viral ribonucleoprotein complexes (vRNPs). 14 Upon successful binding to the host cell membrane, IAV virions are internalized, and the viral genome is released into the cytoplasm, being further imported to the nucleus where it undergoes transcription and replication. 15 The translation of viral proteins at the cytoplasmic and ER-associated ribosomes, together with the formation of vRNPs in the nucleus, culminates in the assembly of progeny virions that bud at the plasma membrane before being released from the cell. 15
Not much is known concerning the interplay between IAV and the host cell proteostasis, and the available information is somewhat contradictory. Regarding the UPR, while some authors reported the activation of the ATF6 pathway upon IAV infection 16 and others suggested the inhibition of PERK as a possible IAV antiviral strategy, 17 additional studies have described the stimulation of the IRE1 branch upon infection [18][19][20][21] with little or no concomitant activation of the ATF6 and PERK branches. 20,21 These variances may, however, be related to the use of different cell models and/or viral strains. Nevertheless, the UPR has been suggested as a putative target for host-directed antivirals against IAV infection, as its activation by thapsigargin has been shown to block viral replication. 22 More recently, UPR activation mediated by thiopurines led to the selective disruption of the synthesis and maturation of IAV glycoproteins HA and NA, eventually blocking viral replication. 23 Some studies have described the accumulation of IAV-derived amyloid-like fibers to induce cytotoxicity, 24 as well as the formation of aggresome-like structures in IAV-infected dendritic cells to evade the immune response. 25 It has also been shown that the accumulation of cytosolic stress granules, usually formed upon different kinds of cellular stress, is prevented upon IAV infection. 26 To clarify how the host cell proteostasis impacts IAV infection, we evaluated how specific proteostasis-related mechanisms are affected throughout the main phases of a single IAV infectious cycle. Our results not only demonstrate that IAV interferes with the UPR at different stages of infection, mainly through its IRE1 branch, but also that this interplay is required for proper virus particle formation and propagation. Importantly, we demonstrate that, upon high rates of viral protein translation, IAV induces the accumulation of insoluble protein aggregates at the cytosol composed of both host and viral proteins. By chemically disrupting the assembly of these protein aggregates, we demonstrate that this process is essential for efficient viral protein production and proper formation of infectious virus particles, supporting the idea that targeting proteostasis-related mechanisms may constitute a valid therapeutic approach to tackle and decrease viral propagation.
Influenza A virus triggers the IRE1 branch of the unfolded protein response to ensure efficient viral propagation
To elucidate which UPR pathways are specifically activated or remodeled throughout the different steps of the virus infectious cycle, we performed a detailed analysis of the distinct UPR signaling pathways (depicted in Figure 1A) at different time points during a single cycle of infection. This comparative approach allowed assessing the interplay between infection and host proteostasis mechanisms and its potential to be used as a therapeutic target.
Adenocarcinomic human alveolar basal epithelial (A549) cells were infected with influenza A/Puerto Rico/8/34 (IAV PR8) 27,28 and UPR-related genes or proteins were assessed at different time-points post infection, reflecting the most relevant steps of the virus life cycle (depicted in Figure 1B), namely 2, 4, 6, 8 and 12 hpi. To investigate the PERK UPR branch, we evaluated the expression/activation of several proteins belonging to this pathway, namely eIF2a, ATF4, CHOP and GADD34. eIF2a, which catalyzes an early step of protein synthesis initiation, becomes phosphorylated upon PERK activation to induce global translation arrest, while allowing the selective translation of different proteins such as ATF4. 5 ATF4 further induces both CHOP and GADD34 expression to ultimately counteract eIF2a phosphorylation in a negative loop to restore translation levels (Figure 1A). Quantification of the expression ratio between the phosphorylated and unphosphorylated eIF2a (Figure 1C) showed an increase in eIF2a phosphorylation already at 2 hpi, with this activation being maintained throughout the remaining infection cycle, as reported by others. 29 ATF4 expression increases at the initial steps of infection and decreases thereafter (Figures 1C and S1A). Similarly, the expression of both CHOP and GADD34 increases at early time points and subsequently decreases progressively until the end of infection (Figure 1C), although maintaining a higher level than in uninfected (mock) cells. However, the characteristic band smear that indicates PERK activation is not detected by Western blotting upon infection (Figure 1C). To further elucidate the importance of the PERK pathway for IAV infection, cells were infected with IAV PR8 in the presence of a specific inhibitor of PERK, GSK-2656157 (as in 30). Upon assessment of the amount of infectious virus particles produced in the presence and absence of this inhibitor, our results clearly indicate that PERK inhibition does not affect virus production (Figure 1D). In addition, upon infection, eIF2a phosphorylation is still observed upon incubation with the PERK inhibitor, and the expression levels of ATF4 and GADD34 are also maintained (Figures 1C and 1E). Overall, this indicates that the observed eIF2a phosphorylation upon IAV infection is unlikely to result from PERK activation, but rather from the activation of another kinase.
To analyze the relevance of the UPR ATF6 pathway for IAV infection, we measured the BiP/grp78 protein and mRNA levels, previously suggested to serve as an indirect measure of ATF6 activation. 31,32 Our results demonstrate no alteration of BiP/grp78 levels (Figures 1F, S1A, and S1B), suggesting that the ATF6 branch is not required during infection with IAV. To validate this observation, we infected the cells upon incubation either with a broad-spectrum serine protease inhibitor commonly used to inhibit ATF6, 4-(2-aminoethyl) benzenesulfonyl fluoride (AEBSF) (as in 20,33), or PF-429242, 32 which targets the S1P enzyme that cleaves ATF6 in the Golgi (Figure 1G). Quantification of the formation of infectious virus particles by plaque assay demonstrated a significant decrease in the number of new virus particles upon inhibition of the ATF6 pathway with AEBSF, but not with PF-429242 (Figure 1H). Altogether, and considering that AEBSF has a less specific and wider activity, these data suggest that ATF6 is not relevant for the efficient replication and propagation of IAV. Lastly, the activation of the IRE1 pathway was assessed via the phosphorylation of IRE1 and the splicing of XBP1. We observed an increase in the phosphorylation of IRE1 starting at 6 hpi up to 12 hpi with IAV PR8 (Figures 1I and S1B), reflecting a stage of the life cycle where viral proteins are actively being translated. Coherently, the levels of spliced XBP1 were significantly increased (Figure 1J), particularly at 8 hpi. In this case, cells treated with thapsigargin were used as a positive control, with a value of XBP1 splicing of 7.51 ± 1.35 (mean ± SEM). To further infer the importance of this activation for viral propagation, we have quantified infectious virus particle formation in the presence of a specific inhibitor of this pathway, 4μ8C (as in 20,34) (Figures 1K and 1L). In the presence of this inhibitor, virus production was attenuated by approximately 40% (Figure 1L), indicating that the activation of the UPR IRE1 branch by IAV is important for IAV particle formation and propagation.
Collectively, our data indicate that UPR activation occurs at different stages of infection, and that an efficient IRE1 pathway activation is required for the proper progression of the IAV PR8 infection cycle.
Influenza A virus induces the accumulation of cytosolic protein aggregates
We proceeded with our study by analyzing whether IAV infection leads to proteostasis imbalances through the accumulation of insoluble proteins. A549 cells were infected with IAV PR8 and samples were harvested to further analyze the detergent-insoluble protein fractions by SDS-PAGE, as previously reported. 35 Insoluble proteins accumulate in cells infected with IAV (Figure 2A), particularly at 8 hpi when a considerable amount of the viral RNA has been transcribed and replicated, viral transcripts and viral progeny RNA (in the form of vRNPs) have exited the nucleus, and a substantial amount of viral proteins is being synthesized. Our results indicate proteostasis impairment that overlaps with the activation of the IRE1 pathway. This effect appears to be reversible at later times post-infection (Figure 2A).
Aiming at visualizing and intracellularly localizing these protein aggregates, we stained infected A549 cells with Proteostat, a dye that specifically marks misfolded and aggregated proteins (Figure 2B). At 8 hpi, Proteostat stains several cytoplasmic protein aggregates (Figure 2B), in approximately 53% ± 5.7% of the infected cells, mainly localized at the perinuclear area. To solidify these results, we have performed the same experiment in another cell type, a previously established HeLa cell line that stably expresses a GFP-tagged protein misfolding sensor HSPB1:GFP (HeLa HSPB1:GFP cells). 36 Similarly to A549 cells, the formation of several cytoplasmic protein aggregates stained by both Proteostat and HSPB1:GFP was observed (Figure 2C), in approximately 38% ± 0.6% of the infected cells. Cellular proteins that are prone to aggregate are usually aberrant and targeted for degradation or tend to accumulate in cytosolic stress granules and promote the conversion into aberrant aggresome-like structures. With this in mind, and aiming to further characterize the virus-induced protein aggregates detected by the Proteostat dye, we have analyzed the amount of mono- and polyubiquitinated proteins (stained with FK2 antibody), stress granule formation (with TIA-R), using specific stressors as positive controls, as well as autophagic activity (with p62) at a time of the infection when we observed the formation of Proteostat-stained aggresomes. We found that the mono- and polyubiquitination of cytoplasmic proteins (Figure S2A) increases during infection, but it does not colocalize with Proteostat staining. We also demonstrated that the observed aggregates do not correspond to stress granules (Figure S2B). Lastly, p62 foci seem to be inhibited during infection (Figure S2C) and do not resemble the IAV-induced aggresomes. Altogether, these results reveal that IAV induces the formation of cytoplasmic protein aggregates, at a time point of the infection cycle where an intense production of viral proteins is occurring, 37 as well as the activation of the UPR IRE1 pathway.
The insoluble protein fraction of infected cells contains viral proteins and is enriched in host translation-related proteins
To obtain further insights into the composition of these protein aggregates, we performed liquid chromatography-tandem mass spectrometry analysis (LC-MS/MS) of the detergent-insoluble protein fraction of mock and IAV PR8-infected cells at 8 hpi (Figure 3A).
Five viral proteins were found in the insoluble protein fraction of infected cells, namely the RNA-directed RNA polymerase catalytic subunit (PB1), matrix protein 1 (M1), nucleoprotein (NP), non-structural protein 1 (NS1) and polymerase basic protein 2 (PB2) (Table S1). A total of 156 and 263 human proteins were identified with at least two unique peptides in mock and IAV PR8-infected cells, respectively (Table S2), corroborating our previous results that show an increase in the level of insoluble proteins at this stage of infection. Of the identified proteins, 123 were solely found in the insoluble fraction of cells infected with IAV PR8 (Figure 3B). A gene ontology analysis using the Cytoscape plug-in ClueGo revealed that these proteins belong to a specific set of biological processes related to translation, mRNA splicing and the ribonucleoprotein complex biogenesis (Figure 3B).
Through a complementary analysis, we determined which host insoluble proteins were enriched in IAV PR8-infected cells in comparison to non-infected cells. Proteins with an abundance ratio (IAV PR8 to mock) greater than 1.5 and an adjusted p value smaller than 0.05 were considered enriched. This analysis allowed the identification of 78 host proteins whose abundance is increased in the insoluble protein fraction upon infection (Table S3). Gene ontology analysis using Cytoscape showed that these proteins are involved essentially in processes related to ribonucleoprotein complex export from the nucleus and mRNA stabilization, but also in processes such as the regulation of mitophagy or the tricarboxylic acid metabolism (Figure 3C; Table S4A). An additional analysis using the STRING database showed that the network of proteins enriched in the insoluble fraction of IAV PR8-infected cells is related to nuclear DNA replication, ribonucleoprotein export from the nucleus, and several processes related to RNA metabolism (Table S4B). These results indicate that the solubility of several host proteins related to protein translation and RNA processing is altered at this time point of infection. Whether this occurs to benefit viral infection or as a host strategy to counteract infection is still unclear.
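As an illustration of the enrichment criterion used here, the following Python sketch applies the same thresholds (abundance ratio > 1.5, adjusted p < 0.05) to a protein abundance table; the file and column names are hypothetical placeholders rather than the actual Proteome Discoverer output format.

import pandas as pd

# Hypothetical table with one row per host protein; column names are placeholders.
df = pd.read_csv("insoluble_fraction_abundances.csv")  # columns: protein, ratio_pr8_vs_mock, adj_p_value

# Enrichment criterion used in the text: abundance ratio (IAV PR8 / mock) > 1.5 and adjusted p < 0.05.
enriched = df[(df["ratio_pr8_vs_mock"] > 1.5) & (df["adj_p_value"] < 0.05)]

print(f"{len(enriched)} host proteins enriched in the insoluble fraction upon infection")
enriched.to_csv("enriched_insoluble_proteins.csv", index=False)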
Viral and host proteins found to be enriched in the insoluble fraction of IAV PR8-infected samples in comparison to mock (Tables S1 and S3) were then assessed for their natural propensity to aggregate using PASTA 2.0, 38 a web server that predicts protein aggregation propensity by estimating fibril formation. Beta-amyloid and bovine serum albumin (BSA) were used as positive and negative controls of aggregation propensity, respectively, and to define the threshold of best energy values for further comparison analyses. As depicted in Table 1, none of the IAV proteins identified in the insoluble fractions of infected cells, namely PB1, M1, NP, NS1 and PB2, has a propensity to aggregate. On the other hand, only 10 of the 78 host proteins were predicted to have a propensity to aggregate (defined as a best energy < −11), which corresponds to 12.8% of the detected host proteins in this fraction. Although no functional enrichment was identified among these 10 proteins, meaning they are not commonly related to any specific cellular mechanism, our results indicate that most of the proteins detected in the insoluble fractions of infected cells are indeed accumulating as a result of the viral infection.
The formation of IAV-induced protein aggregates favors infection
To understand if the observed virus-induced accumulation of insoluble protein aggregates is relevant for the process of virus particle formation, we disturbed the assembly of these aggregates and analyzed the consequences for viral propagation. For that, we used HASQ-6Cl (C42H52ClNO; 1H- and 13C-NMR spectra in Figure S3), a steroid-quinoline hybrid that has been shown to disrupt and revert protein aggregation processes (named 6c in ref. 39) (Figure 4A). This compound was tested for cell viability and toxicity and the ideal experimental concentration was determined (Figure S3).
A549 cells were treated with 50 µM of HASQ-6Cl 12 h prior to infection with IAV PR8 and harvested at 8 hpi (the time point after infection at which we observed the highest accumulation of insoluble proteins) to analyze the insoluble protein fraction. The virus-induced levels of insoluble proteins decreased by around 50% in the presence of HASQ-6Cl (Figure 4B), demonstrating that this compound is also able to inhibit the formation of IAV-induced protein aggregates. The effect of this inhibition of protein aggregation on IAV infectious particle formation was analyzed by plaque assay at 16 hpi. A decrease of around 50% in the number of infectious viral particles in the presence of HASQ-6Cl was observed (Figure 4C), indicating that the virus infection cycle is disturbed by the disaggregating compound. To infer the specific infection step that was affected by the compound, we analyzed the viral NP mean fluorescence intensity (MFI) in the nucleus of IAV PR8-infected A549 cells, in the presence and absence of HASQ-6Cl. Our results indicate that the amount of nuclear NP is not affected by HASQ-6Cl at 4 or 8 hpi (Figure 4D) and further show that HASQ-6Cl does not interfere either with IAV entry into the cell or with the export of vRNPs from the nucleus at later times post-infection, suggesting a downstream effect on the viral life cycle.
Although these results cumulatively suggest that the formation of these protein aggregates may be a virus-induced mechanism to favor viral protein production, one should also ask whether the decrease in virus titer in the presence of HASQ-6Cl could instead result from an effect on a host mechanism that further impairs viral infection. As we had previously demonstrated that the activation of the UPR IRE1 branch is relevant for proper IAV particle formation, we questioned whether HASQ-6Cl would exert some effect over this pathway and consequently cause the observed decrease in virus titers. To investigate this hypothesis, we measured the splicing of XBP1 mRNA upon 8 h of infection with IAV PR8 in the presence or absence of HASQ-6Cl. Figure 5A shows that there were no observed differences in XBP1 splicing in the presence of HASQ-6Cl, demonstrating that the compound has no effect on the level of IRE1 pathway induction by the virus. Our results also show no significant changes in the level of eIF2α phosphorylation or BiP expression throughout infection in the presence of HASQ-6Cl (Figure 5B). Another possibility might be that this compound could, by itself, enhance the cellular antiviral response against IAV. Like most RNA viruses, IAV is mainly sensed by the RIG-I/MAVS antiviral signaling pathway, 40 which culminates in the production of interferons (IFNs) or IFN-stimulated genes (ISGs) that restrict the virus life cycle and alert the neighboring cells to the presence of the pathogen. 41 To determine whether HASQ-6Cl intensifies this antiviral signaling response, we analyzed the levels of IFNβ and the ISG IFN-induced protein with tetratricopeptide repeats 1 (IFIT1) in A549 cells upon transfection of RIG-I-CARD (a constitutively active form of RIG-I) [42][43][44] in the absence or presence of HASQ-6Cl. Our results demonstrate that there is no significant difference in IFNβ or IFIT1 production in the presence of HASQ-6Cl (Figure 5C). To solidify these results, we used the same experimental setup to analyze the activation of the signal transducer and activator of transcription 1 (STAT1, which signals IFNs produced by the neighboring cells) by western blotting using a specific antibody against the phosphorylated, activated form of STAT1 (pSTAT1). As shown in Figure 5D, there are no significant differences in the amount of pSTAT1 in the presence or absence of HASQ-6Cl upon stimulation with RIG-I-CARD. Finally, to confirm that HASQ-6Cl has no influence on the host immune response against IAV, we performed the same analysis in the context of IAV PR8 infection, in the presence and absence of the compound. Also in these conditions, there was no significant change in the levels of IFNβ and IFIT1 mRNA (Figure 5E). These results demonstrate that there is no specific interference of HASQ-6Cl with the host immune response dependent on MAVS signaling.
Altogether, up to this point, our results indicated that HASQ-6Cl's action on physically preventing the formation of the virus-induced protein aggregates might be the real cause of the delay in virus particle formation and the consequent decrease in virus titers.These results, hence, also solidify the hypothesis that the formation of these protein aggregates is in fact stimulated by the virus, as an important part of its infection cycle.
We further characterized by mass spectrometry the insoluble fraction of A549 cells upon 8 hpi with IAV PR8, after pre-treatment with HASQ-6Cl.We first performed comparative analyses of the abundances of each viral protein in the absence or in the presence of HASQ-6Cl during infection, considering three independent replicates.This analysis demonstrates that the proteins that were previously identified as present in the insoluble fraction of IAV PR8-infected cells (NP, NS1, PB1, PB2, M1) are less abundantly present when cells are treated with HASQ-6Cl prior to infection (Figure 5F; Table S5).Afterward, to understand which host proteins were deregulated upon infection in the presence of HASQ-6Cl, we considered the proteins whose abundance ratio between IAV PR8-infected cells pre-treated with HASQ-6Cl (IAV PR8 + HASQ-6Cl) and IAV PR8-infected cells was below 0.65 with adjusted p value <0.05.A gene ontology analysis using both Cytoscape and STRING databases showed that most of the identified proteins are involved in the negative regulation of transcription and ribonucleoprotein complex biogenesis (Figure 5G; Tables S6 and S7).
These results support our previous observations that indicate that the accumulation of both viral proteins and host proteins related to the ribonucleoprotein complex biogenesis (including mRNA splicing, ribonucleoprotein complex assembly or RNA processing) in insoluble aggregates plays an important role during IAV life cycle.
To further complete this study and confirm that the effect of HASQ-6Cl on infection is due to its protein disaggregation properties, we performed similar analyses upon infection with the non-segmented negative-strand RNA virus vesicular stomatitis virus (VSV).As shown in Figure 6A, and in contrast to IAV, VSV does not induce a significant increase in the amount of insoluble proteins during infection.The quantification of the formation of VSV infectious particles by plaque assay in the presence of HASQ-6Cl rendered no significant differences when compared to the particles produced in the absence of this compound (Figure 6B).These results hence suggest that the lack of effect of HASQ-6Cl on VSV infection is related to the absence of viral-induced protein aggregation and solidify our previous results on IAV, demonstrating that the observed accumulation of insoluble proteins is a specific characteristic of IAV infection which is essential for an efficient viral propagation.
The disruption of virus-induced aggregates inhibits the proper assembly of new influenza virions
We further analyzed the effect of the inhibition of protein aggregate formation on the expression of different IAV proteins throughout the whole infectious cycle (Figure 7A). To that end, A549 cells were infected with IAV PR8 in the absence or presence of HASQ-6Cl, and the expression of IAV proteins was assessed at several times post infection by western blot.
As expected, most of the viral proteins, namely PB1, PB2 and NP, are already detected after 6 hpi, while others, such as M1 and M2, are only expressed at later infection stages (Figure 7A). Except for M1 at 12 hpi, the expression of none of the viral proteins was affected by the presence of HASQ-6Cl, indicating that neither the compound nor the presence of protein aggregates influences the general viral protein translation. Nevertheless, the formation of insoluble protein aggregates upon infection seems to be important for the correct expression, or at least stabilization, of M1. As M1 mediates viral assembly and budding at the plasma membrane of infected cells by mediating the encapsidation of vRNPs into the membrane envelope, [45][46][47] we hypothesize that the formation of these protein aggregates somehow favors the final assembly of the new virus particles.
The molecular details of how the IAV eight-partite genome assembles are still not well understood. As infection progresses, viral inclusions augment in size. To understand whether the vesicular transport of vRNPs is affected by the presence of HASQ-6Cl, A549 cells infected with IAV PR8, in the presence and absence of HASQ-6Cl, were stained against viral NP and host Rab11 and analyzed by immunofluorescence (Figure 7B). Rab11-positive viral inclusion areas were visualized by confocal microscopy and quantified using the ImageJ software.
The vesicles were classified into groups according to size, based on previous reports 27,50,51 : we set four intervals comprising vesicles with a size (1) up to 0.30 µm², (2) 0.30-0.60 µm², (3) 0.60-0.90 µm², and (4) larger than 0.90 µm². The frequency distribution of viral inclusions at 8 hpi is not significantly different in the presence or in the absence of HASQ-6Cl, although at 12 hpi there is an increase in the smaller and a decrease in the larger viral inclusions, indicating that HASQ-6Cl is to some extent interfering with the formation of viral inclusions and, therefore, with the final assembly of new IAV virions.
DISCUSSION
During IAV particle formation, a large amount of viral proteins is synthesized in a relatively short time, whereby protein folding can become a limiting step for their active conformation and trafficking. Proteostasis disruption, including the one triggered by viral infections, may lead to the accumulation of misfolded proteins and the induction of ER stress, activating the UPR. 52 In this study we performed a detailed analysis of the activation and importance of each of the three (PERK-, ATF6-, and IRE1-dependent) UPR pathways in the context of a single cycle of IAV infection. This approach differs from previous studies (which, as explained above, present somewhat contradicting data) that typically investigated the relevance of the UPR in IAV infection at fewer and mostly late time points post infection (24 or 48 h post infection), 16,20 often comprising cells in diverse stages/cycles of infection. We determined that, although the levels of eIF2α phosphorylation, alongside the expression of ATF4, GADD34, and CHOP, increase upon infection, PERK is not essential for viral propagation, as the specific PERK inhibitor GSK-2656157 had no effect on the formation of new infectious virus particles. Furthermore, IAV infection in the presence of GSK-2656157 led to a pattern of eIF2α phosphorylation (and consequent ATF4 and GADD34 expression) similar to the one observed in non-treated infected cells. eIF2α phosphorylation during IAV infection may hence result from the activation of another signaling pathway independent of PERK. The integrated stress response (ISR) combines converging stress response pathways against diverse stimuli that culminate in eIF2α phosphorylation and its downstream signaling. 53,54 Besides PERK, other kinases are implicated in the ISR, namely protein kinase R (PKR), general control nonderepressible 2 (GCN2), and heme-regulated inhibitor (HRI). As PKR has been shown to be activated in response to IAV infection, 55 the observed changes are most likely due to the activation of this specific kinase. However, some IAV proteins can counteract this host defense factor, [56][57][58] perhaps explaining, together with the ATF4/GADD34-driven negative feedback mechanism, why we observe a transient activation of this pathway.
Although we were not able to visualize the specific activation of ATF6 during a single infection cycle, using the levels of grp78/BiP mRNA/protein expression as an indirect measure of ATF6 activation, 31,32 we observed no stimulation of the ATF6 UPR branch during infection. By using two different ATF6 inhibitors to study the relevance of this branch for the propagation of IAV, we were able to demonstrate that, in concert with the lack of alteration in BiP expression levels throughout the infectious cycle, the ATF6 UPR branch does not play an important role during IAV infection, as suggested in previous studies. 20,21 The activation of the IRE1 pathway was observed upon infection through the increase of IRE1 phosphorylation and the splicing of the downstream factor XBP1. Inhibition of this pathway using the specific inhibitor 4µ8C results in a significant decrease in the formation of new virus particles. Our results are in agreement with and complementary to those obtained by Hassan et al., 21 where the activation of the IRE1 pathway by XBP1 splicing was observed in HTBE cells, and by Schmoldt et al., 20 who reported that a functional IRE1 is necessary for viral NP expression. The importance of the UPR IRE1/XBP1 pathway for IAV infection is most likely due to an XBP1-dependent upregulation of genes involved in phospholipid synthesis, chaperone expression and the activation of the ERAD machinery, which ultimately may lead to increased ER size and capacity to help viral protein folding and maturation. These results, together with the observation that the addition of an ER stressor such as thapsigargin during infection restricts the expression of viral NP, reinforce the idea that IAV might benefit from low levels of ER stress, possibly to facilitate the folding of glycoproteins at the ER, while avoiding further, potentially deleterious UPR activation. 18 The stress-induced disruption of cellular proteostasis often results in the accumulation of insoluble proteins and toxic protein aggregates, which is proposed as a transversal hallmark of several pathological conditions. 59,60 In the context of a viral infection, the accumulation of proteins into aggregates can arise from the formation of specialized sites of viral replication and assembly (generally termed viral factories) or even as part of the host cell antiviral immune response. 11,61,62 In this study we investigated whether IAV induces the accumulation of insoluble protein aggregates and disturbs the host proteostasis. Our results show that, concomitantly with a high level of viral protein translation, IAV induces an increase in the amount of the cellular insoluble protein fraction and the formation of protein aggregates, mainly at the perinuclear area. This occurs at a point of infection that coincides with the activation of the UPR IRE1 pathway and with a high viral protein translation rate, while host protein synthesis is considerably decreased. 37
To elucidate the composition, origin and significance of the observed IAV-induced protein aggregation, we analyzed the insoluble protein fractions by LC-MS/MS. The IAV proteins NP, NS1, PB1, PB2, and M1 accumulate in the insoluble protein fraction, as well as host proteins associated with mechanisms of protein complex assembly and localization, mRNA processing and protein translation. A recent study has demonstrated that, during IAV infection, several host proteins undergo changes in solubility, and found that several viral proteins, essentially vRNP components, become strongly insoluble with infection, 63 strengthening our results.
Additionally, a previous study found several proteins belonging to the above-mentioned pathways immunoprecipitated with IAV H7N9 NP. 64 It is possible that NP recruits these proteins to enhance viral transcript translation and that they generate large insoluble complexes. Our results may also indicate that these translation-related processes are deregulated upon the export of viral ribonucleoproteins from the nucleus and the viral host shutoff, preventing host proteins from being fully synthesized, which may ultimately lead to ribosome stalling and the accumulation of aberrant insoluble host proteins.
To better understand the biological relevance of the accumulation of insoluble protein aggregates in the context of IAV infection, we used a tool to inhibit this aggregation, the hybrid chemical compound HASQ-6Cl (named 6c in ref. 39). The combination of quinolines and steroids in one single chemical entity generates this new hybrid molecule, capable of interacting with protein aggregates through π-π (quinoline fragment), hydrophobic (steroid fragment) and hydrogen-bonding interactions. Both fragments of HASQ-6Cl have important features that allow them to interact with β-sheets, inhibiting the consequent aggregation process. Such versatility is critical to interact with protein aggregates of yet unknown origin, such as those observed in this work. The design strategy of HASQ-6Cl followed a "framework combination" approach, avoiding the use of cleavable linkers, to retain the molecule's integrity within the cellular medium. Our results demonstrate that HASQ-6Cl decreases the level of IAV-induced protein aggregation. Importantly, we have shown that HASQ-6Cl interferes with the virus life cycle at the viral protein production phase, which coincides with the formation of protein aggregates, and consequently induces a decrease of about 50% in the production of infectious viral particles. This effect is likely due to the diminished formation of the virus-induced protein aggregates, as no interference of HASQ-6Cl with the splicing of XBP1, and hence with IRE1 activation, nor with the RIG-I/MAVS antiviral signaling was observed.
Mass spectrometry analysis of the insoluble fractions isolated upon IAV infection in the presence of HASQ-6Cl revealed that the viral proteins previously identified in the insoluble fraction of infected cells are less abundant in these conditions. Indeed, the presence of HASQ-6Cl during infection decreases the abundance of several viral proteins with different roles during the infection cycle, namely the vRNP components (PB1, PB2, and NP), NS1 and M1, which were shown here and by others 63 to become more insoluble upon infection with IAV PR8. Changes were also observed at the level of the host proteins, with most of the identified proteins being involved in the negative regulation of transcription and ribonucleoprotein complex biogenesis. These results support the hypothesis that the accumulation of both viral and host proteins in insoluble aggregates, or changes in their solubility, plays an important role during IAV infection. Furthermore, they suggest that HASQ-6Cl might have multiple and broad targets within cells, preventing the accumulation of proteins in the insoluble fraction and thereby restricting infection.
The proper assembly and budding of new virions require the intracellular transport of progeny vRNPs from the nucleus to the plasma membrane. Recently, it was shown that nucleozin, a well-studied vRNP pharmacological modulator, can affect vRNP solubility in a Rab11-dependent manner, acting by hardening IAV inclusions to prevent efficient replication. 63 Our results show that the formation of these viral inclusions can be impacted by HASQ-6Cl, which decreases their size, and this is ultimately reflected in a decrease in the number of infectious IAV particles produced after a single replication cycle. Targeting this mechanism may be of particular importance as, besides being crucial for the correct assembly of every genome segment to form a fully infectious viral particle, it is also decisive for the genetic reassortment upon a co-infection with different IAV strains from distinct hosts and the emergence of novel viruses with pandemic potential. 69 To complete our studies, and to determine whether HASQ-6Cl, and its consequent inhibition of protein aggregation, affected the production of specific viral proteins, we assessed their expression at several times post infection in the absence or presence of the chemical compound. We detected no difference in the production of viral proteins in the presence of HASQ-6Cl, with the exception of M1. M1 is the most abundant protein in virions and mediates viral assembly and the budding of vRNPs at the plasma membrane of infected cells. 45,46 This protein has different roles at different stages of the infection, and it is speculated that it can change its conformational and oligomerization state depending on its functional state. 70 One hypothetical function of the M1 oligomer is the shielding of newly synthesized vRNA/vRNPs during transport through the host cell's cytosol after nuclear export. One can then further hypothesize that the presence of HASQ-6Cl can to some extent prevent the oligomerization of M1, indirectly manipulating its function during assembly through vRNP destabilization.
The formation of these protein aggregates and their relevance for viral propagation seems to be specific for IAV, or at least not common to all RNA viruses, as we have shown that VSV infection does not lead to an accumulation of insoluble proteins, and that the presence of HASQ-6Cl does not affect the production of infectious VSV particles.
Overall, our findings demonstrate that IAV manipulates the host cellular processes by activating the UPR and by inducing the accumulation of insoluble viral proteins and host proteins that are generally related to RNA processing.The formation of these aggregates is beneficial for the virus and seems to be required for the correct assembly of viral particles.Interfering with UPR pathways or chemically avoiding the assembly of such aggregates is sufficient to hinder viral propagation.Our results have, hence, uncovered specific IAV-host interaction mechanisms that should be further explored for the development of novel host-directed IAV antiviral therapies.
supernatant with a low number of viruses was stored at −80 °C and further quantified by plaque assay. To prepare virus stocks, MDCK cells were infected with P0 aliquots in SFM supplemented with 0.14% BSA and 1 µg/mL trypsin-TPCK, at a MOI of 0.01. After virus adsorption for one hour, cells were cultured for two days at 37 °C, 5% CO2. After centrifugation for 5 min at 3000 rpm, clarified supernatants were aliquoted and stored at −80 °C. Virus stock or sample titers were determined by plaque assay.
VSV was propagated in highly confluent Vero cells, using a MOI of 0.001. The virus inoculum was diluted in SFM and, after virus adsorption for one hour, an incubation of 18 h ensued. A centrifugation for 5 min at 500 g, 4 °C was carried out to collect the clarified supernatant. The supernatant was then ultracentrifuged for 90 min at 24,000 rpm, 4 °C and, once the supernatant was discarded, 0.35 mL of NTE buffer (0.1 M NaCl, 1 mM EDTA, 0.1 M Tris pH 7.4) was added and incubated on ice overnight. The next day, 4 mL of ice-cold 10% sucrose in NTE was overlaid with 1 mL of virus suspension and ultracentrifuged for 60 min at 40,500 rpm, 4 °C. The supernatant was removed, and the pellet was incubated with 0.5 mL of NTE buffer on ice overnight. Finally, the virus suspension was aliquoted and stored at −80 °C. Virus stock or sample titers were determined by plaque assay.
Plaque assay and infection experiments
To quantify IAV, MDCK cells were incubated with 10-fold dilutions of virus suspension and allowed to adsorb for 1 h. Cells were then cultured in 50% Avicel-containing SFM supplemented with 0.14% BSA and 1 µg/mL trypsin-TPCK for 1.5 to 2 days. Cellular monolayers were fixed in 4% paraformaldehyde and stained with 0.1% toluidine blue. To quantify VSV, Vero cells were used, the Avicel-containing SFM medium was in this case supplemented with 1% FBS, and the protocol ran for 1 day.
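For reference, the titer obtained from such a plaque assay follows the standard PFU/mL arithmetic; the short sketch below illustrates the calculation with invented example numbers, not values from this study.

def plaque_titer_pfu_per_ml(plaque_count, dilution_factor, inoculum_volume_ml):
    """Standard plaque-assay titer: plaques divided by (dilution x inoculated volume)."""
    return plaque_count / (dilution_factor * inoculum_volume_ml)

# Example with arbitrary numbers: 42 plaques counted in a well inoculated with
# 0.1 mL of a 10^-5 dilution of the virus stock.
print(plaque_titer_pfu_per_ml(42, 1e-5, 0.1))  # 4.2e7 PFU/mL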
To perform infections, A549 cells were seeded to achieve a confluency of 80% at the time of the infection. The day after, cells were washed with PBS and infected with IAV PR8 or VSV at a MOI of 3, prepared in SFM. After one hour, cells were overlaid with DMEM supplemented with 20% FBS and 1% P/S and incubated for the desired times at 37 °C in 5% CO2. In the case of samples used to determine viral production, cells were infected and, after 1 h of incubation, the supernatant was removed, cells were rinsed with 1x acid wash buffer (135 mM NaCl, 10 mM KCl, 40 mM citric acid, pH 3), washed in PBS and incubated in SFM supplemented with 0.14% BSA and 1 µg/mL trypsin-TPCK (for IAV only), for up to 16 hpi.
Unfolded protein response inhibitors
UPR inhibitors (see key resources table) were added to the cells, at the indicated concentrations, 1 h prior to infection. Different concentrations of each inhibitor were tested for cell viability to define which one to use. The number of infectious particles formed was normalized to the cell viability in each condition.
Experiments with the HASQ-6Cl compound
The HASQ-6Cl compound was solubilized in 100% ethanol (EtOH) to obtain a stock concentration of 100 mM. The cytotoxicity of HASQ-6Cl and EtOH was assessed by MTT cell viability assay. To do that, cells were seeded into a 96-well plate at a density of 8×10³ cells/well and allowed to adhere for 24 h at growing conditions. The day after, cells were treated with various concentrations of the compound (100, 50, 25 and 10 µM) or the corresponding EtOH percentage (1, 0.5, 0.25 and 0.1%) and incubated for 30 h at 37 °C in a CO2 incubator. Compound-containing solutions were sterilized using a 0.2 µm filter. Cells were then washed twice with PBS and incubated in DMEM with 10% MTT (working solution 5 mg/mL in phosphate buffer) for 2 h at growing conditions. Lastly, this medium was removed, the formazan crystals formed were solubilized using DMSO for 10 min at room temperature, and the intensity was quantified at 575 nm. Untreated cells were used as control and the blank value was subtracted from all conditions. Further experiments were performed using 50 µM of HASQ-6Cl. The compound was added to the cells 12 h before infection.
Plasmids and transfection
A549 cells were transfected with GFP-RIG-I-CARD by a 10 h incubation with Lipofectamine 3000 Transfection Reagent (Invitrogen) according to the manufacturer's protocol.
Immunocytochemistry and microscopy analyses
Cells grown on 12 mm glass coverslips were fixed for 20 min with 4% paraformaldehyde in PBS, pH 7.4, followed by permeabilization with 0.2% Triton X-100 for 10 min, blocking with 1% BSA for 10 min, and immunostaining with the indicated primary and secondary antibodies for 1 h each. This procedure was done at room temperature, with cells being washed three times between each step. Primary antibodies and the fluorophores used are listed in the key resources table. When needed, staining with the Proteostat Aggresome Detection Kit (Enzo Life Sciences International) was performed for 30 min. Cells were additionally incubated with Hoechst dye for 2 min before being mounted on slides using Mowiol 4-88 (AppliChem Inc.) containing propyl gallate (Sigma-Aldrich). Confocal images were acquired using a Zeiss LSM 880 confocal microscope (Carl Zeiss) with Plan-Apochromat 63× and 100×/1.4 NA oil objectives, a 561 nm DPSS laser and the 488 nm argon laser line (BP 505-550 and 595-750 nm filters).
All images chosen are representative of at least three independent experiments, and further processing or quantification was done using ZEN Blue (Carl Zeiss) or Fiji (NIH) software. Quantification of the NP intensity in PR8-infected A549 cells was performed after sketching the region of interest (ROI) of the nucleus of each cell and obtaining the corresponding intensity mean gray value using the ZEN Blue software for image processing (Carl Zeiss). Characterization of the viral inclusions positive for Rab11 by size in infected cells was performed using Fiji/ImageJ software (NIH). The vesicles were divided into groups according to size, based on previous reports. 27,49,52 In uninfected cells, the areas of viral inclusions are consistent with values <0.15 µm². 52 With infection, the frequency distribution of inclusions measuring 0.15-0.30 µm² and bigger than 0.3 µm² increased significantly in relation to non-infected cells. 50 As we propose to compare the areas in infected cells only, we set four intervals comprising viral inclusions with a size (1) up to 0.30 µm², (2) 0.30-0.60 µm², (3) 0.60-0.90 µm², and (4) larger than 0.90 µm².
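The size classification described above can be reproduced in a few lines of code; the sketch below bins a list of inclusion areas (in µm²) into the four intervals used in this work, assuming the areas have already been exported from Fiji/ImageJ (the file name is a placeholder).

import numpy as np

areas_um2 = np.loadtxt("inclusion_areas_um2.txt")  # one Rab11-positive inclusion area per line

# Four intervals used for infected cells: <=0.30, 0.30-0.60, 0.60-0.90, >0.90 um^2.
bins = [0.0, 0.30, 0.60, 0.90, np.inf]
counts, _ = np.histogram(areas_um2, bins=bins)
frequency = counts / counts.sum()

for lo, hi, f in zip(bins[:-1], bins[1:], frequency):
    print(f"{lo:.2f}-{hi:.2f} um^2: {100 * f:.1f}%")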
Isolation of the insoluble protein fraction
To obtain total protein extracts, cell pellets were resuspended in Empigen Lysis Buffer (ELB) (0.5% Triton X-100, 50 mM HEPES, 250 mM NaCl, 1 mM DTT, 1 mM NaF, 2 mM EDTA, 1 mM EGTA, 1 mM PMSF, 1 mM Na3VO4, supplemented with a cocktail of protease inhibitors). Protein extracts were then sonicated and centrifuged for 20 min at 200 g at 4 °C. In the end, supernatants were kept for the subsequent quantification of total protein. During all procedures, cells were kept on ice to avoid the activity of proteases. Quantification of total protein was performed using the Pierce BCA Protein Assay Kit (Thermo Scientific), following the manufacturer's instructions.
To isolate the insoluble protein fraction, 100 µg of total protein was diluted in ELB to reach a final volume of 100 µL. Samples were centrifuged at 16,000 g for 20 min at 4 °C to obtain the insoluble fraction. The resulting pellet was solubilized in ELB supplemented with 20% NP-40 (10%). Samples were sonicated for three cycles of five seconds (0.5 duty cycle, 50-60% amplitude). After the sonication cycles, samples underwent another centrifugation round (16,000 g, 20 min at 4 °C). The supernatant was removed, the pellet was resuspended in ELB and, after adding loading buffer, samples were denatured at 95 °C for 5 min and run on an SDS-PAGE gel. In parallel, 10 µg of total protein extract was run as loading control. Finally, the gel was stained with BlueSafe (NZYTech) solution for at least 30 min and, after destaining with distilled water, gels were scanned using the Odyssey Infrared Imaging System (LI-COR Biosciences). The relative amount of insoluble protein for each sample was calculated by dividing the intensity of the insoluble protein extract by the intensity of the corresponding total protein extract.
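The final quantification step is a simple ratio of band intensities; the snippet below illustrates this normalization with hypothetical densitometry values.

# Hypothetical densitometry read-outs (arbitrary units) for one sample.
insoluble_intensity = 1.8e6   # lane loaded with the insoluble pellet
total_intensity = 9.0e6       # lane loaded with the corresponding total extract

relative_insoluble = insoluble_intensity / total_intensity
print(f"Relative insoluble protein: {relative_insoluble:.2f}")  # 0.20 in this example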
Gel electrophoresis and immunoblotting
Cells were pelleted and washed before being resuspended in ELB supplemented with a protease-inhibitor mix. Samples were then incubated on a rotary mixer for 15 min at 4 °C and, after sonication using a Clifton SW3H 38 kHz bath (5 cycles of 30 s on; 30 s off), the incubation was repeated. After clearing by centrifugation (17,000 g, 15 min at 4 °C), the supernatant was collected and the protein concentration was determined using the Pierce BCA Protein Assay Kit (Thermo Scientific). Protein samples were separated by SDS-PAGE on 10 or 12.5% polyacrylamide gels, alongside a pre-stained protein marker (GRS Protein Marker Multicolour Tris-Glycine 4-20%, Grisp), wet-transferred to nitrocellulose (PROTAN), and analyzed by immunoblotting.
Immunoblots were processed using specific primary and secondary antibodies (see key resources table ).For quantification, immunoblots were scanned with Odyssey CLx (LI-COR Biosciences) or with ChemiDoc Imaging System (BioRad), and processed using the volume tools from Image Studio Lite Ver 5.2 software (LI-COR Biosciences).
RNA extraction, cDNA synthesis and quantitative real-time polymerase chain reaction
Total RNA was isolated using NZYol reagent according to the manufacturer's instructions and quantified using a DS-11 spectrophotometer (DeNovix Inc.). cDNA was synthesized from 0.5 to 1 µg of RNA, after a pre-treatment with DNase (Thermo Scientific), using RevertAid Reverse Transcriptase (Thermo Scientific) and an Oligo-dT15 primer (Eurofins Genomics) following the manufacturer's protocol.
When needed, primer sequences were designed using Beacon Designer 7 (Premier Biosoft). The oligonucleotides used are listed in the key resources table. GAPDH was used as a reference gene. RT-qPCR was performed in duplicate using 2x SYBR Green qPCR Master Mix (Low Rox) (Bimake); cDNA samples were diluted 1:10 and the concentration of each primer was 250 nM, with the exception of IFN-β, for which we used undiluted samples and a primer concentration of 500 nM. Reactions were run on an Applied Biosystems 7500 Real-Time PCR System (Applied Biosystems). The thermocycling reaction consisted of heating at 95 °C for 3 min, followed by 40 cycles of a 12 s denaturation step at 95 °C and a 30 s annealing/elongation step at 60 °C. The fluorescence was measured after the extension step using the Applied Biosystems software (Applied Biosystems). After the thermocycling reaction, the melting step was performed with slow heating, starting at 60 °C and with a rate of 1%, up to 95 °C, with continuous measurement of fluorescence. Data analysis was performed using the quantitative 2^-ΔΔCT method.
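The 2^-ΔΔCT calculation referred to above follows the standard Livak method; the sketch below shows the arithmetic for one target gene normalized to GAPDH and to the mock condition, using invented Ct values purely for illustration.

def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """2^-ddCt relative expression (Livak method), normalized to a reference gene and a control condition."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example with made-up Ct values: target gene in infected vs. mock cells, GAPDH as reference.
print(fold_change_ddct(24.0, 18.0, 27.0, 18.5))  # ~5.7-fold up-regulation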
LC-MS/MS analyses
Samples were processed for proteomics analysis by the proteomics facility at i3S - Instituto de Investigação e Inovação em Saúde, Porto, Portugal. Briefly, enzymatic digestion was performed with Trypsin/LysC (2 µg) overnight at 37 °C at 1000 rpm. Protein identification and quantitation were performed by nanoLC-MS/MS on an Ultimate 3000 liquid chromatography system coupled to a Q-Exactive Hybrid Quadrupole-Orbitrap mass spectrometer (Thermo Scientific). Samples were loaded onto a trapping cartridge (Acclaim PepMap C18, 100 Å, 5 mm × 300 µm i.d., 160454, Thermo Scientific) in a mobile phase of 2% ACN, 0.1% FA at 10 µL/min. After 3 min of loading, the trap column was switched in-line to a 50 cm × 75 µm inner diameter EASY-Spray column (ES803, PepMap RSLC, C18, 2 µm, Thermo Scientific, Bremen, Germany) at 300 nL/min. Separation was achieved by mixing A: 0.1% FA and B: 80% ACN, with the following gradient for the total protein fraction: 5 min (2.5% B to 10% B), 120 min (10% B to 30% B), 20 min (30% B to 50% B), 5 min (50% B to 99% B) and 10 min (hold 99% B). Subsequently, the column was equilibrated with 2.5% B for 17 min. A specific gradient was used for the insoluble fractions: 5 min (2.5% B to 10% B), 30 min (10% B to 30% B), 50 min (30% B to 50% B), 45 min (50% B to 99% B) and 30 min (hold 99% B). Data acquisition was controlled by Xcalibur 4.0 and Tune 2.9 software (Thermo Scientific).
The raw data were processed using Proteome Discoverer 2.4.0.305 software (Thermo Scientific) and searched against the UniProt database for the Homo sapiens proteome 2020_01 (74,811 sequences) and the UniProt database for influenza A virus (strain A/Puerto Rico/8/1934 H1N1) 2020_04. The Sequest HT search engine was used to identify tryptic peptides. The ion mass tolerance was 10 ppm for precursor ions and 0.02 Da for fragment ions. The maximum number of allowed missed cleavage sites was set to 2. Cysteine carbamidomethylation was defined as a constant modification. Methionine oxidation and protein N-terminus acetylation were defined as variable modifications. Peptide confidence was set to high. The processing node Percolator was enabled with the following settings: maximum delta Cn 0.05; decoy database search target FDR 1%; validation based on q-value. Protein label-free quantitation was performed with the Minora feature detector node at the processing step. Precursor ion quantification was performed at the processing step with the following parameters: unique plus razor peptides were considered for quantification, precursor abundance was based on intensity, normalization mode was based on total peptide amount, protein ratio calculation was pairwise-ratio based, imputation was not performed, and the hypothesis test was based on a t test (background based). Gene ontology analysis of the LC-MS/MS data was performed using Cytoscape (ClueGO app) or the STRING online database.
QUANTIFICATION AND STATISTICAL ANALYSIS
Statistical analysis was performed with GraphPad Prism 9 (GraphPad Software). Quantitative data were obtained from at least three independent experiments and represent the means ± standard error of the mean (SEM). To determine the statistical significance among experimental groups, two-way ANOVA followed by Dunnett's multiple comparison test was applied; comparisons between two groups were made by Student's t test. All the statistical details of the experiments can be found in the figure legends. P values ≤ 0.05 were considered significant.
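For completeness, the two-group comparisons mentioned above correspond to ordinary unpaired t tests; the sketch below shows such a test on two invented measurement vectors using SciPy (the ANOVA analyses were run in GraphPad Prism and are not reproduced here).

from scipy import stats

# Invented example data: normalized viral titers from three independent experiments per group.
control = [1.00, 0.95, 1.08]
treated = [0.52, 0.47, 0.55]

t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")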
Figure 1 .
Figure 1. Analysis of the host cell UPR during IAV infection (A) Schematic representation of the three main branches of the UPR in the ER. (B) Schematic representation of the influenza A virus life cycle. Important steps in infection are indicated with the usual time frames post infection in A549 cells. (C-E) Analysis of the relevance of the UPR PERK branch during infection with IAV in A549 cells. (C) Western blot analysis of the expression levels of PERK, p-eIF2α/eIF2α, CHOP, ATF4 and GADD34 proteins in A549 cells, following IAV PR8 infection at different times post infection. Tubulin was used as internal control. Data represent the means ± SEM of three independent experiments. (D) Similar analyses as in C, but in the presence of 0.33 µM of the PERK inhibitor GSK-2656157. Treatment with 500 nM thapsigargin for 8 h served as positive control for PERK activation. Quantification values were normalized to mock and represent the average ± SEM of at least three independent experiments. Tubulin was used as internal control. (E) Plaque assay analysis of the viral titer obtained upon IAV PR8 infection of A549 cells, in the presence or absence of 0.33 µM GSK-2656157. Data represent the means ± SEM of three independent experiments. (F-H) Analysis of the relevance of the UPR ATF6 branch during infection with IAV in A549 cells. (F) Western blot analysis of the expression of BiP protein in A549 cells infected with IAV PR8 at various times post-infection and (G) upon stimulation with 500 nM thapsigargin for 8 h, in the presence of 0.1 mM AEBSF or 30 µM PF-429242 (ATF6 inhibitors). Data represent the means ± SEM of three independent experiments. (H) Plaque assay analysis of the viral titer of A549 cells infected with IAV PR8 in the presence or absence of 0.1 mM AEBSF or 30 µM PF-429242. Data represent the means ± SEM of three independent experiments. (I-L) Analysis of the relevance of the UPR IRE1 branch during infection with IAV in A549 cells. (I) Western blot analysis of the expression levels of the phosphorylated and total forms of the IRE1 protein at different times post infection. Quantification values were normalized to mock-infected cells and tubulin was used as internal control. Data represent the means ± SEM of three independent experiments. (J) RT-qPCR analysis of the splicing of XBP1 at different times post infection in relation to mock-infected cells and (K) upon stimulation with thapsigargin or infection with IAV PR8, in the presence or absence of 20 µM 4µ8C (IRE1 inhibitor), in relation to untreated cells. Data represent the means ± SEM of three independent experiments. (L) Plaque assay analysis of the viral titer obtained upon IAV PR8 infection in the presence or absence of 4µ8C. Data represent the means ± SEM of at least three independent experiments. *p < 0.005, **p < 0.001, ***p < 0.0001, ****p < 0.00001 using Student's t test. See also Figure S1.
Figure 2 .
Figure 2. Analysis of the accumulation of misfolded proteins in IAV-infected cells (A) Characterization of the insoluble protein fraction of A549 cells infected with IAV PR8 at different times post infection. Data represent means ± SEM of at least three independent experiments, *p < 0.05, ****p < 0.0001 using Student's t test. (B) Aggresome formation in mock- or IAV PR8-infected A549 cells at 8 hpi. Confocal images of (a, d) viral NP, (b, e) Proteostat dye and (c, f) merged image. (C) Aggresome formation in mock- or IAV PR8-infected HeLa HSPB1:GFP cells at 8 hpi. Confocal images of (a, e) endogenous HSPB1:GFP, (b, f) Proteostat dye, (c, g) anti-NP and (d, h) merged images. Arrows indicate the presence of aggresome-like structures. Bars represent 10 µm. See also Figure S2.
Figure 3 .
Figure 3. Analysis of the host proteins present in the insoluble fraction of infected cells by LC-MS/MS (A) Experimental approach used to isolate and characterize the insoluble protein fraction.(B-C) Comparison between the insoluble protein fractions in IAV PR8-and mock-infected cells.(B) Venn diagram representing the number of host's proteins identified in the insoluble fractions from IAV PR8-infected and non-infected cells.Characterization of insoluble proteins found solely in the insoluble fraction of IAV PR8-infected cells using the ClueGo plugin in Cytoscape, based on the results from the LC-MS/MS analysis of three independent experiments.For this analysis, proteins identified by peptides and unique peptides >2 in at least 2 experiments were considered.(C) Gene ontology analysis (using ClueGO) of the enriched insoluble proteins in IAV PR8-infected cells.See also Tables S1-S4.
Figure 4 .
Figure 4. Importance of protein aggregation for IAV propagation (A) Simplified representation of the molecular hybridization reaction used to synthesize HASQ-6Cl.The combination of quinolines (in gray) and steroids (in blue) in one new chemical entity, generates a hybrid molecule with wide-range of anti-aggregation functions.(B) Analysis of the insoluble fraction of A549 cells infected for 8 h with IAV PR8 in the absence or presence of HASQ-6Cl.The final ratios were obtained by first normalizing the intensity of insoluble proteins fractions to the intensity of total fraction, followed by its normalization to the correspondent mock, and finally its normalization to IAV PR8.Data represents the means G SEM of three independent experiments.(C) Viral titers of IAV PR8 after treatment of A549 cells with HASQ-6Cl.Values were normalized to IAV PR8 and represent average GSEM of at least three independent experiments, ***p < 0.001, in Student's t test.(D) Confocal images of A549 cells infected with IAV PR8 for 4 h and 8 h in the absence or in the presence of HASQ-6Cl.Viral NP is stained in green and nuclei are in blue (DAPI).Bars represent 10 mm.Data represents the mean fluorescence intensity of viral NP in the nucleus as box and whiskers min to max of three independent experiments, ****p < 0.0001 using ordinary one-way ANOVA followed by Bonferroni's multiple comparisons test.See also Figure S3.
Figure 5 .
Figure 5. Analysis of the effect of HASQ-6Cl on the host cell's immune response and UPR signaling (A) RT-qPCR analysis of the splicing of XBP1 mRNA in A549 cells infected with IAV PR8 for 8 h in the absence or in the presence of HASQ-6Cl, in relation to mock-infected cells. Data represent the means ± SEM of three independent experiments. *p < 0.05 in two-way ANOVA, with Bonferroni's comparison test. (B) Western blot analysis of eIF2α phosphorylation and BiP expression in A549 cells infected with IAV PR8 at different times post infection in the absence or in the presence of HASQ-6Cl. Tubulin was used as internal control. Data represent the means ± SEM of three independent experiments. (C-E) Analysis of the effect of HASQ-6Cl on the innate immune response. (C) RT-qPCR analysis of IFNβ mRNA expression in A549 cells after stimulation with RIG-I-CARD for 10 h in the absence or in the presence of HASQ-6Cl, normalized to control. Data represent the means ± SEM of three independent experiments. (D) Western blot analysis of pSTAT1 activation following stimulation of A549 cells with RIG-I-CARD for 10 h in the absence or in the presence of HASQ-6Cl. (E) RT-qPCR analysis of IFNβ mRNA expression in A549 cells after infection with IAV PR8 for 8 h in the absence or in the presence of HASQ-6Cl. Data represent the means ± SEM of three independent experiments. (F) Viral protein abundance in the insoluble fraction of A549 cells infected with IAV PR8 in the presence or absence of HASQ-6Cl. The color gradient represents the relative abundance of each viral protein in comparison to the other viral proteins' abundances, considering both samples, from the lowest (red) to the highest (blue) value. (G) Characterization of the insoluble protein fraction of IAV PR8-infected cells in the presence of HASQ-6Cl. GO term analysis (using ClueGO) of the insoluble proteins diminished in cells infected with IAV PR8 after HASQ-6Cl treatment versus IAV PR8-infected cells (abundance ratio <0.65 and p value <0.05, considering three biological replicates).
Figure 6 .
Figure 6. Analysis of the accumulation of misfolded proteins in VSV-infected cells (A) Characterization of the insoluble protein fraction of A549 cells infected with VSV at different times post infection. (B) VSV titers after treatment of A549 cells with HASQ-6Cl. Values were normalized to IAV PR8. Data represent means ± SEM of at least three independent experiments, *p < 0.05, ****p < 0.0001 using Student's t test.
Figure 7 .
Figure 7. HASQ-6Cl interferes with viral inclusions and impedes the proper assembly of new virus particles (A) Western blot analysis of the expression levels of each of the viral proteins in IAV PR8-infected A549 cells at different times post infection, in the presence or absence of HASQ-6Cl. Tubulin was used as internal control. Quantification values were obtained after normalization to tubulin, followed by the ratio between the intensity of a viral protein in an infected sample pre-treated with HASQ-6Cl (IAV PR8+HASQ-6Cl) and in an infected sample (IAV PR8), referred to as (IAV PR8+HASQ-6Cl):IAV PR8. Data represent the means ± SEM of at least three independent experiments. **p < 0.001 using t test. (B) Characterization of the size of the viral inclusions formed upon infection with IAV PR8 for 8 h or 12 h in A549 cells, in the absence or in the presence of HASQ-6Cl. Confocal images of (a, d, g, j) viral NP, (b, e, h, k) Rab11 and (c, f, i, l) merged images. Bars represent 10 µm. Data represent the frequency distribution of viral inclusions by size as means ± SEM of three independent experiments, *p < 0.05 using two-way ANOVA followed by Bonferroni's multiple comparisons test. See also Tables S5-S7.
Table 1 .
Aggregation prediction of host and influenza A virus (in bold) proteins identified in the insoluble fractions of infected cells using PASTA 2.0. Beta-amyloid and BSA were used as positive and negative controls, respectively. Host proteins with best energy values higher than BSA but below the cutoff are in italic; host proteins with best energy values below BSA are not represented. # refers to viral proteins found in the insoluble protein fraction of infected cells. | 2024-02-06T17:04:40.712Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "6092f61a89830967b6d2c399bc196a3b8035134f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.isci.2024.109100",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c29a95db6cc22b842681d98ade966c53b510127d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
21945340 | pes2o/s2orc | v3-fos-license | Multi-Objective Design Space Exploration for the Optimization of the HEVC Mode Decision Process
Finding the best possible encoding decisions for compressing a video sequence is a highly complex problem. In this work, we propose a multi-objective Design Space Exploration (DSE) method to automatically find HEVC encoder implementations that are optimized for several different criteria. The DSE shall optimize the coding mode evaluation order of the mode decision process and jointly explore early skip conditions to minimize the four objectives a) bitrate, b) distortion, c) encoding time, and d) decoding energy. In this context, we use a SystemC-based actor model of the HM test model encoder for the evaluation of each explored solution. The evaluation that is based on real measurements shows that our framework can automatically generate encoder solutions that save more than 60% of encoding time or 3% of decoding energy when accepting bitrate increases of around 3%.
I. INTRODUCTION
During the past two decades, video coding technologies have evolved rapidly. Each coding standard has introduced new coding tools aiming at reducing the bitrate required to save and transmit the video data. As a consequence, the latest standard that was finalized in 2013, High-Efficiency Video Coding (HEVC) [1] incorporates a tremendous amount of different coding tools that can all be exploited to encode a given sequence as efficiently as possible.
Due to the high amount of coding tools, the complexity of the encoding process has increased dramatically in comparison to former coding standards. The HEVC encoder not only has to choose between different coding modes such as intra- or inter-coding, it also needs to decide on the most suitable block size. Common solutions like the HM reference software [2] usually perform exhaustive searches, testing most of the possible coding modes for rate and distortion and choosing the mode with the least rate-distortion cost. Hence, research aiming at reducing the encoding complexity can help to develop efficient and real-time capable encoding solutions.
To this end, in the past couple of years, many publications presented novel and effective encoding methods, such as the mechanisms of Early Skip (ESD) [3], Early CU (ECU) [4], and CFB fast method (CFM) [5], which have been included in the HM reference software to speedup the mode decision process. In another work, Zhang et al. [6] targeted explicitly the intra mode decision process and investigated how rough mode decisions can help finding a reduced set of mode candidates and achieved a speedup of 2.5. In another work, Heindel et al. [7] proposed analyzing the variance of the intra-reference samples to exclude angular modes when a certain threshold is kept. They achieved encoding complexity savings of up to 50%. For the inter prediction process, Cao et al. [8] proposed a fast motion estimation scheme for fractional pel interpolation and achieved savings of 33%. Shang et al. [9] proposed an analysis approach to perform a fast CTU and PU partitioning through correlations between neighboring CUs and PUs to achieve encoding time savings of 38% on average. Finally, Vanne et al. [10] investigated how early skipping of modes and the encoding order influence the rate-distortion performance and complexity and proposed a corresponding encoding order.
All the above mentioned work aims at reducing the encoding time or complexity while keeping the bitrate increase at a low level. We adopt this approach as encoding is often performed on desktop PCs, servers, or clusters where fast processing is beneficial. As a second objective, we propose using the decoding energy which is especially interesting for batterydriven portable devices like smartphones or tablet PCs which are nowadays commonly used for video streaming. An energy model for estimating the decoding energy has been developed [11] and successfully used to encode decoding-energy saving bit streams [12], where energy savings of 10%−20% (at bitrate increases in the same order of magnitude) were reported.
In our work, we aim at minimizing these two objectives (decoding energy and encoding speed) and the two classic ones, namely bitrate and distortion, jointly. As our first main contribution, we propose the optimization of the mode decision process through an automatic multi-objective Design Space Exploration (DSE) of the coding mode evaluation order with respect to these four objectives. Second, we evaluate a simple heuristic to explore early skip conditions during the automatic DSE approach. For the evaluation of different encoding solutions, we take a SystemC-based, actorized model of the HM-16.0 encoder [13]. This actorized HEVC encoder has the advantage of providing the complete set of HEVC prediction modes, as well as to provide the flexibility to customize the evaluation order of the mode decision process, and to specify early skip conditions for each coding mode. Note that although we are using the HM-reference software, the DSE and the resulting solutions are generally applicable to any encoder implementation. Finally, in the results section we will show that our approach is capable to automatically find a set of Pareto-optimal encoding solutions nicely characterizing the desired trade-off between bitrate, quality, encoding time, and decoding energy.
The paper is organized as follows: Section II introduces the HEVC encoder and its coding modes. Afterwards, Section III introduces actor-based modeling, presents the actorization for the HM software, and shows a proposed genetic representation for design space exploration (DSE) that uses multiobjective evolutionary algorithms [14]. Afterwards, Section IV introduces the four objectives and how they are determined in the DSE and the evaluation. Then, Section V presents rate-distortion performance, complexity savings, and decoding energy savings in comparison to the HM-reference software using real measurements and interprets the results. Section VI concludes the paper.
II. HEVC ENCODING
The main goal of a typical encoder is to compress video sequences such that the rate and the distortion is minimized. To achieve this in HEVC, a frame is divided into so-called Coding Tree Units (CTUs) with a maximum size of 64 × 64 pixels. Such a CTU can be split recursively into Coding Units (CUs) of size 32 × 32, 16 × 16, or 8 × 8, where the so-called depth d can have a value in between 0 (64 × 64) and 3 (8 × 8).
Then, for each CU of a given size, the prediction can either be intraframe or interframe. For the intraframe case, and for a CU size of 8 × 8, the CU can be further split into four rectangular Prediction Units (PUs). For interframe coded CUs, the block can be split into two rectangular PUs, where two symmetric (Nx2N and 2NxN) for all block sizes and four asymmetric partitionings (2NxnD, 2NxnU, nRx2N, nLx2N) for block sizes 64 × 64, 32 × 32, and 16 × 16 are available.
Moreover, each PU can be coded in different coding modes. For our work, the interframe coding modes skip, merge, and inter are of special interest: in skip mode, neither residual coefficients nor a motion vector difference is coded; in merge mode, only the motion vector difference is forced to zero; and in inter mode, none of these constraints hold.
After prediction, the residual error is transformed and quantized. Furthermore, in-loop filtering (a deblocking filter (DBF) and a sample adaptive offset (SAO) filter) can be performed. An overview of the complete codec and more details on the coding modes can be found in [1]. For this work, especially the CTU partitioning, the interframe prediction modes, and the prediction partitionings are of interest.
The HM encoder exhaustively tests most of the possible modes and chooses the one with the least rate-distortion cost. Explicit functions are defined that calculate this cost for the 11 modes shown in Table I, which comprise each of the modes introduced above as well as a split mode (meaning that the current CU is split into four sub-CUs). In the next section, we show our DSE approach to explore and generate optimized encoder implementations based on the coding modes shown in Table I.
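As a rough sketch of this exhaustive strategy (our illustration, not the HM implementation; the Mode interface and the cost values in main() are invented placeholders), the mode decision can be pictured as a simple minimum search over per-mode RD costs:

#include <cstdio>
#include <functional>
#include <limits>
#include <vector>

// Hypothetical mode interface: evaluate() returns the rate-distortion cost
// J = D + lambda * R of coding the current CU with this mode. The ids follow
// the numbering described in the text (0..10, 10 = split).
struct Mode {
    int id;
    std::function<double()> evaluate;
};

// Exhaustive mode decision: evaluate every candidate, keep the smallest RD cost.
int chooseBestMode(const std::vector<Mode>& modes) {
    double bestCost = std::numeric_limits<double>::infinity();
    int best = -1;
    for (const Mode& m : modes) {
        const double cost = m.evaluate();
        if (cost < bestCost) { bestCost = cost; best = m.id; }
    }
    return best;
}

int main() {
    std::vector<Mode> modes = {
        {0,  [] { return 120.0; }},   // e.g. a skip-type mode
        {1,  [] { return 105.5; }},   // e.g. Inter2Nx2N
        {10, [] { return 140.0; }},   // split
    };
    std::printf("best mode id: %d\n", chooseBestMode(modes));
    return 0;
}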
III. ACTOR-BASED ENCODER
We optimize the mode decision process by searching for a) the best processing order and b) the best skip conditions. For this purpose, we use the actor-based encoder model introduced in [13] and implemented in the MAESTRO [15] framework. An actor-based model enables us to encapsulate the functionality of an encoding mode and its condition for evaluation in a single actor. It also allows us to explore parallel (multi-core) implementations of the standard. In this work, each of the mode evaluation functions introduced in Table I is implemented as an actor.
As the functionality of each actor can be executed independently from other actors, the execution order can be chosen freely. This is the first degree of freedom (DoF) that we will explore in the DSE.
To be able to encode standard-compliant bit streams, not all of the modes have to be executed. Hence, mode guards can be defined that skip the execution of a certain actor under conditions that can be chosen freely, except that it has to be assured that at least one of the actors is always called. These mode guards, which enable the conditional execution of modes, are the second DoF that we explore automatically during the DSE.
A. Best-Mode Conditional Evaluation
The coding mode evaluation condition can, e.g., be based on the best mode so far or on the current costs, but also on more complicated conditions depending on the current block size or on specific properties of the best mode or even the second-best mode. In our work, we choose the best mode so far as the criterion for each individual evaluation of a coding mode. The best mode so far is the mode that, among all modes that have already been tested, shows the lowest rate-distortion cost. We only evaluate the next mode if the best mode so far is the mode indicated by the mode guard. Naturally, this is a highly constrained approach and more general constraints could be defined. Nevertheless, in this work we choose this basic approach to prove that the concept has the potential to derive highly efficient encoding solutions. Furthermore, the search space for the DSE is much smaller such that efficient solutions can be found in an acceptable amount of time.
B. Design Space Exploration
As the optimization algorithm for our DSE we choose the multi-objective evolutionary algorithm presented in [14]. In order to perform such a DSE, a so-called genetic representation of the design space must be defined, see [16]. For the first degree of freedom, this representation is an order vector O in which each mode (cf. Table I) appears exactly once, and the order of execution corresponds to the order of appearance.
For the second DoF, we define another vector G that contains elements g ∈ {0, ..., 10}, where the element g refers to the best mode so far as defined in Table I. In our proposal, we decide to only execute the actor if the best mode so far is the mode which is indicated by the corresponding position in vector G. E.g., referring to Fig. 1, G[1] = 4 means that InterNx2N is only tested if the best mode so far is mode 4 (Inter2NxN). Hence, the vector only needs to have nine entries because when starting the search, at least two different modes have to be tested to be able to determine a currently best mode.
Note that for both vectors O and G, we allow different solutions for each depth d. That means that for a smaller block size, a different order and different guards can be used than for a bigger block size. This can be helpful as, e.g., low QPs tend to favor smaller block sizes such that bigger sizes can be skipped more often. Figure 1 shows the execution flow of a solution found by the DSE. The order given by vector O is represented by the red boxes. On top of the boxes, the mode guards (vector G) are represented by diamonds. After testing a coding mode, the comparator determines the currently best mode (top of the figure) which is then used to evaluate the guards.
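A minimal C++ sketch of this order/guard encoding could look as follows (our illustration, not the MAESTRO implementation; the placeholder cost function and the use of -1 for an "always tested" guard are assumptions):

#include <cstddef>
#include <cstdio>
#include <vector>

// O is the evaluation order of the 11 modes of Table I; G holds, for every
// entry of O except the first two, the "best mode so far" that must hold for
// the mode to be evaluated (-1 = always test, our stand-in for the hyphen).
struct Individual {
    std::vector<int> O;   // permutation of mode indices 0..10
    std::vector<int> G;   // guards for O[2..10]; values in 0..10 or -1
};

// Placeholder for the per-mode RD-cost evaluation; values are made up.
static double evaluateMode(int mode) { return 100.0 + 3.0 * mode; }

// Mode decision for one CU: a mode is only tested when its guard matches the
// currently best mode (or has no guard).
static int decide(const Individual& ind) {
    int best = -1;
    double bestCost = 1e300;
    for (std::size_t i = 0; i < ind.O.size(); ++i) {
        const int guard = (i < 2) ? -1 : ind.G[i - 2];
        if (guard != -1 && guard != best) continue;   // guard not met: skip mode
        const double cost = evaluateMode(ind.O[i]);
        if (cost < bestCost) { bestCost = cost; best = ind.O[i]; }
    }
    return best;
}

int main() {
    Individual ind{{10, 2, 3, 1, 4, 0, 5, 6, 7, 8, 9},
                   {2, 2, 4, -1, 1, 1, 1, 3, 3}};
    std::printf("chosen mode: %d\n", decide(ind));
    return 0;
}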
We choose an iterative optimization approach where all explored non-dominated solutions are stored in an archive. To determine the performance of a solution, we encode six different sequences at constant quantization parameters (QPs) 10, 20, 30 and 40, and determine the values of the four objectives PSNR, rate, encoding time, and decoding energy. We choose different QPs as they may lead to different optimal solutions. The results presented are obtained after 200 iterations for each QP. The determination of the objectives is introduced in the next section.
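The archive of non-dominated solutions can be sketched as follows (an illustrative C++ fragment, not the algorithm of [14]; the objective values in main() are made up, and quality is negated so that all four objectives are minimized):

#include <algorithm>
#include <array>
#include <cstdio>
#include <vector>

// Each solution is reduced to its four objective values, all formulated so
// that smaller is better: {rate, -PSNR, encoding time, decoding energy}.
using Objectives = std::array<double, 4>;

static bool dominates(const Objectives& a, const Objectives& b) {
    bool strictlyBetter = false;
    for (int i = 0; i < 4; ++i) {
        if (a[i] > b[i]) return false;        // worse in one objective: no dominance
        if (a[i] < b[i]) strictlyBetter = true;
    }
    return strictlyBetter;
}

// Insert a newly evaluated solution: discard it if dominated, otherwise add it
// and drop all archive members that it dominates.
static void updateArchive(std::vector<Objectives>& archive, const Objectives& s) {
    for (const Objectives& a : archive)
        if (dominates(a, s)) return;
    archive.erase(std::remove_if(archive.begin(), archive.end(),
                                 [&](const Objectives& a) { return dominates(s, a); }),
                  archive.end());
    archive.push_back(s);
}

int main() {
    std::vector<Objectives> archive;
    updateArchive(archive, {1000.0, -38.2, 12.0, 5.0});
    updateArchive(archive, {1100.0, -38.2, 10.0, 4.8});
    updateArchive(archive, {1200.0, -38.0, 13.0, 5.2});   // dominated, discarded
    std::printf("archive size: %zu\n", archive.size());   // 2
    return 0;
}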
IV. OBJECTIVES
As optimization objectives for the DSE we choose the traditionally optimized criteria distortion and rate, as a third criterion the encoding time, and finally the decoding energy. The calculation of rate and quality (PSNR) is readily implemented in the HM encoder and can be directly used for the DSE. To measure the encoding time, we instrument the code with a high-precision clock()-function taken from the C++ standard library. The actor-based HM encoder used for the DSE allows evaluating the parallel execution of the encoding process, providing a lot of flexibility in the design space. However, this means that the encoder is a major refactorization of the original HM code (i.e., the code is changed significantly without changing functionality), such that encoding times measured during the DSE are approximations. Hence, we generated an "unwrapped" encoder for each selected solution that, in comparison to the baseline HM encoder, only differs in terms of mode evaluation order and mode guards. The order and the guards (implemented as if-clauses) are generated from the vectors O and G as defined in Section III-B.
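One possible instrumentation of the encoding-time measurement is sketched below (our example using std::chrono; the exact clock and the measured call in the paper may differ, and the dummy workload merely stands in for the encoder run):

#include <chrono>
#include <cstdio>

// Wall-clock timing wrapper around an arbitrary callable (illustrative only).
template <typename Fn>
double secondsTaken(Fn&& encodeCall) {
    const auto t0 = std::chrono::steady_clock::now();
    encodeCall();                                            // run the encoder
    const auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();  // seconds
}

int main() {
    volatile double sink = 0.0;   // dummy workload standing in for encoding
    const double t = secondsTaken([&] { for (int i = 0; i < 1000000; ++i) sink += i; });
    std::printf("encoding time: %.3f s\n", t);
    return 0;
}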
The decoding energy is computed as follows: during the DSE, we estimate the decoding energy by analyzing the coded bit stream for so-called bit stream features f that are used to estimate the complete decoding energy Ê as
Ê = Σ_f n_f · e_f,
where n_f is the number of occurrences of a certain feature and e_f the feature's specific energy. A feature is, e.g., the intraframe prediction process of a block of a certain size, which a) can be counted and b) requires a rather constant amount of processing energy during decoding. We use the energy estimator that was presented in [11] and that is publicly available at [17]. This method makes sure that the DSE does not depend on real measurements. For the evaluation, we performed real measurements of the decoding process to prevent estimation errors. The decoding energy was measured for FFmpeg software decoding [18] on a Pandaboard [19], which has a smartphone-like architecture using an ARM processor. The measurement setup is the same as was presented in detail in [11].
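A minimal sketch of this feature-based estimate is given below (our illustration; the feature names and specific energies are invented placeholders, the real values being those of the model in [11]/[17]):

#include <cstdio>
#include <map>
#include <string>

// Estimate E_hat = sum over features f of n_f * e_f from feature counts and
// the per-feature specific energies.
static double estimateDecodingEnergy(const std::map<std::string, long>& featureCounts,
                                     const std::map<std::string, double>& specificEnergy) {
    double eHat = 0.0;
    for (const auto& [feature, n] : featureCounts) {
        const auto it = specificEnergy.find(feature);
        if (it != specificEnergy.end())
            eHat += static_cast<double>(n) * it->second;   // n_f * e_f
    }
    return eHat;   // estimated energy, e.g. in joules
}

int main() {
    const std::map<std::string, long>   counts = {{"intra8x8", 12000}, {"mcBiPred", 900}};
    const std::map<std::string, double> energy = {{"intra8x8", 2.1e-6}, {"mcBiPred", 7.4e-6}};
    std::printf("E_hat = %.4f J\n", estimateDecodingEnergy(counts, energy));
    return 0;
}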
V. EVALUATION
In this section, we show that the proposed approach is able to find not only one, but sets of efficient encoder solutions under multiple objectives. In the first subsection, we discuss results from the training process (the DSE). In the second subsection, we validate five different solutions taken from the DSE in a realistic scenario. By performing training and validation on different sets of input sequences, as shown in Table II, we show that the results obtained by the DSE are generally valid. To this end, we take a different video sequence from each class of the JCT-VC common test conditions [20] and evaluate encoding time, decoding energy, bitrate, and quality in terms of Y-PSNR for the original HM and the solutions selected from the DSE. As the encoder configuration, low-delay was used. Finally, the third subsection explicitly presents two showcase solutions and interprets their algorithmic behavior.
A. Training Results
The results of the training process for QP 20 are depicted in Figure 2. Results from the DSEs for the other QPs are not shown as the distribution of the points is similar. In this plot, rate increases, encoding time savings, and energy savings are displayed relative to the standard HM encoder. Each marker corresponds to a Pareto-optimal (non-dominated) solution found during the DSE, where 100 solutions are displayed. The PSNR (which is approximately constant for the fixed QP) is not depicted to enhance visibility. We can see that a shorter encoding time generally causes an increase in bitrate. Furthermore, we can see that rate is higher than HM for all solutions. This is caused by the implementation of the guards where many of the possible modes are not evaluated. Highest decoding energy savings are close to 3%, as indicated by the color of the markers.
To show more properties of some non-dominated solutions, we manually choose five explicit solutions for further analysis. In Figure 2, they are marked by black edged markers. They are chosen based on the following considerations: the first selected solution aims at minimizing the bitrate (leftmost marker PB), the second selected solution is one minimizing the decoding energy (yellow marker PE), the last three solutions denote Pareto-optimal points for different trade-offs between encoding time, decoding energy, bitrate, and quality (the remaining markers P1, P2, P3). We discuss these points in the next subsection in detail.
B. Validation
As our experiments indicate that the best mode orders and guards differ strongly depending on the QP, we combine solutions from the four independent DSEs for QPs 10, 20, 30, and 40 for the five considered points PB, PE, P1, P2, and P3. A combined solution corresponds to a QP-dependent coding order and mode guard realization, which, in a real application, could be chosen depending on the user-defined input QP. The selection of the combinations was done by choosing points with a similar rate-distortion-time-energy performance.
To compare the performance of these combined solutions, we calculate the Bjøntegaard delta bitrates (BD-rates) [21], mean encoding times, and Bjøntegaard delta decoding energies (BD-energy) [12] for the five considered points and compare them to three manually optimized encoding solutions from the literature [10] (points S14, S17 and S27). As a reference, we again take the values from the standard HM encoder. The results are depicted in Figures 3 and 4. Figure 3 plots the mean encoding time savings and the mean BD-rates for all considered points, where the mean over all validation sequences was taken. The horizontal and the vertical lines indicate the standard deviations. We can see that the reference solutions have a very low loss in bitrate and that they save almost 40% of encoding time. The points we suggest lose more bitrate but save more than 50% of encoding time. Here, especially points PB, P2, and P3 show interesting results, where the solution PB will be discussed in detail later. Figure 4 shows the encoding time savings over the BD-energy. We can see that by choosing point PE, it is possible to save 3% of decoding energy. The other points show a rather low impact on the decoding energy consumption.
(Table III caption: the numbers correspond to the mode indices shown in Table I; a hyphen in the guards vector indicates that the corresponding mode is always tested.)
C. Interpretation
Finally, we would like to discuss the solutions PE and PB in more detail. The concrete realizations are shown in Table III. We can see that the first two modes (split = 10 and Merge2Nx2N = 2) are always tested at depth d = 0. If a mode is located towards the end of the order vector O, it is less likely to be tested, because its single guard mode must still be the best mode so far. Looking at the mode order at the other depths, we find that modes 2 and 10 are always tested early. In this case, we can say that solution PB works well because mode 2 (Merge2Nx2N) is often chosen, which has the following two advantages:
• Merge mode testing is faster than the other modes (no motion or intra mode search required),
• In terms of BD-rate, the merge and skip modes are highly effective for high QPs (precision of motion vectors and residual coefficients is unimportant).
For point PE, we found that mode 1 (Inter2Nx2N) is chosen in most cases, which is again caused by the mode testing order. Analyzing the resulting bit streams we found that for PE, the number of bipredicted blocks is significantly lower than for PB, which causes the energy savings. Apparently, for Merge2Nx2N, biprediction is more likely to be chosen due to the automatically generated motion vector candidates, cf. [1], whereas Inter2Nx2N often favors uniprediction due to a more accurate motion search.
Furthermore, we can observe that the variance of the proposed solutions is much higher than for the reference points. This can be explained by the fact that in the reference, only the PU splitting modes for inter coding are skipped. In contrast, our approach allows the complete skipping of basic prediction modes like intra, skip, or splitting which can cause higher deviations depending on the input sequence.
VI. CONCLUSIONS
We presented a multi-objective DSE approach that is able to find encoding solutions that exploit the trade-off between encoding speedup, decoding energy, and BD-rate. Our automatic approach together with a simple early skip condition heuristic could determine solutions achieving speedups of more than 60% and decoding energy savings of more than 3%.
In future work, thanks to the actor-based model, a corresponding study could be performed for multi-core processors where the optimal encoding order could change due to potential parallel execution of coding modes. Furthermore, the encoding process could be further split into additional actors to be able to explore early decision possibilities at an even finer granular level. | 2017-04-24T17:55:40.230Z | 2022-03-03T00:00:00.000 | {
"year": 2022,
"sha1": "537cd688013ef89cdfd50326bd788f77e8b33c45",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "39f9b3fe7e8c78435777b1824cbabdf8185f1e66",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
30539532 | pes2o/s2orc | v3-fos-license | Fabrication of Bragg gratings in sub-wavelength diameter As2Se3 chalcogenide wires
The inscription of Bragg gratings in chalcogenide (As2Se3) wires with sub-wavelength diameter is proposed and demonstrated. A modified transverse holographic method employing He-Ne laser light at a wavelength of λw = 633 nm allows the writing of Bragg grating reflectors in the 1550 nm band. The gratings reach an extinction ratio of 40 dB in transmission and a negative photo-induced index change of δn ≈ 0.01. The inscription of Bragg gratings in chalcogenide microwires will enable the fabrication of new devices with applications in nonlinear optics, and sensing in the near-to-mid-infrared region of wavelengths.
Over the last two decades, fiber Bragg gratings (FBGs) have emerged as one of the most widely used fiber optic devices. As linear devices, FBGs are used for sensing in various application fields ranging from bio/chemical systems to mechanical ones [1,2]. FBGs have also received considerable scientific interest in nonlinear applications [3,4] and have been utilized for all-optical switching [5,6], pulse shaping [7], enhancement of super-continuum generation [8] and for slowing down the speed of light [9]. However, the minimum peak power required to observe nonlinear effects in silica FBGs is on the order of 1 kW. The use of sub-wavelength diameter fibers or microwires, where the mode intensity is greatly increased and a considerable fraction of the mode power is present in the evanescent field outside the microwire, not only reduces the optical power required to observe nonlinear effects in FBGs, but also enhances the sensitivity to the outside. The photo-inscription of Bragg gratings in silica microwires is however hard to achieve because the photosensitive core of the silica fiber vanishes upon tapering to such small diameters, which renders the wire insensitive to photo-inscription. Alternative techniques, such as femtosecond laser radiation [10], focused ion beam milling [11], metal deposition [12] and plasma etch post-processing [13], have been utilized in the past to fabricate Bragg gratings in microwires with diameters ranging from a few microns to tens of microns. However, these techniques are technologically challenging and often lead to surface damage. As of today, the photo-inscription of Bragg gratings in microwires is desirable, but has not been achieved so far.
Arsenic triselenide (As2Se3) is one of the chalcogenide glasses emerging as promising candidates for photonics and sensing applications [14]. As2Se3 is known to be highly photosensitive, it has a nonlinear coefficient that is ~ 930 times larger than that of silica glass and it is transparent in the 1-15 µm wavelength window [15]. As a result, the combination of microwire fabrication, the use of As2Se3 glass and the photo-inscription of FBGs is a promising approach for linear and nonlinear applications in the near-to-mid infrared region of wavelengths.
Recently, we have reported the fabrication of hybrid microwires made from an As2Se3 fiber surrounded by a protective PMMA coating [17] and observed a high waveguide nonlinear coefficient of γ = 133 W⁻¹·m⁻¹. The PMMA coating provides mechanical strength for normal handling of the microwire and allows controlling the level of evanescent interaction with the surrounding environment. In this letter, we report the first inscription of Bragg gratings in chalcogenide microwires. This is also the first time that Bragg gratings are being reported in any optical fiber waveguide with a sub-wavelength diameter. The evolution of the transmission spectrum as a function of time, both during and after the holographic exposure, is recorded and analyzed. A theoretical fit with coupled mode theory is provided to reveal the grating parameters. Also, we observe that the refractive index of As2Se3 microwires decreases upon exposure to 633 nm light.
The chalcogenide fiber used for the experiment is provided by CorActive High-Tech inc. The fiber has a core/cladding diameter of 7/170 µm and a numerical aperture of 0.2. The fiber is coated with a protective layer of PMMA, which is mostly transparent at the photo-inscription wavelength of λw = 633 nm (absorption coefficient = 5.7 × 10⁻⁴ cm⁻¹ [18]). The fiber is butt-coupled to a standard single-mode fiber made of silica and the two fibers are bonded permanently with UV epoxy. A hybrid As2Se3/PMMA microwire is then fabricated using an adiabatic tapering process as described in [17]. Fig. 1 depicts the various parts of a typical microwire. After tapering, the diameter and length of the As2Se3 wire region are 1 µm and 3 cm, respectively. The grating fabrication setup consists of a modified Mach-Zehnder type interferometer, as shown in Fig. 2(a). A He-Ne laser source (Spectra Physics, Model 106-1) at a wavelength of λw = 633 nm provides the holographic photo-inscription pattern. The absorption coefficient of As2Se3 at this wavelength is 1.5 × 10⁴ cm⁻¹ [19,20]. This high absorption coefficient at the writing wavelength allows for a quick grating fabrication process. The output of the He-Ne laser source is split into two coherent beams that interfere at the inner surface of a glass prism, and their angle θ with respect to the prism surface, as shown in Fig. 2(b), is adjusted to achieve Bragg gratings with a first-order resonance wavelength in the telecommunications C/L-band. The Bragg wavelength λBragg is controlled by the period Λ of the holographic pattern, which depends on the angle 2φ between the two interfering beams inside the prism, as given below.
λBragg = 2 neff Λ, (1)
where neff is the effective index of the propagating mode.
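As a numerical illustration of this relation (our sketch; the effective index value is the one quoted later in the text for the tapered wire), the grating period required for a resonance near 1574 nm can be computed as follows:

#include <cstdio>

// First-order Bragg condition lambda_B = 2 * n_eff * Lambda, solved for Lambda.
int main() {
    const double nEff        = 2.71;       // effective index of the As2Se3 microwire
    const double lambdaBragg = 1574.4e-9;  // resonance wavelength [m]
    const double period      = lambdaBragg / (2.0 * nEff);
    std::printf("required grating period: %.1f nm\n", period * 1e9);   // ~290 nm
    return 0;
}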
The internal angle φ is in turn defined by the external angle θ, and the two angles in the case of a right-angled prism are related by the Eq. 3, as follows.
The microwire is placed over the external surface of the prism with an index-matching fluid, as depicted in Fig. 2(b), filling the gap between the prism and the microwire in order to maximize the transmission at the prism interface. The interfering beams are expanded using a focusing lens in each arm of the interferometer. The interference pattern has a Gaussian intensity profile with an adjustable 1/e² full-width of 1-10 mm and a total writing power of 3 mW. The polarization of the two beams is identical and perpendicular with respect to the microwire axis in order to maximize the interference pattern contrast. The setup allows in-situ monitoring of the grating growth process, with a broadband signal sent through the grating and observed on an optical spectrum analyzer. Fig. 3 shows the growth dynamics of the Bragg grating during the photo-exposure. The grating length in this case is 8 mm and the interference pattern is apodized. A reversible and wavelength-independent transmission loss of 0.5 dB occurs during the process of photo-exposure, which can be observed from the spectra in Fig. 3. A dip in the transmission spectrum appears at λ = 1574.4 nm, reaches -8 dB in less than 3 minutes of exposure and eventually -40 dB after 7 minutes of exposure. Two observations can be made during this process: (1) the Bragg wavelength shifts towards shorter wavelengths, and (2) the width of the Bragg resonance increases. The first observation reveals that the refractive index of As2Se3 glass decreases upon the photo-exposure. This is supported by the appearance of spectral features representative of grating apodization next to the longer-wavelength edge of the Bragg resonance (which is clearer in Fig. 4). Note that previous studies on As2Se3 thin films have, in contrast, reported an increase in refractive index upon exposure to 633 nm light [21,22], which suggests that the refractive index change depends on the waveguide structure or the composition of the chalcogenide glass. A similar photoinduced index decrease in As2Se3 fiber was observed in an earlier report [23], but without quantification. The second observation during the photo-exposure reveals an increase in the AC refractive index of the grating. These two observations lead us to quantify the changes in the DC and AC refractive indices of the grating, i.e., ΔnDC(t) and ΔnAC(t) respectively, by using the following relations:
ΔnDC(t) = n0 [λB,current(t) − λB,initial] / λB,initial, (4)
ΔnAC(t) = n0 ΔλB,current(t) / λB,current(t), (5)
where n0 (= 2.71) is the effective index of the microwire, calculated using the beam propagation method; λB,current(t) and λB,initial in eq. (4) are the current and initial Bragg wavelengths, respectively; and ΔλB,current(t) in eq. (5) is the width of the current Bragg resonance. The temporal evolution of ΔnDC(t) and ΔnAC(t) is shown as an inset in Fig. 3. An AC refractive index change as high as 6.0 × 10⁻³ is observed and a DC refractive index change of ~10⁻² is observed. The AC refractive index increases to a maximum value after ~10 minutes of photo-exposure, and then decreases. This follows from a decrease in the modulation depth of the holographic pattern. In fact, during the photo-exposure of the microwire, when the refractive index at the maxima of the interference pattern decreases and eventually begins to saturate, the refractive index at the minima keeps decreasing at the same rate, thereby decreasing the modulation depth or the index contrast. This can be verified from Fig. 3 (inset), where it is shown that ΔnAC(t) starts dropping at the same time as the slope of ΔnDC(t) decreases, which indicates the onset of saturation. We find that the DC refractive index continues to decrease even up to 4 hours of photo-exposure, although the rate of change of ΔnDC(t) becomes very small after the first 30 minutes of exposure. Fig. 4 shows the spectrum of another grating, ~1 mm in length. A fit of the measured spectrum with coupled mode theory reveals a photoinduced refractive index change of ΔnAC = 2.5 × 10⁻³, which corresponds to a grating strength κL of 5.1, κ being the coupling coefficient given by πΔnAC/λB,current(t), and L being the grating length. The presence of spectral features only on the longer-wavelength side of the Bragg resonance shows that the grating is well apodized. We also studied the aging of the grating at room temperature and observed a shift of the Bragg wavelength by 1.5 nm towards longer wavelengths after 3 weeks of aging. The spectrum of the grating, as shown in Fig. 4, experiences no drastic degradation, which shows that the grating is quite stable in a lab environment.
Fig. 4. Transmission spectrum of a 1 mm long Bragg grating (red curve), and a theoretical fit of the transmission spectrum with coupled mode theory (blue curve). Also shown is the transmission spectrum of the grating after 3 weeks of exposure to ambient light. (Inset) Grating index profile used for the simulation, assuming the grating to be apodized following a Gaussian profile.
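The post-processing of such a measurement can be sketched as follows (our illustration, not the authors' analysis code; the resonance bandwidth of 1.5 nm is an assumed value chosen only to yield numbers of the same order as those quoted above):

#include <cstdio>

// DC index change from the Bragg-wavelength shift, AC index change from the
// resonance width, and grating strength kappa*L with kappa = pi*dn_AC/lambda_B.
int main() {
    const double pi            = 3.14159265358979;
    const double n0            = 2.71;        // effective index of the microwire
    const double lambdaInitial = 1574.4e-9;   // Bragg wavelength at start of exposure [m]
    const double lambdaCurrent = 1571.5e-9;   // Bragg wavelength after exposure [m]
    const double bandwidth     = 1.5e-9;      // width of the Bragg resonance [m] (assumed)
    const double L             = 1.0e-3;      // grating length [m]

    const double dnDC  = n0 * (lambdaCurrent - lambdaInitial) / lambdaInitial;  // negative: index decreases
    const double dnAC  = n0 * bandwidth / lambdaCurrent;
    const double kappa = pi * dnAC / lambdaCurrent;
    std::printf("dn_DC = %.2e, dn_AC = %.2e, kappa*L = %.2f\n", dnDC, dnAC, kappa * L);
    return 0;
}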
In conclusion, we have fabricated the first Bragg gratings in chalcogenide microwires with sub-wavelength diameters. The transmission spectrum shows a -8 dB dip at λ = 1574 nm within 3 minutes of exposure with a 3 mW laser interference pattern at a wavelength of 633 nm. The Bragg grating dip shifts to 1571.5 nm after 30 minutes, while growing to -40 dB. The observation of the transmission spectrum profile during exposure and subsequent 3 weeks of annealing reveals that the refractive index of As2Se3 decreases under exposure to 633 nm light and that the grating strength remains stable. This device will find applications in sensing and nonlinear devices and for mid-infrared light processing. R. A. gratefully acknowledges fruitful discussions with Prof. Suzanne Lacroix. This work was supported by the Fonds Québecois pour la Recherche sur la Nature et les Technologies (FQRNT). | 2018-04-03T01:40:17.715Z | 2011-05-26T00:00:00.000 | {
"year": 2011,
"sha1": "8909b32f20505f0b281135c3aa54d095c514fe9f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1105.5400",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8909b32f20505f0b281135c3aa54d095c514fe9f",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Materials Science"
]
} |
9313487 | pes2o/s2orc | v3-fos-license | Post-transcriptional gene regulation: From genome-wide studies to principles
Abstract. Post-transcriptional regulation of gene expression plays important roles in diverse cellular processes such as development, metabolism and cancer progression. Whereas many classical studies explored the mechanistics and physiological impact on specific mRNA substrates, the recent development of genome-wide analysis tools enables the study of post-transcriptional gene regulation on a global scale. Importantly, these studies revealed distinct programs of RNA regulation, suggesting a complex and versatile post-transcriptional regulatory network. This network is controlled by specific RNA-binding proteins and/or non-coding RNAs, which bind to specific sequence or structural elements in the RNAs and thereby regulate subsets of mRNAs that partly encode functionally related proteins. It will be a future challenge to link the spectra of targets for RNA-binding proteins to post-transcriptional regulatory programs and to reveal its physiological implications.
these diverse steps in the gene expression program, the recent development of genome-wide analysis tools like DNA microarrays allowed fundamental new insights into the systems architecture of gene regulatory programs. For instance, DNA microarrays have been extensively used to study transcriptional programs by comparing steady-state RNA levels between diverse cell types and stages, and by the mapping of binding sites for DNA-associated proteins through chromatin immunoprecipitation (so-called ChIP-CHIP assays [9,10]). Integration of these data allowed the description of complex transcriptional regulatory networks, involving large sets of genes that control coherent global responses in physiological and developmental programs [11][12][13]. In contrast, less is known about the systems architecture that underlies the post-transcriptional steps in the gene expression program (although many RNA regulatory processes also occur co-transcriptionally, we further classify them as post-transcriptional for simplicity). Considering the large number of mRNA molecules in the cell -ranging from 15 000 to 150 000 mRNA molecules in yeast and mammals, respectively -it is rational to assume that the location, activity, and fates of these RNAs is not left to chance but is highly coordinated and regulated by an elaborate system. Such a post-transcriptional regulatory system may be controlled by the hundreds of RBPs and non-coding RNAs (e.g. microRNAs) that are encoded in eukaryotic genomes, possibly defining specific fates of each RNA by the combinatorial binding of distinct groups of RBPs [14][15][16]. Here, we summarize recent work that applied genomic tools to decipher the principles and logic of post-transcriptional regulatory systems. We focus on studies considering the localization, translation and decay of mRNAs in eukaryotes. On one hand, this includes investigations to globally map post-transcrip-tional regulatory programs to understand their extent, the underlying principles and conservation during evolution. On the other hand, it concerns investigations on the mediators or nodes of these programs, which involves the characterization of RBPs and the systematic identification of their RNA targets (Fig. 2).
RNA localization
RNA localization generally refers to the transport or enrichment of subsets of mRNAs to specific subcellular regions. RNA localization can be achieved passively by local protection from degradation or through the trapping/anchoring at specific cellular locations. Moreover, asymmetric distribution of RNA can also be established by the active transport of RNAs via RBP-motor protein complexes [5,17]. Here, we discuss studies that systematically mapped RNA distribution to subcellular structures or organelles, and then refer to investigations aimed at globally identifying localized mRNAs mediated through active mRNA transport by RBPs. In a pioneering study by Pat Brown and colleagues [18], mRNA species bound to membrane-associated ribosomes were separated from free cytosolic ribosomes by equilibrium density centrifugation in a sucrose gradient, and the distribution of transcripts in the fractions was quantified by comparative DNA microarray analysis. As expected, transcripts known to encode secreted or membrane-associated proteins were enriched in the membrane-bound fraction, whereas those known to encode cytoplasmic or nuclear proteins were preferentially enriched in the fractions containing mRNAs associated with cytoplasmic ribosomes. However, transcripts for more than 300 genes in the yeast Saccharomyces cerevisiae were found in the membrane fraction coding for previously unrecognized membrane or secreted proteins. Rather unexpectedly, among these was also the message for ASH1, coding for a well-known transcriptional repressor, suggesting alternative signals for membrane association [18]. Similarly, application of this method to map mRNA distributions in the plant Arabidopsis thaliana allowed the classification of 300 previously unknown transcripts as secreted or membrane-associated proteins [19]. A recent extension of this approach to eleven different human cell lines provided a detailed catalog containing more than 5000 previously uncharacterized membrane-associated and 6000 cytoplasmic/nuclear proteins at high confidence levels [20]. Strikingly, this analysis predicts that 44 % of all human genes encode membrane-associated or secreted proteins, exceeding previous estimates ranging from 15 % to 30 %. In addition, the comparison of this catalog to data obtained from hundreds of DNA microarray profiles from tumors and normal tissues allowed the identification of candidate genes that are highly overexpressed in tumors and, hence, could be particularly good candidates for diagnostic tests or molecular therapies [20]. Claude Jacq's lab applied a subcellular fractionation approach to determine transcripts associated with free and mitochondrion-associated ribosomes in the yeast S. cerevisiae. Besides the mRNA for ATP14, which was previously known to localize in the vicinity of mitochondria [21], nuclear transcripts for diverse mitochondrial proteins were enriched in the mitochondrial fraction. Interestingly, two characteristics correlated with this mRNA localization: the phylogenetic origin and the length of the genes. mRNAs enriched in the mitochondrial fraction were preferentially longer (as deduced from the average length of the encoded proteins) and originate from genes with bacterial homologues, whereas mRNAs in free cytosolic polysomes were shorter and of eukaryotic origin [22,23]. Possibly, such coordinate localization of groups of mRNAs could allow oriented access for controlling their fates. This may also apply to other cellular compartments.
For instance, a low-density array study revealed that 22 out of 649 analyzed transcripts were enriched in the cytoskeleton fraction relative to the cytosolic fraction - most of these encoding ribosomal proteins or structural proteins that interact with the cytoskeleton [24,25]. In polarized cells like neurons, mRNA localization has major physiological implications. In dendrites, RNA transport and subsequent local protein synthesis are thought to influence experience-based synaptic plasticity and long-term memory formation; in axons, local translation modifies axon guidance and synapse formation [26,27]. However, to date there are only a handful of well-characterized examples of localized neuronal mRNAs, among them the messages coding for microtubule-associated protein 2 (MAP2), the α-subunit of a calmodulin-dependent protein kinase (αCaMKII), brain-derived neurotrophic factor (BDNF), and activity-regulated cytoskeletal-related protein (Arc) [5,26].
Figure 2. (a) Cell extracts are fractionated through a sucrose-density gradient and the absorbance at 254 nm is monitored. RNA is isolated from fractions containing free RNA and ribosomal subunits, monosomes (80S) and polysomes, and analyzed with DNA microarrays. The relative position of a message in this profile is an indicator of its translational activity. (b) Systematic identification of RNAs associated with specific RNA-binding proteins. In this so-called ribonomics approach, RBPs are immunoprecipitated or affinity-purified via a tag from cellular extracts. RNAs associated with RBPs are isolated, cDNA copies are fluorescently labeled and hybridized to DNA microarrays. The Cy5/Cy3 fluorescence ratio for each locus reflects its enrichment by affinity for the cognate protein.
Therefore, several genomics-based approaches have been undertaken to identify novel localized transcripts. For example, Matsumoto et al.
[28] fractionated brain tissue and isolated RNA from the heavy portion of polysomes and synaptosomes to provide a list of potentially dendritic mRNAs that undergo localized translation. Interestingly, the induction of neural activity by an electroconvulsive shock triggered a redistribution of the dendritic transcriptome, which may trigger changes in the translatability of this transcriptome, suggesting complex mechanisms of local translation in response to synaptic inputs [28] (Fig. 2A). The hundreds of potentially localized mRNAs in neurons now await confirmation by in situ hybridization and exploration as to whether and how they may be regulated through activating stimuli, such as neurotransmitter release.
To date, more than 100 mRNAs are known to undergo active mRNA transport in diverse organisms [17,29].
In neurons, mRNAs are transported over long distances in a microtubule-dependent manner in the form of large granules consisting of RNA-binding proteins, ribosomes and translation factors [27,30]. Several RBPs associated with neuronal RNA transport have been identified, such as zipcode-binding proteins (ZBP1,2; named after their ability to bind to a conserved 54-nucleotide element in the 3′-UTR of the β-actin mRNA known as the zipcode), Staufen, hnRNPA2, cytoplasmic polyadenylation protein (CPEB) and members of the familial mental retardation proteins (FMRPs). At least for one of these RBPs, FMRP, a systematic gene array-based screen was undertaken to identify the mRNAs that are transported and possibly regulated by this protein [31]. Using a ribonomics approach [32], which involved the immunoprecipitation or affinity isolation of RBPs followed by the identification of bound RNAs with DNA microarrays [32] (Fig. 2B), the authors immunopurified the protein from mouse brain tissues and found ~4 % of all mRNAs (435 messages) associated with FMRP. In addition, they compared the mRNA profiles of polyribosomes between normal human cells and cells derived from fragile X syndrome patients, identifying over 200 messages with altered association that are hence potentially subject to translational regulation (Fig. 2A). Notably, nearly 70 % of the homologous messages found in both studies had a G-quartet structure, which was demonstrated as an in vitro FMRP target [31]. These data provided a good starting point for further investigations on the most critical targets involved in fragile X pathophysiology and, possibly, on other related cognitive diseases.
Probably the best-studied example for actin-dependent RNA transport concerns ASH1 mRNA localization to the bud tip of yeast cells during cell division. ASH1 codes for a transcriptional repressor repressing mating type switching in daughter cells [29]. ASH1 mRNA is bound by She2p, an RBP tethered to the myosin motor protein Myo4p via the adaptor protein She3p. This RNA-protein complex travels along actin cables to the emerging bud for local protein synthesis.
To identify other localized mRNAs, affinity purification of components of the She complex followed by the analysis of bound mRNA with microarrays was combined with a robust reporter system for in vivo visualization as a secondary screen [33,34]. This analysis revealed 23 additional transcripts that are localized to the bud-tip and encode a wide variety of proteins, several involved in stress responses and cell wall maintenance [33]. These results reveal an unanticipated widespread use of RNA transport in budding yeast -possibly providing the daughter cells with a favorable start-up package.
In conclusion, the few studies that investigated the spatial distribution of mRNAs in the cell on a global level challenge the long-standing assumption of a rather unorganized pool of mRNAs that randomly diffuse in the cytoplasm to be eventually translated. Possibly, many mRNAs may be spatially organized even in nonpolarized cells for local translation or decay in processing (P) bodies. Further applications of both subcellular fractionation techniques and ribonomics approaches will certainly reveal a more comprehensive picture of the spatial arrangement of RNAs in cells.
Regulation of translation
Translational regulation has essential roles in development, oncogenesis and synaptic plasticity [35][36][37]. It concerns the differential recruitment of mRNA species to the ribosome for protein synthesis, which results in a lack of correlation between the relative amounts of mRNA and the amount of the encoded protein.
In an innovative study, the relative contribution of transcriptional and translational regulation in yeast was measured using a large-scale absolute protein expression measurement called APEX (Absolute Protein Expression Index), which relies upon observed peptide counts from mass spectrometry [38]. Most (73 %) of the variance in protein abundance can be explained by mRNA abundance, which is log-normally distributed around an average of 5600 proteins per mRNA molecule. This indicates that the abundance of most proteins is set per mRNA molecule; however, one third of the mRNAs must be regulated at additional levels including translation and/or protein turnover. In mammalian cells, the fraction of differentially expressed messages may be considerably higher, ranging from 60 % to 80 %, indicating that gene expression of most messages is heavily controlled at diverse levels [39].
Interestingly, it has recently become apparent that miRNA-and RBP-mediated transla-tional regulation may collaborate or compete on specific mRNA substrates, suggesting interconnections between these different modes of translational regulation [45].
Genome-wide analysis of translational regulation. A reliable measure for translation of cellular mRNA is the degree of its association with ribosomes. Since the rate of initiation usually limits translation, most translational responses will alter the ribosome density on a given mRNA. Actively translated mRNAs are typically bound by several ribosomes (polysomes) and can be separated from the small (40S) and the large (60S) ribosomal subunits and the 80S monosomes by sucrose gradient centrifugation ( Fig. 2A). In classical experiments, total RNA was isolated from fractions of the polysomal gradient and assayed for the mRNA of interest by Northern blot analysis. Several laboratories have further extended this technique using DNA microarray technology to perform genome-wide analysis of mRNAs in polysomes in yeast, Drosophila and mammals [46]. The laboratories of Pat Brown and Daniel Herschlag performed a high resolution translation state analysis in rapidly growing yeast cells, providing profiles for mRNA-ribosome association for thousands of genes [47]. Based on these data, they calculated the ribosome occupancy (fraction of a specific mRNA associated with ribosomes), the ribosome density and the translation rate for each expressed mRNA. The average occupancy was calculated at 71 %, indicating that most mRNAs are likely engaged in active translation. However, about 100 mRNAs showed only weak association with ribosomes and may therefore be considered as potential candidates for translation on demand. The average ribosome density was found to be 156 nucleotides per ribosome, which is about one fifth of the maximal packing density, supporting the premise that translation initiation is the rate-limiting step for protein synthesis. Surprisingly, the ORF length appears to be a major factor determining the ribosome density, which is expressed through an inverse correlation between the ORF length and the ribosome density in diverse species [47][48][49].
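For illustration, the two quantities can be computed as sketched below (our example; the fraction signals, ORF length and ribosome count are invented placeholders):

#include <cstdio>

// Ribosome occupancy and density for one transcript, derived from pooled
// polysomal/non-polysomal microarray signals and the ORF length.
int main() {
    const double polysomalSignal = 8.2;   // signal in polysome-containing fractions
    const double freeSignal      = 1.8;   // signal in free/monosome fractions

    // Occupancy: fraction of the transcript's molecules engaged with ribosomes.
    const double occupancy = polysomalSignal / (polysomalSignal + freeSignal);

    // Density: nucleotides of ORF per bound ribosome, from the average number
    // of ribosomes per transcript (e.g. read off the peak polysome fraction).
    const double orfLengthNt      = 1248.0;
    const double ribosomesPerMrna = 8.0;
    const double ntPerRibosome    = orfLengthNt / ribosomesPerMrna;

    std::printf("occupancy = %.0f %%, density = one ribosome per %.0f nt\n",
                occupancy * 100.0, ntPerRibosome);
    return 0;
}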
Since poly(A) tail length affects translational efficiencies and mRNA stability, two recent studies systematically addressed its length in yeast [48,50]. In a procedure called polyadenylation state array analysis (PASTA), mRNAs were captured with poly(U) Sepharose columns and differentially eluted by increasing temperature. The mRNAs with short tails elute first and those with long tails last. RNA fractions were analyzed with DNA microarrays to identify groups of mRNAs with similar poly(A) tail lengths. In the yeast S. cerevisiae, mRNAs coding for functionally or cytotopically related proteins could be attributed to groups with similar tail lengths. Long poly(A) tails were found among mRNAs coding for cytoplasmic ribosomal proteins, whereas short tails were enriched for DNA/Ty elements and among mRNAs coding for nucleolar proteins involved in ribosome synthesis, and proteins with cell cycle-related functions [50]. The comparison of the data with other genome-wide analyses revealed that poly(A) tail length positively correlates with ribosome density and to some extent with mRNA abundance, and it negatively correlates with ORF and UTR length [51]. The poly(A) tail length, and hence ribosome occupancy of messages, correlates with the degree of association with the poly(A)-binding protein (Pab1p). This provides global support for the concept that long poly(A) tails stimulate translation via Pab1p and eIF4G. Interestingly, poly(A) tail length does not correlate with mRNA decay rates. Therefore, it appears that translation rates are not directly coupled to mRNA decay control, although poly(A) shortening is a prerequisite for mRNA decay [52]. Possibly, processes acting on oligo(A)-tailed intermediates may limit the decay rates of a large number of yeast mRNAs. A congruent study performed in fission yeast S. pombe monitored translational status, poly(A) tail length, mRNA abundance, mRNA decay rates and RNA polymerase II association under identical conditions [48]. Functional groupings of mRNAs with respect to their translational efficiencies, length and abundance were identified, with shorter and more abundant mRNAs having longer poly(A) tails. Notably, ORF length correlated best with ribosome density and mRNA abundance with ribosome occupancy. In conclusion, both studies revealed similar principles that may organize translation and therefore, these may be evolutionarily conserved. Further studies in other organisms will reveal whether these principles are universally conserved and possibly affected in disease. Several studies were aimed at the systematic identification of translationally regulated messages after subjecting cells to stress and other environmental stimuli. They applied a low-resolution profile analysis, where the mRNA contents of high sucrose gradient fractions (polysomes) were compared with fractions from the low sucrose gradient (the pool of non-translated mRNAs) (Fig. 2A). In parallel, changes in the levels of total RNA were measured to study the relation between transcription/decay and translationally regulated messages. In yeast, global effects on translation were first studied for the rapid transfer of cells from a fermentable to a non-fermentable carbon source mimicking glucose starvation [53], followed by heat-shock response and rapamycin treatment [54], amino acid starvation, butanol addition (an end product of amino acid breakdown) [55], and application of hydrogen peroxide to induce oxidative stress [56].
First, it should be noted that these relatively harsh treatments induced global translation inhibition that goes along with a decrease in cell growth. Although this global translation inhibition is triggered by similar signaling pathways like phosphorylation of eIF2a, the various forms of stress affected quite different sets of mRNAs. Amino acid and glucose starvation differentially regulated the translation of up to 20 % of all mRNAs, whereas more mild treatments, such as butanol and peroxides, affected less than 4 % of transcripts. Treatment of cells with heat/rapamycin or nutrient removal (amino acid and glucose starvation) co-activated similar translational and transcriptional programs. Here, regulation at the translational level often reflects a magnification of the transcriptional activity - an effect that has been termed potentiation [54]. In contrast, the addition of butanol or peroxide provoked no potentiation, but instead changed the abundance and translation rates of different sets of mRNAs. This could, at least in part, be explained by the recruitment of stored mRNA for translation on demand. Nevertheless, in all cases, the specific sets of mRNAs that undergo treatment-specific regulation appear to share functional themes that can be attributed to a logical response of the cells to altered physiological circumstances. For instance, mRNAs coding for proteins related to sugar metabolism and transport, such as hexose transporters, remain associated with polysomes during glucose starvation [53]; rapamycin treatment, which blocks the target of rapamycin (TOR) pathway controlling cell growth, led to a decrease of nearly all yeast mRNAs coding for cytoplasmic ribosomal proteins, whereas mRNAs for proteins acting in the nitrogen discrimination pathway were increased [54]; amino acid starvation strongly coregulated or potentiated transcripts encoding permeases, proteases and proteins involved in degradation pathways, which may reflect an early amino acid scavenging response to starvation [55]. Another interesting aspect of these studies is that the concentration of an applied compound may significantly matter for the outcome. Shenton et al. [56] showed that low concentrations of peroxide (0.2 mM H2O2) induced the translation of mRNAs coding for antioxidants, cellular transporters and proteins involved in diverse intermediary metabolism, which may reflect the need for metabolic reconfiguration. A tenfold higher concentration of peroxide (2 mM) resulted in the up-regulation of genes involved in ribosome biogenesis and ribosomal RNA processing, possibly reflecting the need to repair factors for efficient protein synthesis. On the other hand, many translationally repressed mRNAs showed increased steady-state (total RNA) levels. Again it was postulated that this group of messages may represent an mRNA store that could become rapidly activated following relief of the stress condition. It will certainly be interesting to further study whether other mild treatments with pharmacological compounds activate dose-dependent non-linear effects via distinct regulatory programs. If so, this may become of great medical relevance as diverse drugs are known to act differentially depending on their dose. Finally, there are recurring and intriguing observations that mRNAs coding for cytoplasmic ribosomes generally appear to undergo outstanding and strong coregulation.
Amino acid and glucose starvation coordinately repress the abundance and ribosome association of these transcripts very rapidly, whereas the addition of butanol or oxidative stress even leads to the opposite effect - translational activation - which possibly reflects the requirement of cells to replace ribosomal proteins and rRNA that became damaged by free radicals or other toxic products. Therefore, besides tight transcriptional control of these messages, they also undergo decent post-transcriptional control at diverse levels and hence, may represent the most tightly controlled genes in eukaryotic cells [57]. First studies to investigate global aspects of translational regulation in mammalian cells focused on IRES-dependent translation in poliovirus-infected cells [58], and the reaction of mitogenically activated fibroblasts [59], providing early proof-of-principle for the methodology introduced above that involves polysomal fractionation followed by DNA microarray analysis of RNA contents (Fig. 2A). A further landmark study by Holland and colleagues [60] analyzed polysomal profiles of murine cell lines after blocking oncogenic Ras and Akt signaling. Apparently, these pathways regulate the recruitment of specific mRNAs to ribosomes to a far greater extent than de novo synthesis of mRNAs by transcription and thus, the Ras and Akt signaling pathways seem to have a more pronounced effect on translational versus transcriptional regulation. The authors postulated that the immediate and direct inductive oncogenic effect of these signaling pathways could be largely achieved through translational activation. The differences seen in RNA abundances during chronic signaling alterations may be secondary to translational effects caused by mRNAs encoding transcription factors [60]. Similar studies on different cancer types may lead to the identification of potential markers and possibly reveal novel drug targets [37,61]. Moreover, a recent study identified specific subsets of mRNAs regulated by eIF4E overexpression, which is known to lead to tumor transformation. The authors postulated that down-regulation of eIF4E and its downstream targets may represent a potential therapeutic option for the development of novel anti-cancer drugs [62]. As seen for Ras/Akt activation, it is intriguing that changes at translation can even outperform changes at the steady-state mRNA level. This has also been noticed in a study analyzing radiation-induced changes in gene expression of human brain tumor cells or normal astrocytes. Ten times more genes (~15 %) were altered at the level of translation compared to the number of genes regulated at the level of transcription (~1.5 % of 7800 analyzed human genes) [63]. Only a few transcripts were commonly affected at both the transcriptional and translational levels, suggesting that the radiation-induced changes in transcription and translation are not coordinated. Those transcripts that were affected at translation fell into functional groups such as cell cycle, DNA replication and anti-apoptotic functions. This indicates that DNA damage affects post-transcriptional gene regulation of previously synthesized mRNAs, possibly enabling cells to repair DNA instead of being transcriptionally active [63]. Functional relations among messages were also recognized in a recent study performing translational profiling of mouse pancreatic β-cells in response to an acute increase in glucose concentration [64].
More than 300 transcripts (2 % of the analyzed genes) changed their association with polysomes more than 1.5-fold; most of them encoding proteins acting in metabolism or transcription. Notably, this set of messages is related to the group of genes translationally altered during glucose starvation in the yeast S. cerevisiae [53]. Therefore, in mammals and yeast, it appears that functionally related messages may be coordinately regulated at the translational level. It is possible that a comparative analysis in different species may allow evolutionarily conserved translational regulatory programs to be deciphered, which are at the moment still rather speculative. Whereas concomitant changes in RNA abundance and translational rates were rarely detected during the radiation response [63], a recent study identifying mRNAs that remain associated with polysomes during hypoxia in transformed prostate cancer cells (a condition that tumors prevent through the induction of angiogenesis) found both homodirectionally/potentiated mRNAs and distinctively regulated messages [65]. After prolonged exposure of PC-3 cells to low oxygen levels, global translation was reduced by half; however, 104 mRNAs, representing about 0.5 % of all analyzed features, became more associated with hypoxic polysomes compared to normoxic ones. Among these, 71 mRNAs were similarly increased in hypoxic polysomes compared with total RNA levels, representing homodirectional changes; 33 mRNAs were translationally enriched, some of them potentiated (11 of those coding for ribosomal proteins) [65]. In summary, the common principles of translational regulation that emerge from genome-scale studies in diverse eukaryotes suggest a complex but coordinate system of regulation. It must be triggered by a variety of factors that go well beyond the described pathways that influence global translation. In the future, there will certainly be an increasing number of studies to decipher translational regulatory programs in cancer, neurogenesis and development. Intriguingly, despite the impact of translational regulation during development, only one recent study systematically investigated translational programs during early embryogenesis in the fruit fly Drosophila melanogaster [49]. The mapping of translational programs in diverse species will likely reveal key regulatory networks and how these are affected in disease.
Regulation of mRNA decay
Steady-state mRNA levels are a result of both RNA synthesis and degradation, which are dynamically controlled and can vary up to 100-fold during the cell cycle or cellular differentiation. In eubacteria like Escherichia coli, mRNAs are generally degraded by endonucleolytic cleavage, followed by 3'-to-5' exonucleolytic RNA decay through the so-called RNA degradosome consisting of ribonuclease E (RNase E), the 3'-exoribonuclease polynucleotide phosphorylase (PNPase), the RNA helicase RhlB and enolase [66]. In eukaryotes, most cytoplasmic mRNA degradation begins with shortening of the poly(A) tail by deadenylases, followed by removal of the 5' cap structure by the decapping enzymes, Dcp1 and Dcp2 [7]. The decapped intermediates are then degraded either by an exonuclease (Xrn1p) in the 5' to 3' direction, or by the cytoplasmic exosome in the 3' to 5' direction. In addition, eukaryotes possess specialized pathways that target mRNAs containing premature termination codons (nonsense-mediated decay pathway, NMD), mRNAs that lack translational termination codons (non-stop decay pathway, NSD), or mRNAs that bear stalled ribosomes (no-go decay). Degradation of specific mRNAs can also be initiated by endonucleolytic cleavage through sequence-specific endonucleases, or in response to miRNAs or siRNAs [7]. Numerous cis-acting elements located in the 5'-UTR, the coding sequence (CDS) or the 3'-UTR of mRNAs can function as binding sites for RNA-binding proteins that regulate decay [67]. For instance, AU-rich elements (AREs), conserved sequences found in the 3'-UTR of nearly 5 % of all human genes, interact with specific ARE-binding proteins that stabilize the RNA or promote mRNA degradation by recruiting the RNA decay machinery.
In eubacteria and archaea, mRNA decay proceeds rapidly, with a median half-life of ~5 min. Two main characteristics seem to be evolutionarily conserved: adjusted decay rates for functionally related groups of messages, and the inverse correlation between the half-lives and the relative abundances of transcripts. As seen in the archaebacterium Sulfolobus, transcripts encoding proteins involved in growth-related processes, such as transcription, tRNA synthesis, translation and energy production, generally decay rapidly (t1/2 around 4 min), whereas those encoding products necessary for maintaining cellular homeostasis are relatively stable (t1/2 > 9 min). Short half-lives of highly abundant mRNAs imply high turnover rates and thus enable cells to rapidly reprogram gene expression upon changes in environmental conditions [68,71]. Interestingly, the half-life and abundance of distinct classes of transcripts appear to depend on particular RNA degradosome components. This finding suggests the existence of structural features or biochemical factors that distinguish different classes of mRNA targeted for degradation [75]. This may also apply to specific growth phases, as seen in Streptococcus where certain mRNAs become sensitive to stationary-phase-induced PNPase [76]. Evidence for the existence of coordinated RNA decay regulons in eukaryotes was obtained from global investigation of mRNA decay profiles in yeast and human cells. Here, transcription was shut off using cells that bear a temperature-sensitive allele of RNA polymerase II or through chemical inactivation, and the decay of thousands of genes was monitored with DNA microarrays over a time course [52,72,74]. Strikingly, mRNA half-lives among components of macromolecular complexes in yeast were significantly correlated [52]. For instance, the transcripts for the four histone mRNAs were among the least stable with closely matched, rapid decay rates (t1/2 = 7 ± 2 min); the 131 mRNAs coding for ribosomal proteins had average decay rates (t1/2 = 22 ± 6 min), and the four components of the trehalose phosphate synthetase complex were amongst the longest lived messages (t1/2 = 105 ± 15 min) [52]. The examination of decay rates in human cells revealed similar mRNA-turnover patterns among orthologous genes, indicating the presence of evolutionarily conserved programs of RNA stability control [74]. Transcripts encoding metabolic proteins have a tendency for longer half-lives, whereas transcripts encoding transcription factors or ribosome biogenesis factors are relatively short lived [52,72]. Interestingly, it appears to be a universal feature that average transcript half-lives are roughly proportional to the length of the cell cycle: cell-cycle lengths of 20, 90, and 600 min correspond to median mRNA half-lives of 5, 21 and 600 min for E. coli, S. cerevisiae and human cells, respectively [74]. DNA microarrays have also been applied to investigate specialized decay pathways, such as NMD and nuclear exosome-mediated decay. Mutants for the NMD factors Upf1, Nmd2 and Upf3 alter the mRNA levels of an overlapping set of ~600 messages (10 % of the transcriptome) in yeast [77,78]. However, mRNA levels in nmd mutant strains may also be the result of indirect effects, because transcription factors are also targeted through NMD; therefore, Guan et al. [78] dissected direct from indirect targets of NMD by profiling global RNA decay rates in nmd mutant strains.
About half (300 transcripts) are likely to be direct NMD targets decayed through 5' to 3' degradation by Xrn1p. NMD-sensitive transcripts tend to be both non-abundant and short-lived, with one third of them coding for proteins that are connected to two central themes: first, replication and maintenance of telomeres, chromatin-mediated silencing and post-replication events related to the transmission of chromosomes during the cell division cycle; and, second, synthesis and breakdown of plasma membrane components, including transport of macromolecules and nutrients, and cell wall proteins [78]. Genome-wide analyses have also identified potential RNA substrates for the nuclear exosome [79]. More than 300 mRNAs showed altered expression levels in different exosome mutants. Several genes, located downstream of independently transcribed snoRNA genes, were overexpressed in exosome mutants. Further analyses suggested that many snoRNA and snRNA genes are inefficiently terminated. Such read-through transcripts into downstream ORFs are normally rapidly degraded by the exosome and, hence, this could explain their enrichment in exosome mutants. A couple of studies investigated the implications of specific RBPs for RNA turnover. Global mRNA turnover in mutant cells was monitored through gene expression analysis, expecting adverse effects on subsets of messages. Grigull et al. [72] examined the effects of deletions of genes encoding the deadenylase components Ccr4p and Pan2p and the putative RNA-binding proteins Pub1p and Puf4p after inhibition of transcription by chemicals and/or heat stress. This examination showed that Ccr4p, the major yeast mRNA deadenylase, contributes to the degradation of transcripts encoding both ribosomal proteins and rRNA synthesis and ribosome assembly factors, largely mediating the transcriptional response to heat stress. Pan2p and Puf4p also participate in degradation of these mRNAs, while Pub1p preferentially stabilized transcripts encoding ribosomal proteins. Notably, the Puf4-affected genes correlate with biochemically identified targets of Puf4p [80]. A second study focused on Pub1p, a yeast RNA-binding protein thought to destabilize mRNAs through binding to AU-rich sequences in 3'-UTRs [81]. Global decay profiles in pub1 mutants revealed a significant destabilization of transcripts encoding proteins involved in ribosome biogenesis and cellular metabolism, whereas genes involved in transporter activity demonstrated association with the protein but displayed no measurable changes in transcript stability [81]. Therefore, in this case, the direct targets only partially related to the functional outcome under specific physiological conditions. This could be mediated through additional RNA-protein interactions forming a network through combinatorial binding. Finally, Foat et al. [82] combined a computational and experimental approach to identify transcripts that are destabilized under specific environmental conditions (sugar sources) by yeast mRNA stability regulators. For Puf3p, which was known to primarily associate with mRNAs coding for mitochondrial proteins [80], they computationally inferred and experimentally verified target destabilization in the presence of glucose, as some of these mRNAs were up-regulated in puf3 mutants grown in a non-repressing carbon source, but down-regulated in a repressing carbon source [82]. Mammalian cells have evolved a variety of specific mRNA decay programs that play important roles in medically relevant processes such as inflammation, hypoxia and cancer pathogenesis.
For example, the expression of diverse cytokines is differentially regulated after T cell activation, and glucocorticoids inhibit inflammation through destabilization of proinflammatory transcripts like cyclooxygenase-2 [67]. Global mRNA decay profiles revealed mRNAs that appear specifically regulated by these programs. For instance, in resting T lymphocytes, the majority of transcripts are stable with half-lives of more than 6 h, but a small proportion (~3 %) of expressed transcripts exhibits rapid decay with half-lives of less than 45 min [83]. These short-lived transcripts encode a variety of regulatory proteins such as cell surface receptors, transcription factors and regulators of cell growth and apoptosis.
Su et al. [84] focused on the massive degradation of transcripts occurring during meiotic arrest at the germinal-vesicle (GV) stage, and found that degradation is apparently not promiscuous but preferentially affects specific groups of messages. In particular, transcripts involved in processes associated with meiotic arrest at the GV stage and the progression of oocyte maturation, such as oxidative phosphorylation, energy production, and protein synthesis, were rapidly degraded, whereas those encoding participants in signaling pathways maintaining the oocyte in the MII-arrested state were among the most stable. In conclusion, these studies exemplify that stimulus-dependent transcript destabilization is an important mechanism for controlling gene expression in a coordinated manner. Many activation-induced transcripts contain AREs in the 3'-UTR [85]. The presence of these motifs in mRNAs often correlates with shifts in the distribution of decay rates; however, their sole presence cannot reliably predict turnover behavior. ARE-binding proteins may therefore differentially determine the fate of mRNA depending on the cellular and environmental context [85]. Tristetraprolin (TTP), a well-known ARE-binding protein, has several characterized physiological target mRNAs including tumor necrosis factor (TNF)-α, granulocyte-macrophage colony-stimulating factor, and interleukin-2. Microarray analysis of RNA obtained from wild-type and TTP-deficient fibroblast cell lines identified 250 transcripts with altered decay rates, some of them containing conserved TTP binding sites [86]. The RNA-binding protein T-cell intracellular antigen 1 (TIA-1) functions as a post-transcriptional regulator of gene expression and aggregates to form stress granules following cellular damage. TIA-1 regulates mRNAs for proteins involved in inflammatory responses such as TNF-α and cyclooxygenase 2. Immunoprecipitation (IP) of TIA-1-RNA complexes, followed by microarray-based identification and computational analysis of bound transcripts, revealed at least 300 potential targets, many of them bearing a U-rich motif [87].
In conclusion, global analysis of mRNA turnover underlines the importance of RNA decay in the control of mRNA levels and strongly suggests the presence of specific RNA turnover programs. mRNA decay certainly involves combinatorial interactions of RBPs, enabling stimulus-dependent decay programs through the integration of diverse signals. Besides temporal control, RNA decay may also occur in a spatially restricted manner, as seen with Drosophila IRE1, a protein activated during the unfolded protein response in the endoplasmic reticulum that directs the decay of a specific subset of mRNAs, many of which encode plasma-membrane proteins [88]. Moreover, still rather unexplored is the role of P-bodies and stress granules as storage places for untranslated mRNA and sites of mRNA degradation. Perhaps different subtypes of P-bodies exist for subgroups of RNAs? At least the recent observation that ARE-containing mRNAs are localized to specific cytoplasmic granular structures containing exosome subunits, distinct from P-bodies or stress granules, supports the idea of specialized structures for storage or degradation of distinct groups of mRNAs [89].
Identification of specific RNA-protein interactions
Putative RNA-binding proteins comprise 3-11 % of the proteomes in bacteria, archaea and eukaryotes [90]. The large number of RBPs in all kingdoms of life may merely reflect the ancient origin of RNA regulation, which is possibly the most evolutionarily conserved part of cell physiology. RBPs often contain distinct RNA-binding domains that specifically interact with sequences or structural elements in the RNA. Approximately one hundred protein domains associated with RNA metabolism have been described to date, half of them believed to have originated at early stages in evolution, whereas others, such as the RNA recognition motif (RRM), are exclusively present in eukaryotes and therefore may have been acquired later in evolution [90].
A successful approach to globally identify the in vivo RNA targets of RBPs involves immunoprecipitation or affinity purification of epitope-tagged proteins followed by the analysis of associated RNAs with DNA microarrays or by sequencing (Fig. 2B). In a pioneering study, Keene and colleagues [91] used this ribonomics approach to study RNAs associated with three RBPs in a cancer cell line. Although low-density arrays were used to identify the bound mRNAs, each RBP was associated with a distinct subset of the mRNAs present in total cell lysate. Moreover, these subsets appeared to change after cells were induced to differentiate. These results led to the proposal that groups of mRNAs encoding functionally related proteins are organized as so-called post-transcriptional operons [92]. In analogy to prokaryotic operons, this model predicts that specific RBPs may coordinate groups of mRNAs coding for functionally related proteins in eukaryotes. Cis-acting elements in the mRNA may provide the means to mimic the coordinated regulatory advantages of clustering genes into polycistronic operons [16,92]. A prime example of the coordination of functionally related transcripts by specific RBPs is represented by the Pumilio-Fem-3 binding (PUF) proteins [80,93]. PUF proteins comprise a conserved family of structurally related RBPs that negatively regulate gene expression of specific mRNAs [94]. Applying DNA microarrays to identify their RNA targets revealed that each of the five yeast PUF proteins associated with distinct groups of 40 to 220 different mRNAs, with striking common themes in the functions and subcellular localization of the proteins they encode: Puf3p binds nearly exclusively to cytoplasmic mRNAs that encode mitochondrial proteins; Puf1p and Puf2p interact preferentially with mRNAs encoding membrane-associated proteins; Puf4p preferentially binds mRNAs encoding nucleolar ribosomal RNA-processing factors; and Puf5p is associated with mRNAs encoding chromatin modifiers and components of the spindle pole body [80]. The results were further corroborated by the identification of distinct sequence motifs in the 3'-untranslated regions of the mRNAs bound by Puf3, Puf4, and Puf5 proteins. A physiological relation between Puf3p and its mRNA targets has also been observed: as suggested from its association with mRNAs encoding mitochondrial proteins, puf3 mutant cells showed a slow-growth phenotype on non-fermentable carbon sources, indicative of a functional connection to mitochondrial physiology [80]. Genome-wide identification of RNAs associated with the orthologous PUF protein from Drosophila melanogaster, called PUMILIO, revealed distinct clusters of mRNAs in embryos and in ovaries of adult flies [93]. More than 1000 messages were significantly associated with the protein. Subgroups of these Pum-associated mRNAs had commonalities, such as function in the anterior-posterior patterning system, and the subunits of the vacuolar H+-ATPase. Moreover, a characteristic sequence motif was present in the 3'-UTRs of PUMILIO-bound mRNAs, resembling the one previously identified for the yeast Puf3 protein [93]. Hence, the data obtained from the yeast and Drosophila studies provided an additional source for considering their evolution. For instance, conservation of amino acid residues in the RNA-binding domain (the PUM-homology domain) between homologous PUF proteins correlated with identified core motifs in the 3'-UTRs of mRNA targets.
However, the proteins encoded by the mRNA targets appeared not to be particularly conserved. This discordance suggested that acquisition or loss of RBP binding motifs in the UTRs of genes may provide a surprisingly fluid evolutionary mechanism to modify post-transcriptional regulatory connections [93]. Ribonomic studies have now been conducted for more than 30 specific RBPs (Table 1). The results from these studies generally support and extend the proposed post-transcriptional operon model. Each of the analyzed RBPs has a unique RNA binding spectrum comprising 20-1000 distinct transcripts that often share functionally related themes. The spectra of targets overlap with those of other RBPs, suggesting combinatorial binding of RBPs. Occasionally, sequence or structural elements could be identified among mRNA targets using bioinformatics tools, and novel physiological consequences were discovered (e.g., [95]). The ribonomics approach has recently been implemented on the argonaute (Ago) protein family to discover novel mRNAs that potentially undergo miRNA-dependent regulation [96,97]. Although the number of detectable Ago-associated mRNAs was low (~90 messages) compared to the thousands of genes expected to undergo miRNA-dependent regulation, the comparison of Ago-associated mRNAs in wild-type and miRNA mutants may provide a tool to decipher miRNA-specific targets [97]. Besides specific RBPs, ribonomic approaches have also been applied to general RNA-binding proteins for the identification of messages expressed in particular tissues or cell types. Affinity-tagged poly(A) binding protein (PABP) was expressed from tissue-specific promoters to identify muscle- or ciliated sensory neuron-specific transcripts in the worm Caenorhabditis elegans [98,99], and mRNAs in photoreceptor cells of flies [100]. The method was also used to measure gene expression of endothelial cells that were co-cultured with breast tumor cells [101]. A similar approach with tagged ribosomal proteins may become another tool to determine gene expression in specific cells [102,103].
Final conclusions
The application of genomic tools to study post-transcriptional gene regulation suggests additional levels of coordination and regulation that are beyond the traditional view of equally treated cellular mRNAs that are similarly processed, exported, and eventually translated in the cytoplasm [14-16]. The decay, localization and translation of mRNA seem to undergo coordinate control by regulatory programs, which may be embedded in a multifaceted post-transcriptional regulatory system. The properties of this system are controlled by RNA-binding proteins or non-coding RNAs (e.g., microRNAs [104]) that coordinate functionally related sets of mRNAs through binding to sequence elements in the RNA. Considering the hundreds of RBPs encoded in eukaryotic genomes, post-transcriptional control may be comparable in its richness and complexity to transcriptional regulatory systems. This provides a means to link RNA regulation to other cellular regulons such as signal transduction pathways, allowing rapid and efficient reprogramming of gene expression in response to changing physiological conditions.
Further analysis of RBPs and their target RNAs may finally lead to a map of the proposed post-transcriptional regulatory system. However, besides the architecture, it will also be important to study the plasticity and dynamics of this regulation by measuring how it reacts in response to environmental or developmental changes, and how it is perturbed in certain diseases [35,105]. Finally, a major challenge will be to connect the different levels of gene expression systems through large-scale data integration [39]. | 2016-05-04T20:20:58.661Z | 2007-11-26T00:00:00.000 | {
"year": 2007,
"sha1": "e0befd15cc66eb378f895d0b41aa740cb80ba8ad",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00018-007-7447-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e0befd15cc66eb378f895d0b41aa740cb80ba8ad",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
52020212 | pes2o/s2orc | v3-fos-license | High prevalence of hypertension in an agricultural village in Madagascar
Elevated blood pressure presents a global health threat, with rates of hypertension increasing in low and middle-income countries. Lifestyle changes may be an important driver of these increases in blood pressure. Hypertension is particularly prevalent in African countries, though the majority of studies have focused on mainland Africa. We collected demographic and health data from 513 adults living in a community in rural Madagascar. We used generalized linear mixed models to assess body mass index (BMI), age, sex, and attributes related to household composition and lifestyle as predictors of blood pressure and hypertension. The prevalence of hypertension in this cohort was 49.1% (both sexes combined: N = 513; females: 50.3%, N = 290; males: 47.5%, N = 223). Blood pressure, as well as hypertensive state, was positively associated with age and BMI. Lifestyle and household factors had no significant relationships with blood pressure. The prevalence of hypertension was similar to that found in urban centers of other African countries, yet almost double what has been previously found in Madagascar. Future research should investigate the drivers of hypertension in rural communities worldwide, as well as the lifestyle, cultural, and genetic factors that underlie variation in hypertension across space and time.
Introduction
Elevated blood pressure affects millions of people around the world [1], with the World Health Organization identifying hypertension as a major global health challenge [2]. Globally, high blood pressure contributes to 92 million disability-adjusted life years (DALYs) and 7.6 million premature deaths [3]. In 2000, hypertension was responsible for 62% of stroke, 49% of ischemic heart disease, and almost 13% of all deaths worldwide [4]. Over ten years later, hypertension remains a leading global risk factor for associated cardiovascular diseases, ranking above tobacco smoking and household air pollution [5].
The prevalence of hypertension in low and middle-income countries (LMICs) is increasing [6,7]. In fact, approximately two-thirds of the global burden of hypertension is found in developing countries [4]. Compared to other regions of the world, hypertension is most prevalent in Africa [2], where nations have seen marked increases in hypertension over the past half century [8]. For example, a 2012 report documented a 24% increase in the prevalence of hypertension in sub-Saharan Africa from 1998 to 2003 [7]. While it is possible that increased monitoring and improved health data collection could explain the temporal increase in documented hypertension in LMICs, changes in risk factors for hypertension likely underlie some of the observed increase [2].
In LMICs, the prevalence of hypertension tends to be greater in urban areas than rural ones [8,9]. This difference can be attributed to a myriad of behavioral and lifestyle factors that influence obesity and stress, two known risk factors for hypertension [2,8-12]. For example, exposure to calorie-rich, processed foods, coupled with a decrease in physical activity, can contribute to weight gain and associated cardiovascular health risks. Similarly, people in urban settings are more likely to experience elevated stress levels related to aspects of industrialization, including increased crime and financial concerns [13]. It is well known that stress is linked to elevated blood pressure [14-16]. A stronger understanding, and ultimately control, of the risk factors for hypertension is important for monitoring a suite of associated public health challenges, as hypertension is a critical contributor to chronic kidney disease [17], cerebrovascular disease, and cardiovascular disease [18].
The relationship between changing lifestyles and health can be understood through the lens of evolutionary medicine, and more specifically, evolutionary mismatch: the concept that differences between the current environment and the one in which humans evolved have direct consequences for health [19]. In the context of hypertension in sub-Saharan Africa, a mismatch perspective may help explain why rural communities are susceptible to hypertension in response to elevated stress and changing diets and occupations [20]. Because the pace of lifestyle change in industrialized contexts is much faster than that of biological evolution, populations may become quickly mismatched to novel exposures as countries undergo economic development, the consequences of which are relevant to both global public health and medicine [21]. An evolutionary lens is therefore able to provide richer explanations for disparities in hypertension, such as the increased prevalence in urban populations compared to rural counterparts [13], which is an area that has been identified as requiring further investigation [9].
We investigated body mass index (BMI), age, sex, and blood pressure in an economically developing rural population in Madagascar. This region is increasingly shifting to a reliance on cash crops, such as vanilla, producing a cascade of factors that increase both stress and access to features of the post-industrial lifestyle, including alcohol and processed foods. More generally, Madagascar is interesting because of its unique ecological and genetic factors, including an admixture of individuals with African and Austronesian ancestry [22]. Thus, this study contributes to others that have focused on lifestyle differences in large urban centers and rural villages on mainland Africa [9,23], and adds to a smaller number of studies on Malagasy populations [24].
Building on previous research described above, we investigated predictors of blood pressure and hypertension in rural Madagascar. Focusing on key correlates in previous studies, we predicted that elevated blood pressure covaries positively with BMI and age [13,25], and that males have higher blood pressure than females [25]. We also investigated less commonly studied variables that may be relevant to transitions in this community, including the effects of behavior and stress, predicting that blood pressure covaries positively with tobacco use [13], alcohol use [2,26], and larger household size (a proxy for stress) [2].
Materials and methods
This study took place in Mandena, Madagascar (approximately 18°42'00" S, 47°50'00" E), an agricultural village of approximately 3,000 people in the SAVA region (an acronym capturing the names of its four major cities: Sambava, Antalaha, Vohémar, and Andapa). Rice production is central to subsistence agriculture in this population, and vanilla is generally sold as a cash crop.
Data were collected over three field seasons in Mandena: (1) Seven weeks from July to August 2015; (2) four weeks from July to August 2016; and (3) four weeks from May to June 2017. Importantly, data collection began at the start of vanilla season, a period when farmers are spending considerable amounts of time monitoring, harvesting, and preparing the crop for sale, including protecting their crops from theft in the field and processing harvested vanilla beans in the village. The SAVA region in particular is tightly connected to global vanilla distribution; as such, communities in the area experience marked seasonal and yearly fluctuations in crop yields and market prices, financial security (i.e., surplus of cash immediately following vanilla sales), and stress.
This study is nested within a larger project that utilized an evolutionary perspective to understand the relationship between changing lifestyles and health outcomes in this Malagasy village. As new projects were initiated, modifications were made to improve data collection and to collect new health metrics. As such, variation in cohort size and certain data collection procedures exists across the three field seasons, as documented below. All study procedures were approved by Duke University's Institutional Review Board (Protocol C0848) and by Malagasy health authorities.
The study included a total of 1,142 measurements of blood pressure and other health metrics from 513 adults enrolled over the three field seasons (223 men and 290 women; ages ranged from 18 to 89 years). During each field season, individuals first learned about the project through a town-hall style meeting led by the village president at a central building in Mandena. Interested individuals were encouraged to return to the building in the following days to enroll in the study. To maintain anonymity in the records, each participant was given a unique identification number that was used in all subsequent surveys.
Through a Malagasy translator, each participant provided informed written consent and completed a general health survey, coupled with measurements of height, weight, temperature, blood pressure, and heart rate. All blood pressure and heart rate measurements were taken with the Omron 10 Series Upper Arm Blood Pressure Monitor (Omron Healthcare, Inc.). Individuals were required to sit for at least five minutes before the first blood pressure reading, and at least five minutes separated a total of up to three different readings. We used a standard bathroom scale that was purchased in Madagascar to measure weight, and a tape measure adhered to a vertical wooden beam in the central building to measure height. Height and weight measurements were used to calculate BMI as weight in kilograms divided by the square of height in meters. Following standard guidelines, individuals were categorized as: underweight <18.5 kg/m²; normal ≥18.5 to <25 kg/m²; overweight 25-30 kg/m²; obese ≥30 kg/m² [27]. Any participants with clinically elevated measurements were instructed to speak with a Malagasy nurse who worked alongside our team in Mandena.
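To make the BMI computation explicit, the small sketch below applies the formula and the weight categories just described. It is an illustrative helper in Python, not the authors' code, and the example values are hypothetical.

```python
# Illustrative helper (not from the study) applying the BMI formula and the
# standard weight categories quoted above.
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

example = bmi(52.0, 1.60)                          # hypothetical participant, ~20.3 kg/m^2
print(round(example, 1), bmi_category(example))    # 20.3 normal
```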
Comparison of data collection across the three study years is shown in Table 1. In 2015, blood pressure was recorded once per participant; in 2016 and 2017, up to three readings were taken per participant. We recorded multiple measurements to assess changes in blood pressure over repeated readings, with declines indicating a decrease in blood pressure over longer sedentary time and/or reduced stress associated with the procedure (S1 Fig). In 2016, 23% of 142 participants had their blood pressure recorded in their homes, while all other measurements were collected at the central building, including all blood pressure readings in 2015 and 2017.
We used household size (number of people) as a proxy for stress, as assessed using the general health survey in 2015. We also collected data on tobacco and alcohol usage in the previous week (yes/no).
We used generalized linear mixed models (GLMM) to analyze blood pressure, with one reading per participant from 2015 and all readings from participants in 2016 and 2017 (N = 1,142). To incorporate the decrease in blood pressure over repeated readings, we included individual ID and measurement (first, second, or third) as random effects, and BMI, age, and sex as fixed effects. We then used model selection and averaging to predict both systolic (SBP) and diastolic (DBP) blood pressure as continuous variables. All analyses were run in R [28].
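The following is a minimal, hedged sketch of the kind of mixed model described above, written in Python with statsmodels rather than the R workflow the authors used. The synthetic data, variable names, and coefficients are purely illustrative; reading number is treated here as a fixed covariate for simplicity, whereas the study also modeled measurement as a random effect and applied model selection and averaging.

```python
# Sketch only: a linear mixed model for repeated blood-pressure readings, with
# participant ID as the random-intercept grouping factor. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
m = 250                                       # hypothetical number of participants
df = pd.DataFrame({
    "pid": np.repeat(np.arange(m), 3),        # participant ID, 3 readings each
    "reading": np.tile([1, 2, 3], m),         # first, second, third measurement
    "age": np.repeat(rng.integers(18, 90, m), 3),
    "bmi": np.repeat(rng.normal(21, 3, m), 3),
    "sex": np.repeat(rng.choice(["F", "M"], m), 3),
})
# Fabricated SBP values with a mild decline over repeated readings.
df["sbp"] = (90 + 0.4 * df["age"] + 1.2 * df["bmi"]
             - 1.5 * df["reading"] + rng.normal(0, 10, len(df)))

model = smf.mixedlm("sbp ~ age + bmi + sex + reading", data=df, groups=df["pid"])
print(model.fit().summary())
```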
We were also interested in investigating the relationship between hypertension and factors related to transitions toward industrialization. Using the ACC/AHA guidelines outlined in Whelton et al. [29], we defined hypertension as systolic blood pressure ≥130 mm Hg or diastolic blood pressure ≥80 mm Hg. It is worth noting that, due to their recent publication, these guidelines are not used in many previous studies (hypertension was previously defined as systolic blood pressure ≥140 mm Hg or diastolic blood pressure ≥90 mm Hg). Blood pressure categories were used to describe cohort demographics and to generate a binary classification for each individual (i.e., hypertension present or absent), which was then used in subsequent analyses. This was made difficult by having only one reading per individual in 2015, coupled with the aforementioned evidence that blood pressure declined with reading number. Thus, to create a better estimate of blood pressure in 2015, we constructed a linear model that used the first blood pressure reading to predict the third (SBP: intercept = 10.918, β = -0.876, t-value = -25.1, p < 0.001, df = 130; DBP: intercept = 8.268, β = -0.874, t-value = -14.4, p < 0.001, df = 130). We then used the "predict" function to predict the third blood pressure reading for each individual in the 2015 cohort. The complete set of third blood pressure readings (i.e. the predicted readings for 2015 and the measured readings for 2016 and 2017; N = 513) was then used to determine hypertensive state (presence or absence), with non-hypertensive individuals assigned to "normal" or "elevated" state. Because the predict method makes a conservative assumption regarding the third reading, the presence of hypertension based on predicted third readings is unlikely to be overestimated. With this method, only eight out of 203 (3.9%) individuals were re-classified from hypertensive (first reading) to non-hypertensive (predicted third reading). We re-ran models without the predicted data and obtained similar results, further supporting the validity of this statistical approach. A GLMM was then constructed to predict hypertensive state with a binomial link (i.e. a logistic regression), using individual ID as a random effect, and BMI, age, and sex as fixed effects.
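A hedged sketch of this classification workflow is given below, again in Python/statsmodels with synthetic data. It illustrates calibrating third readings from first readings, applying the ACC/AHA cut-offs, and fitting a simple logistic regression; the random effect for individual ID used in the paper's GLMM is omitted, and all numbers are illustrative rather than the study's estimates.

```python
# Sketch only: predict an unmeasured third reading, classify hypertension, then
# regress hypertensive state on age, BMI, and sex. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Participants measured three times supply the calibration relationship.
n = 130
calib = pd.DataFrame({"sbp1": rng.normal(135, 18, n)})
calib["sbp3"] = 11 + 0.88 * calib["sbp1"] + rng.normal(0, 6, n)   # illustrative link
fit = smf.ols("sbp3 ~ sbp1", data=calib).fit()

# For participants with a single reading, predict the unmeasured third reading.
once = pd.DataFrame({"sbp1": rng.normal(135, 18, 200)})
once["sbp3"] = fit.predict(once)

# Apply the ACC/AHA rule (SBP >= 130 or DBP >= 80); DBP is simulated here only
# so the rule can be applied end to end.
once["dbp3"] = rng.normal(80, 10, len(once))
once["htn"] = ((once["sbp3"] >= 130) | (once["dbp3"] >= 80)).astype(int)

# Plain logistic regression for hypertensive state (random effect omitted).
once["age"] = rng.integers(18, 90, len(once))
once["bmi"] = rng.normal(21, 3, len(once))
once["sex"] = rng.choice(["F", "M"], len(once))
logit = smf.logit("htn ~ age + bmi + sex", data=once).fit(disp=False)
print(logit.summary())
```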
Finally, we investigated lifestyle and household predictors based on data from 2015 (N = 47). We used model selection and averaging of generalized linear models to predict SBP
Results
Of the 513 participants, we identified 49.1% as hypertensive (Stages 1 and 2 combined). Of those that were not hypertensive, we found that 8.4% had elevated blood pressures, with only 42.5% having clinically normal measurements (Table 2). In statistical analyses of SBP and DBP, we found that both were positively predicted by age and BMI, but not by sex (Table 3 and S2 Fig). Similarly, hypertensive state was positively predicted by age and BMI, while effects of sex were non-significant (Table 3). A greater proportion of women (50.3%) were hypertensive compared to men (47.5%) (Table 2).
Further analyses revealed no significant effect of the lifestyle and stress-related factors that we investigated (i.e., household size and tobacco and alcohol usage), although sample sizes were smaller than the overall cohort. Full results and demographic data are provided in the Supporting Information.
Discussion
The overall prevalence of hypertension in this rural Malagasy community was nearly 50%. This is higher than what is reported in many rural communities on mainland Africa [12,13,26,30], and interestingly, similar to rates observed in urban settings in other African countries [9,31]. It should be noted that our categorization of hypertension follows the ACC/AHA guidelines recently published in Whelton et al. [29,32], while much of the existing literature, including another study in Madagascar [24,32], used the previous standards for the diagnosis of hypertension (i.e., SBP ≥140 mmHg or DBP ≥90 mmHg). Based on previous guidelines [32], we would have reported the prevalence of hypertension in Mandena to be 26%. This rate of hypertension is similar to findings from a number of studies that used the previous guidelines, including in rural Madagascar (27%) [24], Ghana (24.1% and 27% [13,33]), Nigeria (19.3%), and Kenya (21.4%) [34]. Thus, use of the new ACC/AHA guidelines in other LMICs is likely to reveal the high rates of hypertension found in our study. As the new guidelines reflect health concerns that can occur at lower blood pressures, re-evaluating hypertension in these communities can ensure more robust surveillance of associated morbidities, such as cardiovascular and chronic kidney diseases [17,18].
We found that BMI covaried positively with measures of SBP and DBP. These patterns mirror previous studies of blood pressure in rural communities [13,33,35,36]. It is likely that cultural and behavioral changes in Mandena contribute to this observed trend. For example, increased access to processed foods and soft drinks can lead to greater BMI and higher blood pressure, an association often used to explain the growing rates of hypertension in urban centers of developing countries where these dietary items are common [7]. Anecdotal observations by the authors who performed data collection suggest an increase in processed foods and soft drinks in Mandena, as well as transportation infrastructure that could promote more sedentary lifestyles (i.e., cars, motorcycles, and buses). While BMI measurements in this population are relatively low (89% of individuals were classified as underweight or normal weight), similar associations between BMI and blood pressure have been reported in other lean populations [37,38]. An evolutionary perspective can help to rigorously evaluate factors that may be directly affected by transitions toward industrialization (i.e., where the opportunity for mismatch is high), including diet, physical activity level, and stress. While there is a positive association between BMI and hypertension, it is interesting that only 1.2% of individuals are classified as obese. It is possible that additional metrics related to body mass and metabolism, such as waist circumference, would better elucidate patterns in this population [39]. Because the majority of hypertensive individuals in our study are not overweight or obese (Fig 3), future research should explore the disconnect between expected (overweight) and observed (lean) phenotype in those presenting with hypertension.
Older age significantly predicted hypertensive state, a finding that is in line with many other studies [9,12,25,30,36]. This is relevant to global public health initiatives more generally, as programs that successfully alleviate mortality (i.e., via improved nutrition, vaccination, or access to healthcare) may in turn be challenged by an increasing incidence of hypertension due to the greater number of individuals that reach older age [24]. Our data indicate clinically elevated blood pressures in the younger demographic, indicating that it is not solely the elderly who are at risk for elevated blood pressure (S2 Fig) [24,40]. Understanding behavior and health status among the younger demographic may help elucidate drivers of the progression to hypertension.
Contrary to our predictions, sex did not have a significant effect on hypertension, even within specific age groups (data not shown). While previous studies have found higher blood pressure in men than women [25], studies in rural Madagascar and Ghana also failed to document a significant association between sex and hypertension [24,33,41]. It may be that cultural and livelihood practices contribute to the lack of significant sex differences. For example, both men and women work in the agricultural fields in Mandena, which may yield similar levels of cardiovascular health in both sexes.
To our knowledge, this is only the second study to examine blood pressure in rural Madagascar. A previous study also found a positive relationship between age and hypertension, and raised concerns over the risk of a hypertension epidemic in Madagascar's progressively aging society [24]. Strengths of our study include its relatively large sample size, with data collection spanning three years, and the inclusion of additional lifestyle measures. Possible limitations include biased sampling, as unemployed and/or unhealthy individuals may be more likely to visit our health clinic during the day compared to their employed, healthy counterparts. Similarly, all data were collected during the austral winter, which may reflect a seasonal bias in activity patterns or stress levels. We also note that stages of hypertension were determined using Western standards, which may not capture important biological factors of this genetically unique, non-Western population [25,42]. Future work in this population should consider genetic factors that can influence high blood pressure, particularly as they relate to individuals of African and Austronesian ancestry (i.e. the ancestral populations of the modern day Malagasy) [43]. It would also be interesting to assess the influence of comorbidities, such as communicable diseases, on blood pressure, including the intriguing potential tradeoff between elevated blood pressure and protection against malaria [44].
In conclusion, we discovered a remarkably high prevalence of hypertension in an agricultural community in rural Madagascar. Our research raises a number of questions to address in future studies, including research aimed at understanding the drivers of hypertension throughout the life course. Future studies in this region could incorporate data on dietary sodium intake and family history of hypertension, along with metrics of stress, including stress associated with vanilla production. More generally, our study contributes to the growing body of literature that calls for making the study of hypertension in rural communities a priority in global health research [8,10,30,35,45].
Fig 3.
Fig 3. Blood pressure and BMI. The solid lines separate hypertensive and non-hypertensive individuals, while the dashed lines demarcate overweight and obese individuals from those who are normal or underweight. The top left quadrant includes individuals who are under/normal weight and hypertensive, and the bottom right quadrant includes those who are overweight/obese and non-hypertensive. https://doi.org/10.1371/journal.pone.0201616.g003 | 2018-08-18T21:15:57.469Z | 2018-08-16T00:00:00.000 | {
"year": 2018,
"sha1": "5290fba548ff65b6dc89b253f51bd99c15300254",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0201616&type=printable",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4dcaec8bfb0e85e5d3f3ce80f96cea42e8edbbe7",
"s2fieldsofstudy": [
"Medicine",
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1771297 | pes2o/s2orc | v3-fos-license | Alternative medicine and anesthesia: Implications and considerations in daily practice
Nowadays, herbal medicines are widely used by most people, including the pre-surgical population. These medicines may pose numerous challenges during perioperative care. The objective of the current literature review is to examine the impact of the use of herbal medicines during the perioperative period, and to review the strategies for managing their perioperative use. The data were gathered from various journal articles, textbooks, and web sources, including Entrez PubMed, Medscape, WebMD, and so on. Only those herbal medicines for which information on safety, usage, and precautions during the perioperative period was available were selected. Thereafter, the information about safety, pharmacokinetics, and pharmacodynamics from the selected literature was gathered and analyzed. The review focuses on the fact that these commonly used alternative medicines can sometimes pose a concern during the perioperative period, in various ways. These complications may be due to their direct actions, pharmacodynamic effects, or pharmacokinetic effects. In view of the serious impact of herbal medicine usage on perioperative care, the anesthesiologist should take a detailed history, with special emphasis on the use of herbal medicines, during the preoperative anesthetic assessment. The anesthesiologist should also be aware of the potential perioperative effects of these drugs. Accordingly, steps should be taken to prevent, recognize, and treat the complications that may arise from their use or discontinuation.
Introduction
Since ancient times, the practice of Ayurveda has reigned supreme in Indian society. In the modern era, advancements in allopathy have further widened the therapeutic scope in various diseases. Nowadays, an increasing number of patients are presenting to the hospital for various surgical procedures. A majority of these patients have used Ayurvedic and herbal medicines for their current surgical problem or have taken similar medicines for other comorbid diseases. The clinical scenario becomes challenging when these patients hide their current treatment regimens, especially their treatment with herbal medicines. The common belief among the general public is that these therapeutic agents are entirely harmless.
functions due to the practice of polypharmacy by this surgical subset of patients, and the associated co-morbidities that can greatly increase the mortality and morbidity. [5] The clinical consequences and complications can affect any organ system and may lead to myocardial infarction, stroke, bleeding, higher anesthetic consumption, delayed recovery from anesthesia, respiratory complications, renal disturbances, nullification of the therapeutic effect of other medications, and even transplant rejection. Drug interactions of these medicines in the surgical population are of utmost importance and warrant complete knowledge, on the part of the anesthesiologist, of the pharmacokinetic and pharmacodynamic properties of these medicines. [6] However, the difficulties become manifold when these patients do not reveal the use of any such medication, either deliberately or out of ignorance, during the preanesthetic evaluation. As such, appropriate plans to deal with such possible hazards cannot be formulated beforehand, which may lead to devastating clinical consequences later on. [1] The lack of information further makes it virtually impossible for physicians to assess the possible concentration of the drug, its active ingredients, the dose requirement during surgical procedures, the metabolism and metabolite formation of these agents, as well as their possible excretory pathway from the body.
Preoperative Evaluation
During preoperative evaluation, complete knowledge about the current medication helps in formulating a well-planned anesthetic and surgical procedure, to counter possible adverse outcomes resulting from these herbal products. An altered coagulation profile can potentially be caused by garlic, ginseng, and ginkgo, and can result in greater than anticipated blood loss during a surgical procedure. [7] The effect of general anesthesia can be potentiated by kava and valerian, which are known to possess sedative properties. [8] The hepatic microsomal enzyme cytochrome P450 3A4 is stimulated by St. John's wort and has been observed to alter the metabolism of immunosuppressants and oncological chemotherapy. [9] The greatest concern in our country at present is the overuse of Chinese herbal medicines that have flooded the drug market. These products are not standardized and are a mixture of various herbs, which can prove to be extremely detrimental to surgical patients. [10] The consumption of these medications is not limited to adults alone. Even the pediatric population is administered herbal, homeopathic, and Ayurvedic medicines for various ailments. [11] The most common among these medications include Echinacea, Aloe, sweet oil, Arnica, and so on. The consequences of interactions between these medications and allopathic medicines can be devastating, comparable to similar interactions in the adult population.
It has been observed in various studies that a majority of anesthesiologists are not aware of the mechanisms of action and side effects of these Ayurvedic and herbal medicines. [12] Data from other studies have shown that complementary and alternative medicines were used by 57.4% of the population. Among these, the most commonly used were herbal medicines (6.8%), megavitamins (6.8%), homeopathic medicines (1.4%), and folk remedies (1.2%). [13] Even the pregnant population has been a regular user of herbal medicines, as 7.1% of parturients in their mid-pregnancy stage have reported use of herbal products, and only 14.6% of these parturients were actually aware of the intended pharmacological use of these drugs. [14] The drug interactions can be of immense significance in these parturients, especially those who prefer painless labor. The increased bleeding tendencies in these patients make them highly vulnerable to the potential complications of neuraxial anesthesia, along with possible fluctuations in maternal hemodynamics. [15,16] The maternal ingestion of ginger for a prolonged duration can possibly cause inhibition of fetal binding of testosterone, as ginger is a potent thromboxane synthetase inhibitor. [17]
Application of Guidelines and Protocols
Although guidelines and protocols have been established by the American Society of Anesthesiologists (ASA) with regard to possible drug interactions in patients on herbal and alternative medicines, these are either not properly followed or are sometimes difficult to adhere to, such as in emergency surgical situations. Their combination with conventionally prescribed medicine can prove to be fatal, as has been observed in various studies. [18] The use of multiple medications as a part of balanced anesthesia over a short duration of time can have multiple interactions with these herbal and Ayurvedic drugs. Genuine efforts must be made by the attending anesthesiologist to elicit any history of herbal medication, possibly by means of a survey questionnaire; similar attempts have been made earlier as well. [8] Augmentation of bleeding tendencies with the use of ginkgo and garlic, exaggeration of hypertension with ginseng, and excessive sedation with St. John's wort are a few of the common side effects encountered daily in the routine practice of anesthesia. [8,19] Garlic is also known for increasing bleeding tendencies due to inhibition of platelet aggregation, which mandates extra caution during administration of epidural anesthesia, as there are increased chances of the development of an epidural hematoma. [20,21] Ginkgo biloba is commonly used as it is considered to possess memory-improving qualities. It has also been reported to possess anti-inflammatory properties and to inhibit platelet activity. Thus, its use is also fraught with an increased danger of perioperative bleeding. [22,23] Therefore, it is mandatory to stop these drugs in patients taking non-steroidal anti-inflammatory drugs before any proposed surgical procedure. [24] The mechanisms by which St. John's wort and valerian augment the anesthetic effect include modulation of the gamma-aminobutyric acid (GABA) neurotransmitter system. The blood sugar-lowering property of ginseng is utilized in the treatment of patients with type-II diabetes mellitus. [25] The risk of wound infection is possibly increased with the use of Echinacea, as it has immunosuppressant properties. Although homeopathic Arnica is believed to control bruising and promote healing after local tissue trauma, various studies have come out with contrasting observations. [26] As per the new guidelines, the ASA has recommended discontinuation of all herbal medicines two weeks prior to surgical intervention. [27] However, these recommendations cannot be applied uniformly to all types of herbal medicines, as their half-lives differ widely, some being very short while others are fairly prolonged, along with other differing pharmacokinetic attributes. Depending on the generation of active metabolites, it has been suggested that it is sometimes far better to apply individual discretion when stopping these medicines rather than going by fixed recommendations. A fixed rule can unduly prolong the waiting period for surgery for drugs with short half-lives, or can still pose challenges with those drugs whose half-lives exceed two weeks. For example, kava and ephedra have to be stopped 24 hours prior, ginkgo 36 hours prior, and St. John's wort more than a week prior to surgery. [8] Moreover, patients present to hospitals only a few days before the recommended surgical procedure, and as such, it becomes difficult to implement the required set of protocols.
In India, marketing companies have been conducting massive campaigns popularizing traditional Chinese herbal medicines and other herbal products, with emphasis on promoting the health benefits of these medicines. [28] Claims have also been made about treating almost every type of illness, including serious illnesses, with these herbal products, as compared to western medicines. [29] Various observational studies have raised concerns from time to time with regard to potential perioperative complications due to possible drug interactions. [8,19] Among the consequences of these possible drug interactions, the most important are impaired coagulation, electrolyte disturbances, cardiovascular effects, and prolongation of anesthesia duration, which are of high concern to the operating surgeon and the attending anesthesiologist. [30] These side effects and interactions are the result of various possible mechanisms during the perioperative period, ranging from direct effects, such as intrinsic pharmacological actions, to pharmacodynamic interactions that alter the effect of conventional drugs at the effector site, and pharmacokinetic interactions that alter the absorption, distribution, metabolism, and elimination of conventional drugs.
Common Drug Profiles from an Anesthesia Perspective
Various studies and trials concerning the interactions of Ayurvedic and herbal drugs with allopathic medicines have been conducted in the past, and many are ongoing at present. Based on the evidence from these activities, many new properties of these drugs have come to the fore that can be of significant concern to the anesthesiologist.
Echinacea purpurea
At the molecular level, Echinacea has a lipophilic fraction, which is more active than the hydrophilic fraction. The higher activity of the lipophilic fraction is attributed to the presence of alkylamides, polyacetylenes, and essential oils. It is considered a very useful prophylactic and therapeutic agent in the treatment of viral, bacterial, and fungal infections of the upper respiratory tract. The immunostimulatory properties of Echinacea have also been studied. However, evidence is lacking about possible interactions from the concomitant use of Echinacea and immunosuppressive drugs. [31-33] Therefore, a general consensus exists about the avoidance of Echinacea in patients who are being administered immunosuppressive drugs, especially patients undergoing organ transplant procedures. In contrast, long-term use of Echinacea, for greater than eight weeks, causes immunosuppression, which increases the risks of poor wound healing and opportunistic infections. Moreover, it has also been incriminated in causing allergic and anaphylactic reactions. [34] Therefore, cautious use is warranted in patients with asthma and atopic or allergic rhinitis. Furthermore, there are also concerns about its potential hepatotoxicity, but nothing conclusive has been established as yet. Still, this product should be used carefully in patients with pre-existing hepatic dysfunction, as well as in surgeries where hepatic function and hepatic blood flow are more likely to be compromised.
Ephedra vulgaris
It is considered highly useful for promoting weight loss, increasing energy, and treating many respiratory infections. The therapeutic actions of this product are mainly due to its various active alkaloid ingredients like ephedrine, pseudoephedrine, norepinephrine, and methylephedrine. Among these, ephedrine, a non-catecholamine sympathomimetic agent, is the most active compound; it acts on α1, β1, and β2 adrenergic receptors and exerts its actions by release of endogenous norepinephrine. In larger doses, these sympathomimetic effects can prove to be hazardous for the cardiovascular and central nervous systems, as they cause intense cerebral and coronary vasoconstriction. As such, anesthesia concerns grow exponentially, as these agents can greatly sensitize the myocardium and create arrhythmogenic potential on administration of exogenous catecholamines. Furthermore, prolonged use of this product can lead to a catecholamine-depleted state, and thus predispose patients to a high risk of hemodynamic instability during anesthesia and surgery. Simultaneous use of monoamine oxidase inhibitors (MAOIs) can cause intense drug interactions, which can lead to life-threatening complications like malignant hyperthermia, hypotension, and coma. There have been reports of the formation of renal stones with its prolonged use. Considering the evidence-based pharmacological profile, ephedra must be discontinued at least 24 hours prior to the surgical procedure. [35]
Garlic -Allium sativum
Garlic is considered to be a natural antibiotic. It has also been observed to exert anti-tussive, expectorant, and diuretic activities and is a cholesterol-lowering agent. The lipid- and cholesterol-lowering effects significantly reduce the risk of atherosclerosis and subsequently lower the blood pressure and the incidence of thrombus formation. [36] These effects are largely attributed to an active metabolite, allicin, which contains sulfur and gives garlic its characteristic smell. A decrease in pulmonary and systemic resistance has been observed with allicin in laboratory animals. Platelet aggregation is inhibited in a dose-dependent manner. Another active compound of garlic, ajoene, is responsible for the irreversible inhibition of platelets by potentiating the effects of platelet inhibitors. [37,38] It is believed to possess immunomodulatory and anticancer properties. [39] It is associated with a few side effects such as bad breath, bad odor from the skin, gastrointestinal upsets, and skin rashes. On the basis of insufficient data about the pharmacokinetics of its constituents and its effect on platelet function, it should be stopped at least seven days prior to surgery, especially when there is a possibility of epidural hematoma formation and postoperative bleeding.
Ginkgo biloba
It is derived from the leaves of Ginkgo biloba and exerts its pharmacological action through terpenoids and flavonoids. It is commonly used in the treatment of cognitive disorders and memory-related dysfunction, as it is believed to stabilize cognitive functions, especially in cases of Alzheimer disease with multi-infarct dementia. Other possible uses of this product include peripheral vascular disease, vertigo, age-related macular degeneration, and so on. The main mechanism of its action seems to be its effect on vasoregulation, modulation of neurotransmitter and receptor activity, as well as inhibition of the platelet-activating factor. Contrary to the claims, there have been instances when bleeding complications have been observed, especially spontaneous intracranial bleeding, hyphema, and postoperative bleeding in laparoscopic cholecystectomy, in patients who are Ginkgo users. [40][41][42][43][44][45] The data and the pharmacological profile of the drug mandate its discontinuation 36 hours prior to surgery, to decrease the risk of bleeding.
Ginseng -Panax ginseng
It is considered to be a stress-relieving product, besides restoring homeostasis, and is popularly called an adaptogen. [46] The pharmacological action of this product is mainly due to ginsenosides, which belong to a group of compounds called steroidal saponins. As such, the mechanism of action very much resembles that of steroidal hormones. The blood glucose lowering effect of ginseng makes it a useful agent in non-diabetic individuals and in type II diabetes for reduction of postprandial blood sugar levels. Thus, precaution should be exercised in fasting patients who are posted for surgery and currently taking ginseng, as there exists a possible risk of severe hypoglycemia. It can influence the coagulation pathway as well as inhibit platelet aggregation. [47,48] These effects are evident in the prolongation of both the thrombin time and the activated partial thromboplastin time. The evidence-based literature recommends stopping it at least seven days prior to surgery.
Kava -Piper methysticum
The active ingredients of kava are the kavalactones, prepared from the dried roots of Piper methysticum, which possess anxiolytic and sedative properties. The effects of this product on the central nervous system are dose-dependent; they are helpful in the suppression of an epileptogenic focus and provide local anesthetic effects. These central effects definitely potentiate the action of anesthetic agents through GABA-mediated inhibitory neurotransmission, thus mimicking sedatives and hypnotics. Kava dermopathy can be a niggling side effect of the product on prolonged use. The pharmacological properties of this drug dictate its discontinuation at least 24 hours prior to surgery. [49,50]
St. John's wort -Hypericum perforatum
It is considered a useful medicine for the treatment of mild-to-moderate depression, but only as a short-term measure. However, its utility is doubtful for treating major depression and other mental disorders. [51] The pharmacological actions are mainly mediated by two of its active compounds, hypericin and hyperforin, which act by inhibiting the re-uptake of serotonin, norepinephrine, and dopamine by the neurons. One of the side effects of this medicine is the central serotonin excess syndrome, which can occur with or without the concomitant use of serotonin re-uptake inhibitors. It is a potent microsomal enzyme inducer and causes induction of the cytochrome isoforms P450 3A4 and P450 2C9. Drugs utilizing the P450 3A4 enzyme substrate include alfentanil, midazolam, lignocaine, calcium channel blockers, and serotonin receptor antagonists, and as such their clinical effects are reduced. The anticoagulant effect of warfarin and the actions of NSAIDs are reduced due to induction of P450 2C9. This evidence strongly recommends the discontinuation of the drug at least five days prior to surgery. [51,52]
Valerian -Valeriana officinalis
The most common use of valerian is in the treatment of insomnia. The sedative properties of valerian are due to its active compounds, the sesquiterpenes. These sedative and hypnotic effects are dose-dependent and are mediated through modulation of GABA-mediated neurotransmission and receptor function. As the same receptor sites are targeted by centrally acting anesthetic agents, there exists a high risk of potentiation of anesthetic and adjuvant drugs that act at GABA receptors. These mechanisms can also produce physical dependence on prolonged use. Abrupt discontinuation before a surgical procedure can produce benzodiazepine-withdrawal-like symptoms. [8,53]
Ginger -Zingiber officinale
The gingerols (such as 6-gingerol) and galanolactone are the active compounds of ginger, which provide therapeutic efficacy in respiratory problems, CNS symptoms, hypoglycemia, sore throat, and so on. [54] It is also considered to be extremely useful in parturients who are suffering from hyperemesis. [55] The anti-emetic effect can be utilized to prevent oncotherapy-related nausea and vomiting. [56] The side effect profile of this product ranges from hyperglycemia to prolongation of bleeding time by thromboxane inhibition. [57] The main concern when using this compound is that caution must be exercised during administration of neuraxial anesthesia, as there is a potential risk of epidural hematoma formation.
Guduchi -Tinospora cordifolia
The active compounds of Guduchi include clerodane furanoditerpenes, syrengin, and cardiol, which exert cardioprotective, expectorant, anti-allergic, analgesic, and anti-inflammatory effects. [58] It is prescribed for the treatment of DM, as it has been seen to lower the blood glucose level. A few studies have also reported its renoprotective, cholesterol-lowering, and anti-oxidant actions. [58][59][60][61] Beneficial effects in hepatic diseases such as cirrhosis and hepatitis have also been demonstrated in a few earlier studies; [62] however, the recommendations for discontinuation of Guduchi are not clear as per the ASA guidelines.
Turmeric -Curcuma longa
It is the most commonly used herbal product in almost every household of the Indian subcontinent, as a dye and food component. The beneficial effects of turmeric are believed to be due to its anti-infective, analgesic, anti-inflammatory, and anti-oxidant actions. [63] Its ability to inhibit microsomal P450 enzymes prolongs the duration of action of many drugs such as fentanyl, midazolam, warfarin, theophylline, bupivacaine, ropivacaine, and lignocaine. Its neurocognitive effects are utilized for the treatment of depression, and protection against the active metabolites of paracetamol in toxic doses has also been observed by a few researchers. [64] The discontinuation of this product before surgery has not been mentioned in the ASA guidelines. [65] A few studies have observed its anti-neoplastic activity, which can possibly help in protection from various cancers such as those of the breast, prostate, colon, and pancreas, and leukemia. [66] Hepatic protection from the active metabolites of paracetamol in toxic doses has also been observed by a few researchers. [67]
Conclusion
These are only a few of the products belonging to the Ayurvedic, herbal, and homeopathic sciences that can be of huge significance during the administration of anesthesia and surgery. The list is quite long, but it is beyond the purview of the present article to discuss all of these products. As such, a leaf can be taken out of the present article when considering the clinical significance of other drugs during anesthesia and surgical procedures, and almost similar precautions can be exercised while undertaking such interventions. The main message of this review is to be extra cautious whenever patients consuming these herbal and Ayurvedic products present for surgical intervention, whether emergency or elective. The assumption that alternative medicines are very safe may not always hold true, and the risk arises when such patients present for surgery, where interaction of the alternative medicines with anesthetics is dreaded. In this article an attempt has been made to mention all the properties that can interact with anesthetic drugs and procedures. Yet there is a wide scope to study actions from the allopathic sciences retrospectively in an Ayurvedic manner. Further studies can be carried out to expand the specialty of anesthesia in Ayurveda and enrich the knowledge base. The measures discussed can be of immense clinical benefit if they are applied in a judicious and timely manner, thus providing a smooth and safe surgical atmosphere. | 2018-04-03T03:13:45.623Z | 2012-10-01T00:00:00.000 | {
"year": 2012,
"sha1": "7a58a6316dc36dd104c3210cc53b9625eef32a76",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc3665191",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "42417bb4f1988bd004fe362ea6f5cf8ab0c921e6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265163853 | pes2o/s2orc | v3-fos-license | Industrial optimization using three-factor Cobb-Douglas production function of non-linear programming with application
This paper is about the effectiveness of the Cobb-Douglas (C-D) production function in industrial optimization, estimating the number of factors used in the production process of the water industry, for instance, capital and human labor. Moreover, we have modeled a nonlinear optimization problem for a local water industry using two and three factors of production. For this purpose, we have taken into account the Cobb-Douglas production function with different production factors using the Lagrange multiplier method with the ordinary least squares method. In the course of the solution, a linear function is used to calculate the cost function, and the C-D production function is used to calculate the production function. The Lagrange multiplier method with the ordinary least squares method is then used to solve the constrained optimization problem for the product of production. Furthermore, we compared the outcomes from both examples of two- and three-factor C-D production functions in order to validate the Lagrange multiplier method for the C-D production function. Moreover, the three-factor C-D production function is solved by the Lagrange multiplier method with the ordinary least squares method, which provides optimal results as compared to previous studies in literature. The validity of the proposed methodology is explained by using the products of a local production industry in Pakistan.
Introduction
Optimization is a useful technique for determining the optimal solution to a problem. In other words, optimization is the problem of choosing suitable inputs under given circumstances in order to get the best possible output. For instance, optimization can be used in production models to adjust different inputs and make them more effective in order to get the best output for the production of a particular industry [1]. In this scenario, once we have modeled a problem, it can be solved using the available optimization techniques to find the optimal solution [2].
An optimization problem usually consists of three ingredients: an objective function, a set of constraints and a number of decision variables. There are two types of optimization problems: constrained optimization problems and unconstrained optimization problems. Constrained optimization problems have restriction(s) on the objective function, while unconstrained optimization problems have no restriction on the objective function [3,4].
If at least one of the objective functions or the constraint function is nonlinear, then the problem is known as a nonlinear optimization problem. There are many techniques that have been used for the maximization or minimization of nonlinear optimization problems. Sometimes, a problem cannot be modeled correctly using linear programming; therefore, one can use nonlinear programming approaches [5,6] to model the problem. For the constrained optimization problem under consideration in this paper, the objective function, known as the cost function, is linear while the constraint function, known as the Cobb-Douglas (C-D) production function, is nonlinear. In the course of solution, the constrained optimization problem is converted into an unconstrained optimization problem and then solved by the Lagrange multiplier method using the ordinary least squares approach.
There have been many problems in literature in which the C-D production function has been used. A mixed-integer linear programming (MILP) model was established for the optimization of production scheduling [7]. A two-factor C-D production function is carried out with the aim of picking the suitable C-D production model for calculating the production process of the selected manufacturing industries in Bangladesh [8]. By combining the allometric scaling concept, which is used to estimate the parameters of the C-D function, with the application in transportation problems in China, a novel algorithm for creating geographical C-D models is developed [9].
For the improvement of different factors in the Polish metallurgical industry, the power regression C-D function was used with the aim of developing a number of production factors, for instance, net production, production sold and volume of steel production [10]. The proper management of a country's resources is an important issue for its economic development. Along similar lines, the optimization of water management for three industries that rely on water demand prediction, subject to a number of ambiguities, is handled by the use of the C-D production function [11]. A two-factor production function was used to model sustainable economic development with the goal of labor production in relation to the commodity production system's capital-labor ratio [12]. An economic model has been presented by the application of proper Inada conditions to the C-D production function, which converges to or diverges from per capita product and the steady state of capital [13]. Moreover, a two-factor C-D production function was presented in which the effects of labor force and capital on agricultural heritage systems are carried out to maximize profit as well as the sustainability of agricultural heritage, examining the impacts of major factors on agricultural productivity [14]. An algorithmic or analytical procedure was conducted to handle the issue of optimal utilization of resources towards a feasible and profitable model via the C-D production function [15]. An application of the C-D production function model was used to find the role of land in urban economic growth [16]. A general oligopolistic market equilibrium with nonlinear programming was considered, in which each firm's factor contributes to the system, and then solved by tensor variational inequality [17,18].
It becomes necessary in a real-world optimization problem to adopt a set of nonlinear terms in a mathematical model in order to get particular operational features of the decision-making problem. On the other hand, when nonlinear terms exist in the course of a solution, they add to the computational complexity of the problem. For this purpose, the researchers have developed proper transformation as well as linearization methods for the optimization problems that consist of nonlinear terms.
In this paper, a special type of production function, which was founded by Cobb and Douglas in 1928 and is known as the C-D production function, is under consideration. This function is based on empirical studies that have been applied to the economy for optimal production [19]. This function supplies a number of different inputs to the problem and, as a result, produces a unique output for the problem. These inputs may be two or more than two in number, depending on the factors of production used in the industry. Also, the function having more than two factors is known as the generalized C-D production function [20,21]. There have been many results in literature in which the two-factor C-D function was presented for production. This work presents the optimal solution technique of the C-D function for three factors of production using the Lagrange multiplier method and the ordinary least squares method, with applications. Moreover, Nerlove's C-D production function with three inputs using the ordinary least squares method has been presented in [22]. Furthermore, this work also developed an environment to transform and linearize an optimization problem with nonlinear objectives or nonlinear constraints by using existing techniques [23].
The proposed work is summarized as follows: (1) To develop a model in order to transform as well as linearize the nonlinear terms.
(2) To choose a suitable model [1] as an optimization problem and to solve it using a novel methodology known as the C-D production function using the Lagrange multiplier method with the ordinary least squares method. (3) To find whether the best possible solution to the problem is obtained or not [2]. (4) To apply the C-D production function using the Lagrange multiplier method with the ordinary least squares method for the solution of the two-factor C-D production function as well as for the three-factor C-D production function. (5) To apply the case study to industrial optimization in order to minimize costs more efficiently.
First, we solved the two-factor C-D production function [24] using the Lagrange multiplier method with the ordinary least squares method. Second, we used three factors of the C-D production function with the same technique. The obtained results show that cost minimization in cases of three factors of production is more efficient as compared to cost minimization in cases of two factors of production.
Cost minimization
The problem under consideration depends on three basic factors: • Cost of the company for manufacturing production.
• Quantity of production.
• Income from the sales of production according to market prices.
For instance, the role of the water industry is to filter water in different purifying tanks, then add chemicals, pack the product properly, and then sell it in the market. In this task, the main objective of the industry is to minimize costs or maximize production. There have been many approaches in literature in which the total cost of an industry is presented as a linear function [24,25]:
C(X) = α0 + α1 x1 + α2 x2 + ⋯ + αn xn,
where n represents the number of production factors, X = (x1, x2, …, xn) is the vector of production factors, α = (α1, α2, …, αn) is the vector of prices of the production factors, and α0 is the fixed cost.
The following relationship presents the production volume [26]:
P = A x1^a1 x2^a2 ⋯ xn^an. (2.2)
The income of the company from production sales is given by D = p1 q1 + p2 q2 + ⋯ + pm qm, where Q = (q1, …, qm) is the vector of the quantities of produced goods and (p1, …, pm) is the vector of prices of the produced goods. The profit of the company is the difference between its income and its cost of production.
The main focus of this paper is to minimize the cost at a specific level of production and, consequently, to maximize production with more factors of production.
The optimization problem originating from our data analysis consists of a linear function and a nonlinear function. We have used the linear function as a cost function, while the nonlinear C-D production function is used as production output. Let us consider a linear cost function as an objective function and the nonlinear C-D production function as a constraint function. Then, this becomes a constrained optimization problem with an equality constraint. For the sake of simplicity, we convert this constrained optimization problem to an unconstrained optimization problem by using the Lagrange multiplier method, and then we solve this problem for stationary points. Let us start with a function z of two independent variables that is subject to an equality constraint function g. The objective and constraint functions are given as follows:
min z(x, y) subject to g(x, y) = 0.
Using the Lagrange multiplier, we have
L(x, y, λ) = z(x, y) + λ g(x, y). (2.5)
In order to solve the problem, we must first determine the values of x, y, and λ. Note that we consider the cost function to be the objective function and the C-D production function to be the constraint function, and solve the problem for different factors of production. Furthermore, we extend this work as an application to optimize production factors in a specific industry. The production function for three inputs is as follows:
P = A X^a1 Y^a2 Z^a3. (2.6)
After linearizing, we get
ln P = ln A + a1 ln X + a2 ln Y + a3 ln Z. (2.7)
After this, we will use least squares linear regression to find out the structural parameters, which are A, a1, a2, a3. The corresponding cost function is given by
C(X, Y, Z) = α1 + α2 X + α3 Y + α4 Z.
Along similar lines as Eq (2.5), we can get the corresponding Lagrange function for the three-factor C-D production function.
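To make the solution procedure concrete, the following sketch (not part of the original study) sets up the three-factor Lagrangian symbolically with SymPy and derives the optimal factor proportions from the first-order conditions; the symbols c1-c4, A, a1-a3 and P0 are generic placeholders rather than the values estimated later for the water industry.

```python
# Illustrative sketch only: three-factor cost minimization via the Lagrangian
# of Eq (2.5). All symbols are placeholders, not the estimates from the paper.
import sympy as sp

X, Y, Z, lam = sp.symbols('X Y Z lam', positive=True)
c1, c2, c3, c4 = sp.symbols('c1 c2 c3 c4', positive=True)   # fixed cost and unit prices
A, a1, a2, a3, P0 = sp.symbols('A a1 a2 a3 P0', positive=True)

cost = c1 + c2*X + c3*Y + c4*Z              # linear cost function
production = A * X**a1 * Y**a2 * Z**a3      # three-factor C-D production function
L = cost + lam*(P0 - production)            # Lagrangian

# First-order (stationarity) conditions dL/dX = dL/dY = dL/dZ = dL/dlam = 0
focs = [sp.diff(L, v) for v in (X, Y, Z, lam)]

# Dividing the X- and Y-conditions gives c2/c3 = (a1/a2)*(Y/X); solving for Y
# (and analogously for Z) yields the optimal factor proportions.
Y_of_X = sp.solve(sp.Eq(c2/c3, (a1*Y)/(a2*X)), Y)[0]
Z_of_X = sp.solve(sp.Eq(c2/c4, (a1*Z)/(a3*X)), Z)[0]
print(Y_of_X)   # a2*c2*X/(a1*c3)
print(Z_of_X)   # a3*c2*X/(a1*c4)
```

Substituting these proportions back into the production constraint then determines X, and with it the cost-minimizing input bundle.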
In other words, the overall methodology can also be summarized as follows: (1) First, model an objective and constraint function using two inputs for the given industry.
(2) Transform a constrained optimization problem into an unconstrained optimization problem.
(3) Solve the unconstrained problem using the Lagrange multiplier method with ordinary least squares. (4) Repeat the above procedure for three factors of production for the given industry.
(5) Compare the outcomes in both cases.
Production function and its structural parameters
Given input prices, a cost function shows how much it costs to produce various output levels. In the course of solving such problems, one or both productions and factors of production may be stated by using their values. It is good practice to present the products of an industry in proper units that have a number of production components. In a similar way, human labor can be calculated when needed. When the availability of aggregated data is smaller, it can be measured with headcount or work time. When a higher level of aggregation is available, then the value of human labor seems more suitable. The most challenging task is the quantitative description of the capital used in the industry. In the majority of analyses, it becomes challenging due to the use of a number of factors of production. The greater a company's asset base, the less it is associated with high productivity [27]. In the following, we analyzed a local water industry with a two-factor C-D production function and a three-factor C-D production function using the Lagrange multiplier method with the ordinary least squares method and compared their results.
Cost minimization in the water industry using the two-factor C-D production function
For this problem, the data is taken from a local water industry, the Abysin Water Industry, which is one of the local registered branches of Chemtronics Water Services in Lahore, Pakistan. First, the problem is solved for two factors, and then it is generalized. In this case, the cost function consists of two factors (labor and capital) of production and is given by
C(X, Y) = α1 + α2 X + α3 Y, (3.1)
where C(X, Y) represents the cost of the industry in rupees, α1 represents the fixed cost of the industry, α2 represents the unit price of labor per hour, α3 represents the unit price of capital per kg, X represents the number of labor hours, and Y represents the amount of capital in kg.
All the prices in the industry are in Pakistani currency (the rupee). The raw materials used as capital are taken in kilograms, i.e., the unit for capital is kg. Putting the estimated values of these parameters in Eq (3.1) gives the numerical cost function of the industry. For production, we use the C-D production function, which is given by
P = A X^a1 Y^a2, (3.3)
where P represents the amount of production in liters.
Linearizing Eq (3.3), we have ln P = ln A + a1 ln X + a2 ln Y. The analyzed data for the water industry with two inputs, i.e., human labor and capital, and production as an output for the year 2022 is given in Table 1.
We have used ordinary least squares regression in order to find the structural parameters of the given C-D production function. The data analysis has been done using Microsoft Excel, which is given in Regressions 1 and 2 for two and three inputs, respectively.
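The regression itself is straightforward to reproduce; the sketch below (illustrative only, with made-up monthly figures rather than the Table 1 data) shows an ordinary least squares fit of the log-linearized two-factor model in Python.

```python
# Illustrative sketch (the authors used Microsoft Excel): estimate the
# structural parameters A, a1, a2 of P = A * X**a1 * Y**a2 by ordinary
# least squares on the log-linearized model ln P = ln A + a1 ln X + a2 ln Y.
import numpy as np

# Hypothetical monthly data (labor hours X, capital in kg Y, output P in liters);
# the real figures are in Table 1 of the paper.
X = np.array([410, 395, 430, 420, 405, 440, 450, 415, 400, 425, 435, 445], float)
Y = np.array([980, 950, 1010, 990, 960, 1030, 1040, 985, 955, 1000, 1020, 1035], float)
P = np.array([33000, 31800, 34500, 33800, 32500, 35200, 35800, 33300, 32000, 34100, 34800, 35500], float)

# Design matrix for ln P = ln A + a1 ln X + a2 ln Y
design = np.column_stack([np.ones_like(X), np.log(X), np.log(Y)])
coef, *_ = np.linalg.lstsq(design, np.log(P), rcond=None)

lnA, a1, a2 = coef
print("A =", np.exp(lnA), "a1 =", a1, "a2 =", a2)
```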
Using Eq (3.2) and H(x, y) in Eq (3.5), we obtain the estimated cost of the industry. Figure 1 represents the actual and theoretical costs of the industry. For example, in January 2022, we can see that the actual cost was greater than the estimated cost, but from the data analysis, our calculated cost is less than both the actual and estimated cost. Similarly, if we compare the cost from the table with our calculations, we can see the difference. It means that there is a sufficient difference in both costs, meaning that costs are minimized to a great extent. Now, the production function for three inputs, as in [26], is given by
P = A X^a1 Y^a2 Z^a3, (3.16)
where P represents the amount of production in liters.
After linearizing, we get ln P = ln A + a1 ln X + a2 ln Y + a3 ln Z. (3.17) A data analysis of the water industry consisting of three inputs for the year 2022 is given in Table 3.
The three inputs are human labor, capital, and chemicals, respectively. This data is collected from the water industry for the year 2022, and all results are calculated on a monthly basis.
From these analyses, we obtain the structural parameters of the three-factor model. Putting these values in Eq (3.16) and applying the Lagrange multiplier method, we obtain the estimated cost. Table 4 presents actual and estimated costs on a monthly basis for the water industry for the year 2022.
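As a cross-check of the Lagrange-multiplier solution, the constrained problem can also be minimized numerically; the sketch below is illustrative only and uses placeholder parameter values (alpha, A, a1-a3, P0), not the estimates obtained for the water industry.

```python
# Illustrative numerical check (not the authors' computation): minimize the
# linear cost subject to the C-D production constraint with SciPy.
# All parameter values below are placeholders, not the estimates from the paper.
import numpy as np
from scipy.optimize import minimize

alpha = (5000.0, 150.0, 60.0, 40.0)        # fixed cost and unit prices of X, Y, Z (hypothetical)
A, a1, a2, a3 = 2500.0, 0.35, 0.10, 0.05   # hypothetical C-D parameters
P0 = 36000.0                               # required production level (liters)

cost = lambda v: alpha[0] + alpha[1]*v[0] + alpha[2]*v[1] + alpha[3]*v[2]
production = lambda v: A * v[0]**a1 * v[1]**a2 * v[2]**a3

res = minimize(cost, x0=[1000.0, 200.0, 100.0], method="SLSQP",
               bounds=[(1e-6, None)] * 3,
               constraints=[{"type": "eq", "fun": lambda v: production(v) - P0}])
print(res.x, cost(res.x))   # cost-minimizing inputs and the corresponding cost
```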
Results and comparative analysis
We have solved the C-D production function with two and three factors of production using the Lagrange multiplier method with the ordinary least squares method. Moreover, this is an optimal solution approach for the C-D production function with three factors of production using the Lagrange multiplier method with the ordinary least squares method. Despite the fact that the proposed approach is a different solution technique as compared to the existing solution techniques in the literature, we still compared the general features of the proposed methodology for the water industry with different factors to Nerlove's approach. In the following, we compared the presented solution approach with Nerlove's approach [22]; the comparison is summarized in Table 5. In addition, the paired t-test is used to determine if there is a significant difference between the means of two related data sets, as well as to find the mean square error from the findings of both the two- and three-factor C-D production functions. In the case of the findings of the two-factor C-D function, the outcomes of a paired t-test on the two data sets result in a P-value of 0.547 and degrees of freedom (df) of 11. The paired t-test compares the means of the two variables to determine if there is a significant difference between them. The null hypothesis is that there is no significant difference between the means of the two variables. Since the P-value is greater than the critical value of 0.05, we cannot reject the null hypothesis, meaning that there is no significant difference between the means of the two variables at the 5% level of significance.
In the case of the findings from the three-factor C-D function, a paired t-test results in a P-value of 0.559 and degrees of freedom (df) of 11. This also showed that there is no significant difference between the means of the two variables at the same 5% level of significance.
Based on the findings of the paired t-test, we conclude that there is no significant difference between the means of the two related variables from the findings of the two- and three-factor C-D production functions. This shows that the C-D production function plays a key role in the production problem when using the Lagrange multiplier method with the ordinary least squares method. Furthermore, we have worked on the cost comparison of two and three factors in the C-D production function. In the case of two factors of production, the cost value is 597107 per unit, while in the case of three factors of production, the cost value is 581014 per unit. In both cases, our calculated cost is less than the actual cost of the industry. Besides, the cost calculation for the three-factor C-D production function is less than that of the two-factor C-D production function. Clearly, we can see the differences in Tables 6-9.
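For readers who wish to repeat the significance test, the following sketch applies a paired t-test in Python to twelve monthly actual/estimated cost pairs; the numbers are placeholders, not the values from Tables 6-9.

```python
# Illustrative sketch of the paired t-test used above (data values are
# placeholders, not the industry figures from Tables 6-9).
import numpy as np
from scipy import stats

actual_cost    = np.array([612000, 598000, 605000, 590000, 601000, 615000,
                           620000, 608000, 595000, 603000, 610000, 618000], float)
estimated_cost = np.array([601000, 596000, 602000, 588000, 599000, 611000,
                           617000, 604000, 592000, 600000, 606000, 613000], float)

t_stat, p_value = stats.ttest_rel(actual_cost, estimated_cost)
print("t =", t_stat, "p =", p_value, "df =", len(actual_cost) - 1)
```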
Conclusions
In this paper, the optimal solution based on the C-D production function is carried out with different production factors. From these analyses, we concluded that the C-D production function plays a key role in the production problem when using the Lagrange multiplier method with the ordinary least squares method. Moreover, we solved the constrained optimization problem with a two-factor and a three-factor C-D production function using the Lagrange multiplier with the ordinary least squares method. In the case of two factors of production, the cost value is 597107 per unit, while in the case of three factors of production, the cost value is 581014 per unit. This showed that, with more production factors in the C-D production function, the cost value is minimized to a high extent. This validates that the C-D production function with more factors using the Lagrange multiplier is more effective than previous approaches in literature. Moreover, the presented solution methodology is compared to Nerlove's C-D production function. This means that the individual expression of each factor as an input has a key role in obtaining the best optimized results. Moreover, we optimized the overall cost of the water industry using the three-factor C-D production function as an application of C-D production using the Lagrange multiplier method with the ordinary least squares method.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Figure 1. Estimated cost versus actual cost of the industry using two inputs.
Figure 2. Estimated cost versus actual cost of the industry using three inputs (actual and estimated cost for the three-factor C-D production function).
Table 1. Inputs and output values of the water industry with two inputs.
Table 2. Actual and estimated cost values for the water industry using two inputs (comparison of actual cost with estimated cost).
Table 3. Inputs and output values of the water industry with three inputs.
Table 4. Actual and estimated cost values for the water industry using three inputs.
Table 5. Comparative analysis between Nerlove's C-D function and the C-D production function with the Lagrange multiplier method.
No. | Nerlove's C-D cost function | C-D production function
1 | Nerlove used the C-D cost model | We used the C-D production function
2 | Nerlove used the function to estimate the cost | We used the given function as an output
3 | The Nerlove approach needs much algebra for evaluation | It has a simple implementation
4 | Computational complexity is too high | Less computational complexity
Table 6. Data set for the two-factor C-D production function.
Table 8. Data set for the three-factor C-D production function.
Regression 1. Linear regression for two inputs. Regression 2. Linear regression for three inputs.
Eq (3.6) (fragment of the two-input Lagrangian): … + λ(36000 − 3212.468 X^0.3568 Y^0.0542). Taking partial derivatives of L with respect to X, Y and λ, respectively, and equating them to zero gives the stationarity conditions. | 2023-11-15T07:16:04.744Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "aaa27d76c7fa3c966698a19f32de45552b85b9c8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3934/math.20231532",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "eca63fb6afd42db6837c4a7d98d42e32f9c7c49f",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
14692693 | pes2o/s2orc | v3-fos-license | Comparison between Ischemic Stroke Patients <50 Years and ≥50 Years Admitted to a Single Centre: The Bergen Stroke Study
Introduction. Young adults are likely to differ from old patients concerning cerebral infarction. Methods. We compared characteristics of patients aged under and above 50 years, admitted to the Department of Neurology with cerebral infarction between 2006 and 2009, based on prospective registration. Investigation followed one common protocol for both groups. Results and Discussion. One hundred patients (8.2%) were <50 years old, and the proportion of males was higher in this group (72% versus 55.8%, P = .002). Young stroke patients are more often current smokers (44.1% versus 23.6%, P < .001). Common causes for stroke in the young were cervical artery dissection (18% versus 0.6%, P < .001) and cardiac embolism due to disorders other than atrial arrhythmias (18% versus 5.5%, P < .001). Among the old, atrial fibrillation and flutter dominated (29.1% versus 5%, P < .001). Stroke severity and location did not differ. Old patients more often suffered from pneumonia (10.6% versus 2%, P < .003) and urinary tract infection (14.6% versus 2%, P = .001). Conclusions. Males dominate, and current smoking is more common in the young. Cervical artery dissection and nonarrhythmic heart disorders are frequent causes among young patients, while traditional risk factors dominate the old. Stroke severity is similar, but old patients seem more exposed for infectious complications.
Introduction
Cerebral infarction may have serious consequences for patients in their prime of life and may influence choice of education, vocation, and family planning. More knowledge regarding pathophysiological mechanisms and prognosis is urgently needed. Several studies have shown that risk factors and etiology differ between young and old patients. Migraine is frequently reported among young adults [1][2][3][4][5] whereas traditional risk factors such as hypertension and dyslipidemia are usually less frequent. Large-artery atherosclerosis is rare [3,6] whereas cervical artery dissection is a common cause of cerebral infarction among young adults [2,4,6,7]. Cardioembolic stroke is in the majority of cases caused by cardiac conditions with low to uncertain embolic risk, such as patent foramen ovale and atrial septal aneurysm [4,8]. Methodological differences may obscure comparison between different centres. There have not been many comparisons between young and old patients treated and investigated in a single centre.
The aim of this study was to compare characteristics of cerebral infarction between young and old patients undergoing treatment and investigations according to one common protocol in a single centre. Cerebral infarction was defined according to the Baltimore-Washington Cooperative Young Stroke Study criteria, including transient ischemic attacks where CT or MRI showed infarctions related to the clinical findings [9]. The patients were dichotomized into two groups: <50 years (young patients) and ≥50 years (old patients). All patients had CT or MRI. Isolated acute ischemic lesions on CT or MRI were defined as lacunar infarctions (LI) if <1.5 cm and located subcortically or in the brainstem. All other acute ischemic lesions were defined as nonlacunar infarction (NLI). NLI comprised subcortical and brainstem infarction ≥1.5 cm, cortical infarction, mixed cortical and subcortical infarction, and cerebellar infarction. Leukoaraiosis was defined as the presence of hypodense periventricular abnormalities on MRI (T2).
Methods
The National Institute of Health Stroke Scale (NIHSS) was used to assess stroke severity. NIHSS measurements were performed on admittance and 7 days after stroke onset or earlier if the patient was discharged earlier (NIHSS7). Likewise, modified Rankin Scale (mRS) score and Barthel Index (BI) were obtained 7 days after stroke onset or earlier if the patient was discharged earlier. Blood pressure, body temperature, and serum glucose on admittance were registered. Diagnostic workup included ECG, Holter monitoring, echocardiography, and duplex sonography of neck vessels. Holter monitoring was performed among patients with embolic stroke and no known atrial fibrillation.
Risk factors including hypertension, smoking, diabetes mellitus, myocardial infarction, angina pectoris, peripheral artery disease, and atrial fibrillation were registered on admittance. Hypertension was defined as prior use of antihypertensive medication. Current smoking was defined as smoking at least one cigarette per day. Diabetes mellitus was considered present if the patient was on glucose-lowering diet or medication. Angina pectoris, myocardial infarction, and peripheral artery disease were considered present if diagnosed by a physician any time before stroke onset. Atrial fibrillation required ECG confirmation any time prior to stroke onset. A history of prior cerebral infarction was registered. Old infarctions on CT or MRI were registered, including both clinically silent and symptomatic infarctions. Etiology was determined by the Trial of Org 10172 in Acute Stroke Treatment classification (TOAST) [10], performed by a neurologist (HN). Clinical classification was based on the Oxfordshire Community Stroke Project (OCSP) scale which includes lacunar syndrome (LACS), partial anterior circulation syndrome (PACS), total anterior circulation syndrome (TACS), and posterior circulation syndrome (POCS) [11].
Complications including pneumonia, urinary tract infection, and seizures were registered.
Statistics.
Chi-square test, Fisher's exact test, and student's t-test were performed when appropriate. Logistic regression was performed to analyse the effect of the two age groups (young or old patients) on outcome day 7, adjusting for sex and NIHSS score on admission. mRS score 0-2 versus 3-6 was used as dependent variable. STATA 11.0 was used for analysis. (Table legend: Data are expressed as mean or n (%). NIHSS, The National Institute of Health Stroke Scale; LACS, lacunar stroke syndrome; TACS, total anterior circulation stroke syndrome; PACS, partial anterior circulation stroke syndrome; POCS, posterior circulation stroke syndrome; mRS, modified Rankin Scale.)
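The original analysis was performed in STATA 11.0; purely as an illustration of the model specification, the sketch below fits the same kind of logistic regression in Python on synthetic data (all variable values are simulated, not patient data).

```python
# Illustrative sketch only (the original analysis was done in STATA 11.0):
# logistic regression of good outcome (mRS 0-2) on age group, sex and
# admission NIHSS, using synthetic example data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
young = rng.integers(0, 2, n)            # 1 = <50 years, 0 = >=50 years
male = rng.integers(0, 2, n)             # 1 = male
nihss = rng.integers(0, 25, n)           # NIHSS score on admission

# Synthetic outcome: higher NIHSS lowers the chance of mRS 0-2
logit_p = 2.0 - 0.3 * nihss + 0.2 * young + 0.1 * male
good_outcome = rng.random(n) < 1 / (1 + np.exp(-logit_p))

X = sm.add_constant(np.column_stack([young, male, nihss]))
model = sm.Logit(good_outcome.astype(int), X).fit(disp=0)
print(np.exp(model.params))              # odds ratios for const, young, male, NIHSS
```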
The following risk factors were more frequent among old patients: myocardial infarction, angina pectoris, hypertension, atrial fibrillation, and prior cerebral infarction. Mechanical aortic valves and current smoking were more frequent among young patients (Table 1).
There was no difference concerning NIHSS score on admittance or OCSP classification. Systolic blood pressure was lower among young patients on admittance: 155 mmHg versus 168 mmHg (Table 2).
Outcome on day 7 (or on discharge if discharged earlier) was similar regarding mRS score and NIHSS score, whereas mean Barthel Index was higher among young patients: 86.9 versus 78.1. Figure 1 shows mRS scores according to age. The mortality rates did not differ significantly at day 7 or at discharge (P = .5). Logistic regression showed that mRS score 0-2 versus 3-6 was associated with NIHSS score on admittance (odds ratio (OR) 1.29 (95% confidence interval (CI) 1.25-1.34), P < .001), but not with sex (OR .76 (95%CI .57-1.01), P = .064) or young versus old patients (OR .69 (95%CI .40-1.20), P = .19). Subanalysis for patients >45 years and <45 years, traditionally regarded as "young" in stroke literature, did not change the results concerning stroke severity on admission (NIHSS): 6.9 in the young versus 6.2 in the old group, P = .6; neither was there a difference regarding short-term outcome at day 7: mRS 2.3 versus 2.3, P = .81.
Pneumonia and urinary tract infections were less frequent among young patients. Seizures were seen in about 4% in both groups (Table 2).
Cardiac embolism was found in 21% of the young patients versus 29.4% of the old patients and, among the young, most frequently included patent foramen ovale (in 2 cases combined with atrial septal aneurysm), mechanical heart valve and paroxysmal atrial fibrillation, or combinations of these conditions. Other causes were found in 23% of young patients versus 0.9% of the old patients, and cervical artery dissection was the most frequent one (18%). More rare conditions included pseudoaneurysm of the ICA, giant aneurysm of the MCA, prothrombotic disorders, and moyamoya. Large-artery atherosclerosis was less frequent among young patients: 3% versus 12.4% (Tables 2, 3, and 4).
The frequency of atrial fibrillation on ECG on admittance was low among young patients compared to old patients: 2.4% versus 17.0%. Likewise the frequency of atrial fibrillation disclosed on Holter monitoring was low among young patients: 1.8% versus 17.7% (Table 5).
Based on MRI findings, there were no differences concerning location of cerebral infarction. Fewer young patients showed leukoaraiosis (7.8% versus 50.4%) or had sequelae after old infarctions on MRI (10% versus 21.3%) (Table 6).
Discussion
The proportion of males was larger among the young patients than among the old patients. The proportion of males was also higher compared to other studies of cerebral infarction among young adults [7,12]. Accumulation of traditional risk factors probably starts earlier in males than in females. Women have a longer life expectancy, which may play a role for the relatively larger proportion of female stroke patients in the older group. On the other hand, it is possible that a change in risk factors or life style has reduced the frequency of stroke among young females in recent years.
Smoking has decreased among young women [13], and there has been a change regarding the use of oral contraceptives [14]. Another possible reason is better diagnostic methods for cerebral infarction because of high use of DWI. Psychogenic neurological symptoms are, for example, more frequent among females [15,16] and may sometimes be mistaken for stroke but are easily distinguishable by DWI. Other studies showed migraine as a cause of stroke in up to 20% in the early 1990s [17], while newer studies find this in only few patients [4,7,18-21]. Complex migraine might have been misdiagnosed as cerebral infarction in the pre-DWI era. It is unlikely that this mistake was made in this study because there was no difference regarding the frequency of migraine among young and old patients. The diagnosis of migraine was based on an interview by a neurologist during the hospital stay, strengthening our findings. Thus, our result indicates that migraine is not particularly related to cerebral infarction among young patients compared to old patients. Most traditional risk factors were less frequent among young patients. However, smoking was an exception. It has previously been shown that smoking is more frequent among young patients with cerebral infarction compared to matched controls [6]. In our study, the proportion of current smoking was clearly higher among the young compared to the old, and the proportion of past smoking was lower in the young patient group. The frequency of diabetes mellitus did not differ between young and old ischemic stroke patients. Large-artery atherosclerosis was a rare cause of cerebral infarction among the young patients. Its frequency was also lower than among young patients with cerebral infarction in previous studies [6,7]. This may indicate that symptomatic atherosclerosis has decreased among young people in recent years.
There was no difference concerning small vessel disease among young and old patients, and the frequency was similar to the findings in other studies of cerebral infarction among young adults [6,7]. This is perhaps surprising because there is much uncertainty regarding the pathophysiological mechanisms of lacunar infarctions [22][23][24].
The frequency of cardiac embolism was similar between young and old patients (Table 2), and the proportion of cardiac embolism in the young is in line with other findings [3,7,19,20,25]. However, the specific cardiac sources differed between young and old patients. Atrial fibrillation was the dominating cardiac source among old patients but infrequent among young adults. In young adults the dominating heart disorders were patent foramen ovale with and without atrial septal aneurysm, followed by mechanical heart valves. This matches with the findings in other studies [7,19], but mechanical heart valves were more frequently found as the cause of infarction in our study.
The proportion of other causes did not differ from most investigations [3,4,6,7,18,21,26]. Cervical artery dissection, at 18%, was the most common other cause among the young patients. Dissections were mostly located in a unilateral ICA, less frequently in a unilateral VA, and in a few cases in bilateral ICAs.
Neither was the proportion of patients with unknown etiology different from other studies, in which it is 31-62% among young patients [3,6,20,27] and 35% among stroke patients overall [26].
The distribution of infarctions in the anterior and posterior circulation was similar between young and old patients. The frequency of posterior circulation infarction was lower than in some other studies including young patients [7,12]. We believe that this reflects better diagnostic precision in this study because most patients underwent DWI. Frequent MRI may also explain that we found a higher frequency of leukoaraiosis in old patients compared to recent studies [7,12]. In our study, 7.8% among the young versus 50.4% among the old patients had leukoaraiosis. Old infarctions on MRI were found in 10% of the young patients versus 21.3% of the old ones. Multiple infarctions were common but less frequently seen in our study compared to recent publications [7,12], and there was no difference between young and old patients.
There was no difference with respect to severity of neurological deficits on admittance between young and old patients. There was also little difference in the one-week outcome or mortality at day 7. Only Barthel Index was significantly higher among young patients, whereas modified Rankin score and NIHSS score did not differ; neither was there any difference concerning the one-week improvement among young and old patients on multivariate analyses. This may indicate that young adults in our investigation do not tackle cerebral ischemia better than old patients concerning short-term outcome, which is in contrast to a recent observation made by a Swiss group [28]. Differences in methodology (e.g., stroke unit cohort versus population-based study) may account for this discrepancy. However, subanalyses suggested that patients >80 years may experience less improvement than patients <80 years (analysis not shown). This is one of the largest studies making a hospital-based direct comparison between ischemic stroke patients <50 years and ≥50 years admitted to a single centre, which we consider to be one of its strengths. All patients underwent investigations and treatment according to one common protocol. Another strength was the frequent use of MRI, which promotes high diagnostic precision. However, there are some limitations; using the Baltimore-Washington Cooperative Young Stroke Study Criteria may complicate comparison with other studies using other criteria such as the WHO criteria. However, specificity is high in our study due to the frequent use of MRI. As described in Section 2, certain risk factors were registered as present when diagnosed before stroke onset. We might have missed some patients with untreated hypertension, atrial fibrillation and diabetes here, especially in the young patient group. We did not register outcome at 3 months, which gives an incomplete impression of the patients' outcome in the different groups. Young patients may improve more in long-term outcome compared to old patients. Although investigations were thorough in most patients, not all patients underwent complete workup. We might have missed a few patients with, for example, atrial fibrillation or carotid stenosis due to that fact.
In conclusion, there are important differences between young and old patients with respect to risk factors, etiology, and distribution of gender. However, severity of stroke on admittance and short-term outcome is similar among young and old patients. | 2014-10-01T00:00:00.000Z | 2011-01-20T00:00:00.000 | {
"year": 2011,
"sha1": "b25111a674f10b4a00d5074586fe7a1345184095",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/srt/2011/183256.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc723f44d740a71dbf21177342b371e4c3ab9726",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263947052 | pes2o/s2orc | v3-fos-license | Comparison of heat-sensitive moxibustion versus fluticasone/salmeterol (seretide) combination in the treatment of chronic persistent asthma: design of a multicenter randomized controlled trial
Background Asthma is a major health problem and has significant mortality around the world. Although the symptoms can be controlled by drug treatment in most patients, effective low-risk, non-drug strategies could constitute a significant advance in asthma management. An increasing number of patients with asthma are attracted by acupuncture and moxibustion. Therefore, it is important that scientific evidence about the efficacy of this type of therapy is considered. Our past research suggested that heat-sensitive moxibustion might be effective in the treatment of asthma. Our objective is to investigate the effectiveness of heat-sensitive moxibustion compared with conventional drug treatment. Methods/Design This study is a multi-centre (12 centers in China), randomized, controlled trial with two parallel arms (A: heat-sensitive moxibustion; B: conventional drug). Group A selects heat-sensitive acupoints from the rectangular region bounded by the two outer lateral lines of the dorsal Bladder Meridian of Foot-Taiyang and the two horizontal lines through BL13 (Fei Shu) and BL17 (Ge Shu), together with the area 6 inches lateral to the first and second intercostal spaces of the anterior chest. Group B is treated with fluticasone/salmeterol (seretide). The outcome measures will be assessed over a 3-month period before each clinic visit at days 15, 30, 60, and 90. Follow-up visits will be at 3 and 6 months after the last treatment session. Adverse event information will be collected at each clinic visit. Discussion This trial will utilize high quality trial methodologies in accordance with CONSORT guidelines. It may provide evidence for the effectiveness of heat-sensitive moxibustion as a treatment for chronic moderate persistent asthma. Moreover, the result may support a new type of moxibustion to control asthma. Trial Registration The trial is registered at the Chinese Clinical Trials Registry: ChiCTR-TRC-09000599
Background
Asthma is a common chronic inflammatory disease of the airways characterized by variable and recurring symptoms, airflow obstruction, and bronchospasm [1]. It is also a complex disease involving many cells and mediators [2]. Asthma affects 300 million people worldwide [3], with an increasing prevalence in Western Europe (5%) and the USA (7%) in particular [4,5]. Despite the fact that there is still no cure for asthma, it has been established in a great number of small and large studies that many patients can reach good asthma control with controller treatment [6]. Generally speaking, medications used to treat asthma are divided into two general classes: quick-relief medications used to treat acute symptoms and long-term control medications used to prevent further exacerbation [7]. The therapeutic options available for patients with asthma depend on the severity of the condition. Although the symptoms can be controlled by drug treatment in most patients, effective low-risk, non-drug strategies could constitute a significant advance in asthma management [1][2][3]. Therefore, an increasing number of patients with asthma are attracted by complementary and alternative medicine (CAM) [8]. A survey showed that roughly 50% of asthma patients used some form of unconventional therapy [9].
Acupuncture has traditionally been used in asthma treatment in China and is increasingly applied for this purpose in Western countries. Moxibustion is a traditional Chinese method of acupuncture treatment, which utilizes the heat generated by burning moxa (mugwort) to stimulate the acupuncture points. The technique consists of lighting a moxa-stick and bringing it close to the skin until it produces hyperemia due to local vasodilatation. The intensity of moxibustion is just below the individual tolerability threshold. Moxibustion has anti-inflammatory or immunomodulatory effects against chronic inflammatory conditions in humans [10]. Moxibustion has diverse curative effects in the treatment of asthma, and its mechanisms may include improvement of lung function, antagonism of inflammatory mediators, modulation of immune function, regulation of cyclic nucleotide levels, and effects on the neuroendocrine network and inflammatory cells [11][12][13][14]. Therefore, these inflammatory substances may be reduced and weakened by moxibustion. Especially for chronic persistent asthma, moxibustion may achieve a better effect.
Although firm evidence has not been established, the results of some clinical trials suggest that acupuncture and moxibustion may be effective in the treatment of asthma [15][16][17][18]. However, these studies do not confirm the efficacy of acupuncture and moxibustion. This may be because all relevant RCTs were limited by methodological defects, including inappropriate sample size, variability of acupuncture and sham protocols, and missing information. Therefore, rigorous high-quality randomized controlled trials are needed.
Regarding moxibustion itself, the selection of the location at which the moxa is applied plays an important role in obtaining good effects [19]. The moxibustion point location may be connected with changes in the condition of the disease. Our team of experts was astonished to find that the main factor in selecting the location of acupuncture points is linked with the area affected by the disease, not only the standardized fixed position.
In the human body, acupuncture points can exist in two states: a sensitized or awake state and a resting state. When the human body suffers from disease, acupoints on the surface of the body become stimulated and sensitive to various stimulants, including heat. The specific areas stimulated by heat are then called "heat-sensitive points". One of the characteristics of these areas is that they are specific or closely related to acupuncture points and have the same clinical effect of "a small stimulation inducing a large response". The Inner Canon of Huangdi, or Yellow Emperor's Inner Canon, is an ancient Chinese medical text that has been treated as the fundamental doctrinal source for Chinese medicine for more than two millennia, up until today. According to its core viewpoint and theory, an acupoint is described and understood through its state, which is a certain area of the body surface in the course of disease. Among these changes, the sensitized status is the common one, meaning that acupoints on the body surface may be sensitized in the course of disease. Acupoint heat-sensitization is a type of acupoint sensitization. Our research found that the heat-sensitive phenomenon at a point or an area is a new type of reaction featured in a pathological state [20][21][22][23]. We have applied the acupoint heat-sensitization phenomenon and its rules over the past twenty years. Our team's experimental evidence indicates that the state of a point might change from the resting state to the heat-sensitized state while suffering from disease. Its characteristic is that these special acupoints may produce a heat response and, further, a warm sensation as a result of stimulation with moxibustion heat. If we can search out these heat-sensitized acupoints associated with the pathological state, a good effect will be achieved. Therefore, selecting the heat-sensitized acupoint may obtain a therapeutic effect far better than acupuncture and moxibustion at acupoints in the routine resting state. Hence, we defined the approach of treating various diseases through heat-sensitized acupoints as heat-sensitive moxibustion therapy. We carried out many clinical trials to test and verify the efficacy of heat-sensitized acupoints, for conditions such as myofascial pain syndrome [22], lumbar disc herniation [24], pressure sores [25] and knee osteoarthritis [26]. The results of these clinical trials mostly suggested a superior effect of heat-sensitized acupoints and encouraged us to proceed. Hence, we planned a rigorous multi-centre randomized controlled trial with a large sample size.
Method/design
Objective
The aim of this study is to investigate the effectiveness of heat-sensitive moxibustion compared with fluticasone/salmeterol (Seretide) in patients with chronic moderate persistent asthma in China.
Outcome measures
Primary outcome
At present, the goal of asthma care is to achieve and maintain control of the clinical manifestations of the disease [27,28]. Hence, we use the Asthma Control Test (ACT) to quickly assess asthma control; it is a simple 5-question tool that is completed by the patient and parents/caregivers and is recognized by the National Institutes of Health (Table 1) [29,30]. Patients write the number of each answer in the score box provided and then add up the score boxes for the total. ACT will be assessed before treatment and at days 15, 30, 60, and 90 of the treatment period. Follow-up visits will take place 3 and 6 months after the last treatment session.
Secondary outcomes
Measurement of lung function provides an assessment of the severity, reversibility, and variability of airflow limitation. Forced expiratory volume in 1 s (FEV1) and peak expiratory flow (PEF) will be used in this trial. Attack frequency will also be assessed. These outcomes will be assessed before treatment and at day 90 of the treatment period. Follow-up visits will take place 3 and 6 months after the last treatment session. Adverse event information will be collected at each clinic visit.
Design
A multi-centre, randomized, assessor-blinded, positive-controlled trial with two parallel arms (groups A and B) will be conducted at twelve centers in China (Table 2).
The study will be conducted sequentially as follows: a run-in period of one week prior to randomization, a treatment period of 90 days, and a follow-up period of six months. At the end of the run-in period, participants will be randomized to the heat-sensitive moxibustion group or the drug group by the central randomization system (Figure 1). This system is provided by the China Academy of Chinese Medical Sciences and adopts computer telephony integration (CTI) technology to integrate computer, internet, and telecom. The random number list will be assigned by interactive voice response (IVR) and interactive web response (IWR) [31]. The success of blinding will be assessed at each participant's last visit. An assessor who does not participate in the treatment and who is blinded to the allocation results will perform the outcome assessment.
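The central randomization system itself is not described beyond the IVR/IWR interface, so the following Python sketch is only an illustration of how a center-stratified, blocked allocation list of the kind such a system produces might be generated; the block size, seed, group labels, and center names are assumptions, not details taken from the trial.

import random

def blocked_allocation(n_per_center, centers, block_size=4, seed=2010):
    # Generate a center-stratified, randomly permuted blocked allocation list.
    # Each block holds an equal number of assignments to the two arms, so the
    # group sizes stay balanced within every center.
    rng = random.Random(seed)
    base_block = ["moxibustion", "drug"] * (block_size // 2)
    allocation = {}
    for center in centers:
        assignments = []
        while len(assignments) < n_per_center:
            block = base_block[:]
            rng.shuffle(block)
            assignments.extend(block)
        allocation[center] = assignments[:n_per_center]
    return allocation

# Example: 24 participants at each of two hypothetical centers.
lists = blocked_allocation(24, ["center_01", "center_02"])
print(lists["center_01"][:8])

Stratified, blocked lists of this kind keep the two arms numerically balanced within each center, which is what makes the center-stratified analyses described later meaningful.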
Inclusion criteria
According to the guideline of asthma treatment and prevention in China (GATPC) [32], patients are divided into three types: acute exacerbation, chronic persistent, and clinical remission. The severity grading of asthma in the GATPC is described in Table 3. Our trial will only enrol participants with chronic moderate persistent asthma (Figure 1).
Patients will be required to complete the baseline asthma diary. Written informed consent will be obtained from each participant. Participants 18~65 years of age will be recruited from outpatient and inpatient departments at the 12 centers. The diagnostic criteria are as follows: (1) common signs and symptoms of asthma, including recurrent wheezing, coughing, trouble breathing, and chest tightness; (2) symptoms that occur or worsen at night, or that are triggered by cold air, exercise, or exposure to allergens; (3) scattered or diffuse expiratory wheezing heard in the lungs during an attack; (4) relief or disappearance of the above symptoms after treatment; (5) exclusion of conditions other than asthma; (6) when typical clinical symptoms cannot be observed, confirmation by lung function tests, such as a positive challenge test, a positive bronchodilator test (an increase in FEV1 of ≥12% and ≥200 ml), or PEF variability ≥20% within one day/two weeks. Since the research involves moderate asthma (grade III), the inclusion criteria add the following condition: in addition to meeting the GATPC diagnostic standard above, heat-sensitivity must appear within the rectangular area bounded by the two outer lateral lines of the dorsal Bladder Meridian of Foot-Taiyang and the two horizontal lines through BL13 (Fei Shu) and BL17 (Ge Shu), or 6 inches lateral to the first and second rib gaps of the anterior chest. Participants will be instructed to stop asthma symptomatic relief medication during the run-in and treatment periods and will be provided the usual care instructions for asthma.
Exclusion criteria
Participants will be excluded if they have other diseases that also cause breathlessness or dyspnea, such as bronchiectasis, cor pulmonale, pulmonary fibrosis, tuberculosis, pulmonary abscess, or chronic obstructive pulmonary disease. Female participants who are pregnant or lactating will not be eligible. The following conditions are also exclusion criteria: serious life-threatening comorbidities, such as cardiovascular, cerebrovascular, hepatic, renal, or hematopoietic system disease, or psychiatric illness; hormone-dependent patients, or those who have used adrenal corticosteroids (intravenous, intramuscular, subcutaneous, or oral) within 4 weeks before recruitment.
Treatment protocol
Heat-sensitive moxibustion
Moxibustion will be performed by certified acupuncture medical doctors at the 12 centers. Qualified specialists in acupuncture in traditional Chinese medicine with at least five years of clinical experience will perform the treatment in this study. All treatment regimens will be standardized across the practitioners of the 12 centers via video, hands-on training, and internet workshops. Participants will be randomly assigned to the heat-sensitive moxibustion group or the drug group. In the former group, moxa sticks measuring 22 mm (diameter) × 160 mm (length) (Jiangxi Hospital of Traditional Chinese Medicine, China) will be used. The patient lies in a comfortable position for treatment, wearing loose clothes, with a room temperature of 24°C~30°C. For the heat-sensitive moxibustion group, the moxa sticks are lit by the therapist and held over the rectangular area bounded by the two outer lateral lines of the dorsal Bladder Meridian of Foot-Taiyang and the two horizontal lines through BL13 (Fei Shu) and BL17 (Ge Shu), and 6 inches lateral to the first and second rib gaps of the anterior chest. Warming suspended moxibustion at a distance of about 3 cm above the skin is used to search for the acupoint heat-sensitization phenomenon. The following patient sensations suggest a special heat-sensitization acupoint: diathermanous sensation due to moxa heat, defined as the heat sensation conducting from the local skin surface under the Moxa into deep tissue, or even into the thoracic cavity; expanding heat sensation due to moxa heat, defined as the heat sensation spreading gradually around the moxa point; and transferred heat sensation due to moxa heat, defined as the heat sensation transferring along some pathway, even to the arms. The therapists mark such points as heat-sensitive acupoints. We try to identify all the special acupoints in each patient by repeated manipulation. The therapists begin treating patients from the acupoint with the most intense heat-sensitivity. A treatment session ends when the patient feels that the acupoint heat-sensitization phenomenon has disappeared; in general this takes 30~60 minutes. In the first month, patients receive the treatment once a day for the first eight days and 12 treatments over the next twenty-two days. Treatments will be given 15 times a month for the remaining two months.
Figure 1. The flow diagram depicts the passage of participants through this RCT.
Drug group
In recent years, considerable insight has been gained into the optimal management of adult asthma. Patients with persistent asthma are usually not well controlled without inhaled corticosteroids (ICS), and adding a long-acting beta-agonist (LABA) to ICS improves control [33]. The combination of an ICS and a LABA is preferred in these patients and is better than doubling or even quadrupling the dose of ICS for achieving better asthma control and reducing exacerbation risk [34,35]. Two such combinations, salmeterol xinafoate plus fluticasone propionate (SFC, Seretide™) and formoterol plus budesonide (FBC, Symbicort™), are widely used and have been shown to be effective in controlling asthma of varying severity in adults and children [36][37][38].
Therefore, we selected the Seretide Accuhaler, containing the two medicines fluticasone propionate and salmeterol xinafoate, which is a mainstay of current asthma treatment. This drug is recommended by GINA as a regular treatment. Our trial uses salmeterol/fluticasone 50 μg/250 μg twice a day; patients receive the treatment as a total of 180 doses over 90 days.
Statistical analysis plan
We will conduct the analysis on an intention-to-treat basis, including all randomized participants with at least one measurable outcome report. Analyses will be conducted using 2-sided significance tests at the 5% significance level. The Cochran-Mantel-Haenszel procedure will be used to assess the center effect. The statistician conducting the analyses will remain blinded to treatment group, and the data will only be unblinded once all data summaries and analyses are completed. All analyses will be conducted in the SAS statistical package (ver. 9.1.3).
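The plan names the Cochran-Mantel-Haenszel procedure without giving details, and the actual analysis will be run in SAS; purely as an illustration, the Python sketch below computes the CMH statistic for a binary "controlled / not controlled" outcome (for example, dichotomizing ACT at ≥ 20, a cut-off the protocol does not itself specify) stratified by center, using made-up counts.

from scipy.stats import chi2

def cmh_test(tables):
    # Cochran-Mantel-Haenszel chi-square (1 df, continuity corrected) for a
    # list of 2x2 tables [[a, b], [c, d]], one table per stratum (center).
    a_sum = e_sum = v_sum = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        row1, row2 = a + b, c + d
        col1, col2 = a + c, b + d
        a_sum += a
        e_sum += row1 * col1 / n                                  # expected value of cell a
        v_sum += row1 * row2 * col1 * col2 / (n ** 2 * (n - 1))   # variance of cell a
    stat = (abs(a_sum - e_sum) - 0.5) ** 2 / v_sum
    return stat, chi2.sf(stat, df=1)

# Hypothetical per-center tables: rows = (moxibustion, drug),
# columns = (controlled, not controlled).
tables = [
    [[18, 6], [15, 9]],   # center 1
    [[20, 4], [17, 7]],   # center 2
    [[16, 8], [14, 10]],  # center 3
]
stat, p = cmh_test(tables)
print(f"CMH chi-square = {stat:.3f}, p = {p:.3f}")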
Baseline data
Baseline characteristics will be shown as mean ± standard deviation (SD) for continuous data, including age, previous duration of disease, and so on. For participants' gender, the n (%) of males and females in each group will be shown as a baseline characteristic. We will conduct between-group comparisons at baseline using the two-sample t-test or Wilcoxon rank-sum test for continuous data and the Chi-square test or Fisher's exact test for gender composition, considering p < 0.05 as statistically significant.
If any imbalances in baseline characteristics between groups are encountered, we will conduct an ANCOVA (analysis of covariance) using the imbalanced variables as covariates and the allocated group as a fixed factor.
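The baseline comparisons above are specified to be done in SAS; the SciPy sketch below merely illustrates the same tests (two-sample t-test or Wilcoxon rank-sum for continuous variables, chi-square for gender composition) on simulated placeholder data — the variable names, counts, and values are all hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical baseline age (years) in each arm, 144 participants per group.
age_moxa = rng.normal(45, 12, size=144)
age_drug = rng.normal(46, 12, size=144)

# Roughly normal covariate -> two-sample t-test; a skewed covariate would use
# the Wilcoxon rank-sum (Mann-Whitney U) test instead.
t_stat, t_p = stats.ttest_ind(age_moxa, age_drug)
u_stat, u_p = stats.mannwhitneyu(age_moxa, age_drug)

# Gender composition: rows = arm, columns = (male, female); chi-square test,
# with Fisher's exact test as the fallback for small expected counts.
sex_table = np.array([[70, 74],
                      [66, 78]])
chi2_stat, chi2_p, dof, expected = stats.chi2_contingency(sex_table)

print(f"age: t-test p = {t_p:.3f}, rank-sum p = {u_p:.3f}; gender: chi-square p = {chi2_p:.3f}")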
Outcome data
Primary and secondary outcome measures will be summarized descriptively (mean, SD, median, minimum, and maximum) at each time point by treatment group. The t-test, Mann-Whitney U test, and Wilcoxon test will be used for comparison of variables, as appropriate.
All adverse events reported during the study will be included in the case report forms; the incidence of adverse events will be calculated, and the percentage of subjects with adverse events in each group will be reported.
Dropped or missing data
Reasons for dropped or missing data will be explored descriptively. Missing data will be replaced according to the principle of last observation carried forward.
Follow-up data
The primary outcome will be the ACT score. The primary analysis will compare the two groups at 3 months. A secondary analysis will compare the two groups at 6 and 9 months to assess whether any differences between groups have been maintained over time.
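Last observation carried forward is named above only as a principle; the pandas sketch below shows one way it could be applied to the repeated ACT measurements, assuming a long-format table with one row per participant per visit. The column names and scores are illustrative, not from the trial database.

import pandas as pd

# Hypothetical long-format ACT data: one row per participant per visit day;
# None marks a missed assessment.
act = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "day":         [15, 30, 60, 90, 15, 30, 60, 90],
    "act_score":   [16, 18, None, None, 17, 19, 21, None],
})

# Last observation carried forward: within each participant, order the visits
# and fill every missing score with the most recent observed value.
act = act.sort_values(["participant", "day"])
act["act_locf"] = act.groupby("participant")["act_score"].ffill()

print(act)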
Loss to follow-up is likely to lead to biased estimates of the intervention effect. We will try to avoid attrition bias by carefully following up the participants in both groups. We will phone participants who fail to complete questionnaires after a second reminder. We anticipate a 20% loss to follow-up in this trial and will implement procedures to minimize loss to follow-up and patient withdrawal; where possible, we will collect information on the reasons for patient withdrawal.
Data integrity
The integrity of trial data will be monitored by regularly scrutinizing data sheets for omissions and errors. Data will be double entered and the source of any inconsistencies will be explored and resolved.
Sample size
We estimated the sample size for a non-inferiority comparison between the heat-sensitive moxibustion and drug groups. Sample size depends on the level of confidence chosen, the risk of type II error (or desired power), and δ. The non-inferiority margin δ can be specified as a difference in means or proportions and is often chosen as the smallest value that would be a clinically important effect. To determine δ, we previously carried out a small pilot study in which the primary endpoint was the ACT. The results showed that the difference in means between the two groups was approximately 0.45. The choice of δ = 0.15 (30% of Δ) appeared to be reasonable based on clinical relevance and statistical judgment.
Applying a two-sided 5% significance level with δ = 0.15, α = 0.05, and β = 0.2 (80% power), the calculated required sample size is approximately 120 participants in each group, according to the following equation. Allowing for a 20% loss to follow-up, a total of 144 participants will be required in each group, giving 288 participants in total.
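The equation referred to above is not reproduced in the text. A commonly used per-group formula for a non-inferiority comparison of two means, which is consistent with the quoted parameters but is our assumption rather than a formula confirmed by the authors, is

n = \frac{2\sigma^{2}\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}}{\delta^{2}}

where σ is the common standard deviation of the ACT score (not reported here), z_{1-α/2} ≈ 1.96 and z_{1-β} ≈ 0.84 for α = 0.05 (two-sided) and 80% power, and δ = 0.15 is the non-inferiority margin. The stated 120 participants per group then follows for a particular assumed σ, and inflating 120 by the anticipated 20% loss to follow-up gives 120 × 1.2 = 144 per group, i.e. 288 participants in total.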
Adverse events
We define adverse events as unfavorable or unintended signs, symptoms, or diseases occurring after treatment that are not necessarily related to the moxibustion intervention. At every visit, adverse events will be reported by participants and examined by the practitioner.
Ethics
Written consent will be obtained from each participant. This study was approved by all relevant local ethics review boards. The Ethics Committee of the Affiliated Hospital of Jiangxi Institute of Traditional Chinese Medicine approved this trial; the code issued by the ethics committee is 2008(13).
Discussion
To our knowledge, the goal of asthma care is to achieve and maintain control of the clinical manifestations of the disease for prolonged periods. When asthma is controlled, patients can prevent most attacks, avoid troublesome symptoms day and night, and stay physically active [1]. Acupuncture therapy is often perceived by patients with chronic asthma as an effective option to control asthma. The use of acupuncture in asthma patients is increasing as an adjunct to, and also as a substitute for, effective and proven therapies [39]. In China, moxibustion is considered an ancient treatment to prevent and control asthma and is still widely used today. A number of clinical trials have suggested that moxibustion, as a traditional acupuncture therapy, may be effective in the treatment of asthma. However, the methodological problems of the published trials undermine confidence in moxibustion. Therefore, we designed this rigorous clinical trial in accordance with the CONSORT statement and guidelines to guarantee high internal validity of the results. At present, various conventional medications are used to slow down and control the disease. An ICS/LABA combination in a single inhaler represents a safe, effective, and convenient treatment option and is recommended by GINA. We therefore selected fluticasone/salmeterol (Seretide) as the control treatment in the protocol. The aim of this trial is, in effect, to identify an effective CAM treatment that controls asthma as well as a conventional drug. The attractive features of moxibustion are its low cost, few adverse events, and low risk.
According to the current theory of traditional Chinese medicine, moxibustion produces, through the burning of Moxa, radiant heat and drug effects at acupoints. This treatment penetrates deeply into the body, restoring the balance and flow of vital energy or life force through the acupoints. The selection of the location at which Moxa is applied therefore plays an important role in obtaining good effects. Generally speaking, the locations of acupoints are fixed along meridians. Conventional moxibustion is considered to improve general health and treat diseases by stimulating these fixed acupoints, and doctors regard hyperemia due to local skin vasodilatation as the indicator of moxibustion's effect. However, our past clinical experience and observation suggested that these fixed acupoints might not be the best treatment sites for moxibustion. Acupoints can exist in two states, the stimulated (awake) state and the resting state. When the human body suffers from disease, the acupuncture points on the body surface become sensitized by various stimuli, including heat. Acupoint heat-sensitization is a type of acupoint sensitization. An acupoint is more than a fixed skin site; it is an external sensitive point reflecting the disease. Therefore, the acupoint is variable and depends on the pathological state. Traditional fixed acupoints are regarded as indicators for searching for the specific sensitive acupoint. That is, traditional fixed acupoints do not take the state as the key factor in locating the acupoint, so the process of fixing the position is imprecise.
When a lit moxa stick is held over the heat-sensitive acupoints, patients experience heat-sensitization phenomena. The following patient sensations suggest a special heat-sensitization acupoint: diathermanous sensation due to moxa heat, defined as the heat sensation conducting from the local skin surface under the Moxa into deep tissue, or even into the thoracic cavity; expanding heat sensation due to moxa heat, defined as the heat sensation spreading gradually around the moxa point; and transferred heat sensation due to moxa heat, defined as the heat sensation transferring along some pathway, even to the arms.
Acupuncture and moxibustion originated in China several thousand years ago. The ancient Chinese medical classic Huáng Dì Nèi Jīng, translated as 'The Yellow Emperor's Inner Classic', has been treated as the fundamental doctrinal source for Chinese medicine for more than two thousand years. In the chapter 'jiu zhen shi er yuan' (translated as 'Nine Needles and Twelve Yuan-Primary Acupoints'), it says: "the so-called joints are the places where Shen-qi flows in and out, not just referring to skin, muscles, sinews and bones." This explains that the acupuncture points are not located according to the flesh and bones, which by definition have a fixed location, but are alive and have a dynamic state due to the activity of "shen-qi". The Huáng Dì Nèi Jīng Ling Shu (黄帝内经灵枢), in the same chapter, says: "so the disease of the Five Zang-organs can be treated by needling the twelve Yuan-Primary acupoints. The twelve Yuan-Primary acupoints show how the five Zang-Organs receive the nutrients of food and water and how Essence-Qi is infused into the three hundred and sixty-five joints. That is why the diseases of the Five Zang-Organs are manifested over the twelve Yuan-Primary acupoints, which show certain manifestations. Awareness of the twelve Yuan-Primary acupoints and observation of their manifestations allow one to know the pathological changes of the Five Zang-Organs." We can learn an important fact from this section of the classical text: it clearly states that acupuncture points reflect the pathological state of internal diseases and can be stimulated in treatment. Physically, people are not always aware of the existence of the acupuncture points. In contrast, patients can usually feel some changes in the area of an acupuncture point when affected by disease. Through the observation of these changes, the ancient doctors located the acupuncture points. In the chapter 'back-shu acupoints', it states: "The Feishu (BL13) acupoint is located beside the third thoracic vertebra. The Xinshu (BL15) acupoint is located below and lateral to the fifth thoracic vertebra. The Geshu (BL17) acupoint is located below and lateral to the seventh thoracic vertebra. The Ganshu (BL18) acupoint is located below and lateral to the ninth thoracic vertebra. The Pishu (BL20) acupoint is located below and lateral to the eleventh thoracic vertebra. The Shenshu (BL23) acupoint is located below and lateral to the fourteenth thoracic vertebra. These acupoints are all located beside the spinal column and 3 cun away from the spinal column. The method to locate these acupoints is to press the regions. When pressed, the patient will feel aching and distending or feel that the original pain is relieved." This section of the classical text illustrates that the back-shu points are found by locating sensitive areas on the skin. In the chapter 'five xie' (translated as 'five kinds of pathogenic factor'), it says: "cough involving the shoulder and back; to treat such a disease, acupoints located on the lateral side of the chest and lateral to the third thoracic vertebra can be needled.
Before applying acupuncture, the doctor may use his fingers to quickly press the concerned region; the place where the patient feels comfortable when pressed is the acupoint and should be needled". From this section, we can conclude that the sensitivity of the points is the key factor in locating the position of the acupuncture points.
Among the changes, sensitization is the common one: acupoints on the body surface may become sensitized in various ways. A sensitized acupoint is not only a pathological phenomenon reflecting the disease but also the stimulation location for acupuncture and moxibustion. Acupoint heat-sensitization is a type of acupoint sensitization, a concept derived from our clinical experience and research over the past twenty years. This special acupoint accords with the classical thought and theory of the Inner Canon of Huangdi.
Our empirical evidence led us to formulate the following hypothesis: selecting the heat-sensitized acupoint may obtain a therapeutic effect in asthma. The main aim of this trial is to test and verify this hypothesis. If we can confirm it, the results of our trial will help supply evidence for a better and safer approach to controlling asthma. | 2016-05-04T20:20:58.661Z | 2010-12-15T00:00:00.000 | {
"year": 2010,
"sha1": "2a8eed88c073f0a94d3addc50519085bbf3257a2",
"oa_license": "CCBY",
"oa_url": "https://trialsjournal.biomedcentral.com/counter/pdf/10.1186/1745-6215-11-121",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "881797935127abce4db2f6cdfcd2526acbcb567e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259358541 | pes2o/s2orc | v3-fos-license | Fatal case of subdural empyema caused by Campylobacter rectus and Slackia exigua
ABSTRACT We report a fatal subdural empyema caused by Campylobacter rectus in a 66-year-old female who developed acute onset of confusion, dysarthria, and paresis of her left extremities. A CT scan showed a crescentic hypodensity with a mild mid-line shift. She had a bruise on her forehead from a fall several days before admission, which initially suggested a diagnosis of subdural hematoma (SDH), and a burr hole procedure was planned. However, her condition deteriorated on the night of admission, and she died before dawn. An autopsy revealed subdural empyema (SDE) caused by Campylobacter rectus and Slackia exigua. Both are oral microorganisms that rarely cause extra-oral infection. In our case, head trauma caused a skull fracture, and a sinus infection might have extended into the subdural space, causing SDE. The CT/MRI findings were not typical of either SDH or SDE. Early recognition of subdural empyema and prompt initiation of treatment with antibiotics and surgical drainage are essential in cases of SDE. We present our case together with a review of four reported cases.
INTRODUCTION
Subdural empyema (SDE) is an infection between the dura and the arachnoid membranes. 1-3 SDE commonly affects young and middle-aged people with a male predilection. 1,[3][4][5][6] Regarding its etiology, meningitis is the most common cause of SDE in infants. The sources of SDE are otitis media and sinus infection in older children and adults. 1,7 Without immediate and appropriate management, SDE can be fatal, and the mortality rate has been reported to be between 6% and 35%. 8 Moreover, SDE causes long-term complications such as hydrocephalus, residual hemiparesis, and epilepsy in more than half of the cases. 9 Computed tomography (CT) and magnetic resonance imaging (MRI) have been the gold-standard methods for the diagnosis of SDE. 10 However, prompt and accurate diagnosis of SDE is often tricky since its imaging results may resemble subdural hematoma (SDH). 1,4,6,7 In this report, we present a fatal case of Campylobacter rectus-induced subdural empyema mimicking SDH that showed atypical characteristics on imaging tests confirmed by autopsy.
CASE REPORT
A 66-year-old female developed acute onset of confusion, dysarthria, and paresis in her left extremities during Axitinib treatment started 1 month earlier as a fourth-line treatment for metastatic renal cell carcinoma. The next day, she visited the urology doctor in charge and was referred to our neurosurgery department. Her vital signs were not remarkable. Glasgow Coma Scale was E2V5M6. Neurological examination on admission revealed dysarthria, leftsided sensory disturbance, and motor hemiparesis. Manual muscle test grades of both upper and lower left extremities were 3 out of 0 to 5, while the right limbs scored 5.
Laboratory data showed an increased white blood cell (WBC) count of 39,000 /µL (normal range 3,300-8,600 /µL) and significantly elevated C-reactive protein (CRP) of 18.88 mg/dL (normal range 0-0.14 mg/dL). The levels of CRP remained high at nearly 10 mg/dL during the clinical course of refractory renal cell carcinoma. In contrast, the levels of WBC had been within the normal range until the previous visit, which was 17 days before admission. The remaining laboratory exams showed hypoalbuminemia, hypokalemia, and mild increases in the creatinine level.
A non-contrast brain CT scan revealed a subdural hypodense collection in a crescentic formation along the right convexity with a mild mid-line shift toward the left side (Figure 1A). A swollen subcutaneous mass with a hyperdense core accompanied the lesion on the right frontal head. Moreover, mucosal thickening and fluid accumulation were observed in the frontal sinuses. MRI showed that the crescentic region had low signal intensity surrounded by a peripheral hyperintense rim on fluid-attenuated inversion recovery (FLAIR) images (Figure 1B). Diffusion-weighted imaging (DWI) showed an internal hypointense cavity with surrounding hyperintense signals similar to the FLAIR images (Figure 1C). On the corresponding apparent diffusion coefficient (ADC) map, the central area of the crescentic collection had a high value, in contrast to the low value of its peripheral region (Figure 1D). The swollen subcutaneous lesion on the right frontal head had hypo- and hyperintense signals on T1- and T2-weighted images, respectively. The right frontal bone and the frontal lobe just below the swollen mass also showed hyperintensity on DWI with a low ADC value (Figure 1C, D).
Detailed history taken from a family member revealed an accidental fall with head trauma several days before admission. The fall was thought to have been due to left-sided hemiparesis caused by a chronic subdural hematoma. Chronic subdural hematoma with traumatic subcutaneous hemorrhage was therefore suspected, she was admitted to our neurosurgery ward, and burr hole drainage was planned.
However, the patient's condition rapidly deteriorated into pulseless electrical activity (PEA) during the night, and she died despite prolonged resuscitation efforts. A CT scan performed after resuscitation showed no notable change, with no evidence of massive bleeding or brain stem herniation. An autopsy was performed to determine the cause of her death.
AUTOPSY FINDINGS
A bruise on her right forehead, overlying a subcutaneous hemorrhage, and a tiny fracture of the frontal skull were seen (Figure 2A). On opening the skull, the crescentic collection was found to consist of a copious amount of purulent secretion. Gross examination of the brain revealed a right-sided abscess within the subdural and subarachnoid spaces without evidence of bleeding (Figure 2B).
On microscopy, the abscess partly invaded the brain parenchyma, which was surrounded by mild cortical microvacuolation exhibiting edema ( Figure 3A). Neutrophils with numerous gram-negative rods and concomitant Gram-positive coccobacillus were observed in the purulent material ( Figure 3B).
Evidence of severe inflammation underpinned by massive neutrophil invasion was seen in other organs, including the liver and spleen. In addition, the patient's cerebrospinal fluid (CSF) culture bottles yielded Campylobacter rectus and Slackia exigua. Thus, we concluded that the patient's death had been caused by sepsis secondary to meningoencephalitis derived from subdural empyema. Regarding the cause of the subdural empyema, Campylobacter rectus and Slackia exigua were the most likely causative microorganisms based on the autopsy findings.
Figure 1. CT (A) and MRI (B-D). A - A CT scan showed a hypodense crescentic collection in the subdural space (arrow) with a mild mid-line shift. A swollen mass was also seen on the right frontal head; B - A FLAIR image showed low signal intensity in a crescentic formation surrounded by a peripheral hyperintense rim; C - DWI showed that the crescentic cavity consisted of internal hypointensity and linear hyperintensity of the dura mater extended to the right frontal contusion site. The subcutaneous mass on the right frontal head, an adjacent frontal bone, and lobe also showed a restrictive pattern, most likely indicating a subdural hematoma accompanied by bone and brain contusions; D - The ADC value of the central area of the crescentic collection was high, whereas its peripheral region showed a low ADC value.
DISCUSSION
Herein, we describe septic meningoencephalitis secondary to a Campylobacter rectus and Slackia exigua subdural empyema. The most common etiological microorganisms of SDE are anaerobes, aerobic streptococci, staphylococci, Haemophilus influenzae, Streptococcus pneumoniae, and other gram-negative bacilli. 13 In cases of subdural empyema secondary to paranasal sinusitis, anaerobic and microaerophilic streptococci such as Streptococcus milleri and Streptococcus anginosus are the most commonly reported microorganisms. 1,9,13,14 Campylobacter rectus, detected in our case's CSF culture, is a rare SDE pathogen. C. rectus, previously known as Wolinella recta, is an anaerobic gram-negative rod that forms part of the normal oral subgingival flora. The association between periodontal disease and this Campylobacter species is well known. 15 In contrast, extraoral infection by C. rectus has been reported in only a few cases in the literature. In our literature search, only four cases of subdural empyema due to C. rectus were found (Table 1). [15][16][17][18]
Figure 2. Macroscopic findings of the skull and brain. A - Hemorrhage under the bruise was evident on the reversed skin, and a small bone fracture was found on the surface of the skull (arrow); B - Cerebral sulci were obscured on the right frontoparietal lobe by the subarachnoid pus collection, which partly spread to the left convexity. An extra-axial meningioma measuring 45 mm in diameter occupied the right frontal region and had been followed periodically without surgical intervention (scale bar = 5 cm).
Three of these 4 cases had a dental abscess and/or sinusitis. Our case had sinusitis, which had been noted on an MRI study conducted 10 months before hospitalization. The sinusitis was not treated and had expanded from the maxillary sinus to the frontal sinus. Increased WBC and CRP were thought to indicate a bacterial infection in those case reports. In our case, the high levels of CRP for several months before the onset of the neurologic symptoms were attributed to advanced-stage renal cell carcinoma. The other microorganism, Slackia exigua, is a Gram-positive, obligately anaerobic coccobacillus that is associated with dental infection but rarely causes extraoral disease. 19 To the best of our knowledge, there have been no reports of SDE caused by Slackia exigua.
Another unique point of our case is its MRI presentation. MRI is superior to CT in demonstrating extra-axial fluid and rim enhancement, and DWI is particularly helpful in differentiating SDE and SDH. 1,10 SDE is characterized by hyperintensity on DWI with a low apparent diffusion coefficient (ADC) value, indicating restricted diffusion 1,9,10 . This restriction is thought to be partially due to the viscosity of the empyema fluid 1,10 . In our case, however, there were uncommon findings of hypointense signals encompassed by a restrictive capsule on DWI, with a reversed pattern on the ADC map. These atypical findings might be explained by a low viscosity of the empyema fluid, which would not restrict diffusion, since the infection developed considerably rapidly.
Management of SDE mainly includes early initiation of antibiotic therapy and surgical procedures, though SDE is occasionally cured with antibiotic treatment alone. Early surgical intervention by burr hole drainage or craniotomy evacuation is nevertheless key to diagnosing and treating SDE, and administration of adequate antibiotics might result in timely recovery and salvage of maximal neurological function. 7,9,[15][16][17][18] | 2023-07-07T22:16:37.630Z | 2023-05-24T00:00:00.000 | {
"year": 2023,
"sha1": "a2197da73dcc1dc728184231b2a7c257b2933397",
"oa_license": "CCBY",
"oa_url": "https://autopsyandcasereports.org/article/10.4322/acr.2023.433/pdf/autopsy-13-e2023433.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "598c823ade31ed11972bc338ac9183fbc0b945de",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |